Susan White and Paul Cottle
In 2007, the authors of Rising Above the Gathering Storm [1, subsequently referred to as “RAGS”] were “deeply concerned that the scientific and technological building blocks critical to our economic leadership are eroding at a time when many other nations are gathering strength.” (p. 3) The highest priority actions recommended by this report focused on science, engineering, and math teaching in K-12 programs.
The Biotechnology Industry Organization (BIO) released a report in 2009, “Taking the Pulse of Bioscience Education in America: A State-by-State Analysis”. The report referred to RAGS explicitly and provides a framework for assessing bioscience education in the US at the state level. Not surprisingly, it focused mostly on K-12 biology courses and student achievement in that subject. States were sorted into four categories (“leaders of the pack,” “second tier,” “middling performance,” and “lagging performance”) according to their ratings on a list of indicators, including the average scale score of 8th graders on the National Assessment of Educational Progress (NAEP), the average scores on the science section of the ACT (and specifically on the biology items on the test), the pass rate on the Advanced Placement Biology exam, and the percentage of biology teachers certified in biology. Florida was included in the “lagging performance” group, prompting an official of the state’s Department of Education to tell a gathering of school district officials that the state’s students are “pretty much last in the nation in science.”
The BIO report focused on biology. We propose a Science and Engineering Readiness Index (SERI) as an analogous tool for policy-makers and educators to examine progress in K-12 physical science and engineering education, where the foundation laid in K-12 schools is crucial to preparing students for careers in science and engineering.
Economics, Physical Sciences, Math, and Engineering
The authors of RAGS were concerned with the erosion of the scientific and technical building blocks of our economic leadership. This impact is felt not only at the macroeconomic level, but at the microeconomic level as well. The American Institute of Physics recently demonstrated that the best economic opportunities for new bachelor’s degree graduates are in the mathematical and physical sciences and engineering.
Research shows that students’ success in the demanding undergraduate programs leading to these degrees depends not only on the work of college faculty in these programs, but also on how well students are prepared during their K-12 years. The 2007 study by Tyson et al. that followed Florida high school graduates through their postsecondary years demonstrated strong correlations between high school preparation and bachelor’s degree attainment, particularly in STEM (science, technology, engineering and mathematics) fields. In particular, Tyson et al. demonstrated the importance of high school physics and advanced mathematics courses (including calculus) in the preparation of successful STEM students. Sadler and Tai demonstrated that taking a high school physics course is correlated with success in physics at the college level (with corresponding results for biology and chemistry), while taking calculus in high school is correlated with success in all college science courses.
Given the importance of high school physics and calculus in preparing students for the engineering and physical science fields that offer the best economic prospects for bachelor’s degree graduates, it makes sense to offer policy makers and the public a succinctly-stated measurement of how well their states are doing in preparing K-12 students for these fields. Here we propose a Science and Engineering Readiness Index (SERI) that incorporates results from the National Assessment of Educational Progress [7,8] (“NAEP”, conducted periodically by the US Department of Education), Advanced Placement Examination results in calculus and physics, the physics course-taking results from the American Institute of Physics National Survey of High School Physics Teachers, and information on teacher certification requirements in science compiled by the National Council on Teacher Quality (NCTQ). The information from these sources is gathered into three scores: mathematics performance, science performance, and teacher qualifications. These scores are then used to assign each state a single composite score. The formulation of this index provides an opportunity for examining the strengths and weaknesses of each state’s K-12 mathematics and science programs.
All of the information used in formulating SERI is available to the public on the Internet. While other indicators would have been desirable – the percentage of graduating high school seniors who have taken a quantitative physics class, for example – we focused on indicators that are readily available.
Two indicators were used to obtain the SERI math score. The first was the percentage of students who earned achievement levels of proficient or higher on the 2009 8th grade NAEP Math Assessment. NAEP sorts students into four “achievement levels” – advanced, proficient, basic and “below basic”. A student below the “proficient” level is not on track for a career in the physical sciences or engineering, so we focus on this achievement level. Nationally, the percentage of students rated “proficient” on the 8th grade NAEP math assessment was 34%. The state with the highest percentage is Massachusetts (52%) and the state with the lowest percentage is Mississippi (15%). The 12th grade NAEP math assessment, which might have been more directly applicable to a readiness index, is not reported on a state-by-state basis.
For each indicator used in SERI (like the percentage proficient on the 8th grade NAEP Math Assessment), we formulate a scaled NAEP sub-score between 1 and 5 so that the lowest state (in the case of the 8th grade NAEP Math Assessment, it is Mississippi) receives a scaled NAEP sub-score of 1 and the highest state (here Massachusetts) receives a scaled NAEP sub-score of 5. We explain the details of the calculation in the appendix. The sub-scores are scaled so that they reflect the actual variation in a given indicator.
The second indicator used in the formulation of the math score reflects the numbers of students passing Advanced Placement Calculus Examinations (Calculus AB and Calculus BC) in 2010. The AP Calculus sub-score is calculated using the combined number of students passing (score of 3 or better) both exams, relative to the number of high school seniors in the state. The numbers of students passing and the numbers of high school seniors are taken from the College Board’s “7th Annual AP Report to the Nation” and the subject and state supplements. Nationally, the number of students passing both the Calculus AB and BC exams divided by the number of high school seniors is 6.3%. The highest state is Massachusetts with 11%, which yields an AP Calculus sub-score of 5. The lowest is Mississippi with 0.8%, which yields an AP Calculus sub-score of 1.
Finally, the score for math is calculated to be the average of the sub-scores for the 8th grade NAEP Math Assessment and AP Calculus. Massachusetts has the highest math score, 5. The lowest score, 1, was earned by Mississippi.
Three indicators were used to calculate the science scores. Two of them, corresponding to performances on the 2009 8th grade NAEP Science Assessment and the AP Physics examinations, were formulated in ways identical to the corresponding indicators used in the math scores. The third indicator was the course-taking rate for high school physics as determined by the AIP’s National Survey of High School Physics Teachers.
Nationally, the percentage of 8th graders who earned “proficient” or above on the NAEP Science Assessment was 30%. The highest and lowest states by this measure were Montana (42%) and Mississippi (15%), respectively. Numerical state sub-scores for this indicator were calculated using the same procedure used for the NAEP Math Assessment.
The AP Physics indicator (and the corresponding numerical sub-score) was calculated using the same prescription used for AP Calculus. However, the two exams used in formulating this indicator were the Physics B exam and the Physics C Mechanics exam (Physics C Electricity and Magnetism was not used because the students who passed this exam were almost certainly just a subset of the students who passed the mechanics exam). Nationally, the ratio of passing scores on the two AP Physics exams to the number of high school seniors was 2%, and the states with the highest and lowest ratios were New York (4.3%) and North Dakota (0.3%), respectively. The resulting AP Physics sub-scores were topped by the 5 awarded to New York; North Dakota had the lowest sub-score of 1.
In 2009, the AIP National Survey of High School Physics Teachers sorted states into three categories for physics course-taking – states significantly higher than the national rate of 37% (Massachusetts, Michigan, Minnesota, New Hampshire, Texas and Wyoming), states significantly lower than the 37% national rate (Alabama, Alaska, Arkansas, Idaho, Mississippi, Montana, Nebraska, Nevada, North Carolina, North Dakota, Oklahoma and Tennessee), and states not significantly different from 37% (everybody else). In the 2005 survey, the national physics-taking rate was 33%; the above-average states were Massachusetts, Maryland, New Jersey, and Pennsylvania, and the below-average states were Alabama, Arkansas, Mississippi, Oklahoma, and Tennessee. Specific details on how we calculated the physics-taking sub-score are described in the appendix.
The numerical score for science is then computed by averaging the numerical sub-scores for the three indicators. There are four states that did not participate in the NAEP Science Assessment (Alaska, Kansas, Nebraska and Vermont). For these states, the numerical score for science is calculated by averaging the numerical sub-scores for AP Physics and the physics-taking rate. The highest score for science was earned by Massachusetts (4.55). The lowest was awarded to Mississippi (1.27).
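The averaging step, including the handling of the four states without NAEP science results, can be sketched in Python (the function name is ours; the numbers come from Table 1):

```python
def science_score(naep, ap_physics, physics_taking):
    """Average the available science sub-scores.

    naep may be None for the four states (AK, KS, NE, VT) that did not
    participate in the NAEP Science Assessment; those states are scored
    on the remaining two sub-scores.
    """
    subscores = [s for s in (naep, ap_physics, physics_taking) if s is not None]
    return sum(subscores) / len(subscores)

# Massachusetts: all three sub-scores available
print(round(science_score(4.71, 4.87, 4.06), 2))  # 4.55
# Nebraska: no NAEP science result, so only two sub-scores are averaged
print(round(science_score(None, 1.42, 2.54), 2))  # 1.98
```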
Teacher qualifications score
In 2010, the NCTQ issued a report  on science teacher certification in each state and assigned a grade of “red light”, “yellow light” or “green light” depending on whether a state had discipline-specific certifications for each science discipline. Some states have only general science certifications, while a few have a certification for each discipline (biology, chemistry, Earth science and physics). States with discipline-specific certifications earned green lights, while states with general science certifications earned red lights. Yellow lights were awarded to states with certification procedures that were somewhere in the middle of these two limits.
We assign a teacher qualifications numerical score of 5 to those states that earned a green light from the NCTQ, a score of 3 to yellow light states, and a score of 1 to red light states.
Composite SERI score
In calculating a composite SERI score, we weight the scores for math and science to each account for 40% of the total. The teacher qualifications score accounts for the other 20%. While teacher qualifications are very important, the NCTQ report provides a very coarse evaluation that does not account for the difficulty of earning (for example) a physics certification. Under these circumstances, the 20% seemed appropriate.
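The weighted combination is straightforward; as a minimal sketch in Python (the function name is ours, and the example values are taken from Table 1):

```python
def seri(math_score, science_score, teacher_score):
    """Composite SERI: 40% math, 40% science, 20% teacher qualifications."""
    return 0.4 * math_score + 0.4 * science_score + 0.2 * teacher_score

print(round(seri(5.00, 4.55, 5.00), 2))  # Massachusetts: 4.82
print(round(seri(1.00, 1.27, 1.00), 2))  # Mississippi: 1.11
```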
Only a few good states
Massachusetts easily leads the field with a SERI of 4.82.
Minnesota, New Jersey, New Hampshire, and New York score between 3.94 and 4.06. These scores are well above the national average of 2.82. We rate Massachusetts as “Best in the US” and call the next four states “Well above average.”
A third group of states post above average scores between 3.24 and 3.73. These states are Virginia, Maryland, Indiana, Connecticut, and Maine. We call these states “Above average.”
These ten states with SERI scores above the national average accounted for just over 20% of high school graduates in 2009.
“Average” states post average SERIs between 2.53 and 3.13. Nineteen states fall into this group. These states accounted for just over 37% of 2009 high school graduates.
“Below average” states post SERIs that range from 2.14 to 2.47. These twelve states include California. These states accounted for almost one-third of 2009 high school graduates.
“Far below average” states have a SERI between 1.58 and 2.01. These states accounted for about 9% of 2009 high school graduates.
Finally, Mississippi’s SERI is 1.11. This lags the other states by almost half a point and results in Mississippi being labeled “Worst in the US.”
Table 1 shows the sub-scores, component scores, SERI, and rating for each state. Figure 1 depicts the percentage of seniors graduating from schools by SERI rating, and Figure 2 shows the SERI data by state.
The need for science and engineering readiness
In 2010, the sequel to RAGS, Rising Above the Gathering Storm, Revisited: Rapidly Approaching Category 5, examined the status of the recommended actions. The findings with respect to K-12 math and science education are not good: “[I]n spite of sometimes heroic efforts and occasional very bright spots, our overall public school system – or more accurately 14,000 systems – has shown little sign of improvement, particularly in mathematics and science.” (p. 4)
Science and engineering readiness are building blocks for scientific and technical achievements which contribute to a better society. Not only do the tools, techniques, and tangible goods developed by scientists and engineers provide for more comfortable living conditions, but the economic viability of new discoveries is vital to sustaining our place in the global marketplace. The call to improve science and engineering education has been sounded repeatedly. Our goal has been to provide a tool to enable policy makers and others to assess progress in that direction.
Table 1: 2011 Science and Engineering Readiness Index indicator scores
| State | Math NAEP | AP Calculus | Math score | Science NAEP | AP Physics | Physics-taking | Science score | Teacher score | SERI | Rating* |
|-------|-----------|-------------|------------|--------------|------------|----------------|---------------|---------------|------|---------|
| AL | 1.54 | 1.92 | 1.73 | 1.57 | 1.63 | 2.11 | 1.77 | 1.00 | 1.60 | Far below average |
| AZ | 2.51 | 1.84 | 2.18 | 2.00 | 1.41 | 2.88 | 2.10 | 1.00 | 1.91 | Far below average |
| LA | 1.54 | 1.61 | 1.58 | 1.71 | 1.07 | 2.88 | 1.89 | 1.00 | 1.59 | Far below average |
| MA | 5.00 | 5.00 | 5.00 | 4.71 | 4.87 | 4.06 | 4.55 | 5.00 | 4.82 | Best in the US |
| MN | 4.46 | 3.61 | 4.04 | 4.57 | 2.49 | 3.80 | 3.62 | 5.00 | 4.06 | Well above average |
| MS | 1.00 | 1.00 | 1.00 | 1.00 | 1.05 | 1.77 | 1.27 | 1.00 | 1.11 | Worst in the US |
| NE | 3.16 | 1.72 | 2.44 | N/A | 1.42 | 2.54 | 1.98 | 1.00 | 1.97 | Far below average |
| NH | 4.03 | 3.88 | 3.95 | 4.43 | 2.56 | 3.74 | 3.58 | 5.00 | 4.01 | Well above average |
| NJ | 4.14 | 3.84 | 3.99 | 3.71 | 3.56 | 3.54 | 3.60 | 5.00 | 4.04 | Well above average |
| NM | 1.54 | 1.91 | 1.73 | 1.86 | 1.51 | 2.88 | 2.08 | 1.00 | 1.72 | Far below average |
| NV | 2.08 | 2.48 | 2.28 | 1.71 | 1.74 | 2.71 | 2.05 | 1.00 | 1.93 | Far below average |
| NY | 3.05 | 4.21 | 3.63 | 3.29 | 5.00 | 2.88 | 3.72 | 5.00 | 3.94 | Well above average |
| OK | 1.97 | 1.76 | 1.87 | 2.43 | 1.58 | 1.00 | 1.67 | 3.00 | 2.01 | Far below average |
| WV | 1.43 | 1.46 | 1.44 | 2.00 | 1.12 | 2.88 | 2.00 | 1.00 | 1.58 | Far below average |
*Best in the US (SERI = 4.82)
Well above average (3.94 < SERI < 4.06)
Above average (3.24 < SERI < 3.73)
Average (2.53 < SERI < 3.13)
Below average (2.14 < SERI < 2.47)
Far below average (1.58 < SERI < 2.01)
Worst in the US (SERI = 1.11)
Figure 2: SERI Scores by State
Appendix: Calculating the Sub-scores
We calculated the sub-score for each indicator, S(indicator, state), in two steps.
In the first step, we calculated a ratio for the indicator for each state relative to the highest indicator:
R(indicator, state) = I(indicator, state) / I(indicator, max)
For example:
R(NAEP-Math, MS) = 15 / 52 = 0.288
where 15 is the NAEP Math indicator for Mississippi and 52 is the maximum NAEP Math indicator over all states.
The resulting ratios ranged from a minimum of less than 1 to a maximum of 1. We then converted these ratios, R(indicator, state), into sub-scores that were scaled to range from 1 (for the lowest state) to 5 (for the highest state) for each indicator as follows:
S(indicator, state) = 1 + 4 * (R(indicator, state) - R(indicator, min)) / R(indicator, range)
where R(indicator, min) is the minimum ratio for a particular indicator over all states and
R(indicator, range) = R(indicator, max) - R(indicator, min)
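The two-step calculation can be sketched in Python (the function name is ours; the example uses the 8th grade NAEP math values quoted in the text):

```python
def subscore(value, max_value, min_value):
    """Scale an indicator to a 1-5 sub-score.

    value: the state's indicator (e.g. percent proficient on NAEP math)
    max_value / min_value: the highest and lowest state indicators
    """
    r = value / max_value          # R(indicator, state)
    r_min = min_value / max_value  # R(indicator, min)
    r_range = 1.0 - r_min          # R(indicator, max) = 1 by construction
    return 1 + 4 * (r - r_min) / r_range

# 8th grade NAEP math: Massachusetts 52% (highest), Mississippi 15% (lowest)
print(subscore(52, 52, 15))  # 5.0 (highest state)
print(subscore(15, 52, 15))  # 1.0 (lowest state)
```

Because the ratios are anchored to the highest state, the spread of sub-scores between 1 and 5 reflects the actual variation in the indicator rather than just the states' rank order.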
For the physics-taking indicator (I(physics-taking, state)), we used the national average physics-taking rate for states in which the rate did not differ significantly from the national average. For states in which the physics-taking rate was significantly higher than the national average, we used the lower bound of the 95% confidence interval for physics-taking in the state; for states in which the rate was significantly lower, we used the upper bound of that interval. To account for year-to-year fluctuations in these data, we averaged the ratios from the 2005 and 2009 surveys (R(physics-taking, state, 2005) and R(physics-taking, state, 2009)).
Susan White works for the Statistical Research Center at the American Institute of Physics. Paul Cottle is a Professor in the Department of Physics at Florida State University.
Disclaimer- The articles and opinion pieces found in this issue of the APS Forum on Education Newsletter are not peer refereed and represent solely the views of the authors and not necessarily the views of the APS.