Volume 28, Number 2, April 1999
Equipment Efficiency Standards: Mitigating Global Climate Change at a Profit
Howard S. Geller and David B. Goldstein
Presented at FPS-APS Awards Session, Columbus, Ohio, April 14, 1998
I. Introduction: Physics in the Public Interest
A. Refrigerators and Power Plants That Were Never Built.
One of Leo Szilard's claims to fame is the invention, with Albert Einstein, of several new technologies for domestic refrigerators. But due to the Depression and to unexpected progress from vapor-compression cycle refrigerators based on CFCs, the Szilard-Einstein refrigerators were never built. This talk is also about refrigerators that were never built: inefficient mass-produced refrigerators, along with inefficient air conditioners, washing machines, furnaces, and many other products. It is also about many costly and polluting power plants that were never built thanks to appliance and equipment efficiency standards.
The refrigerator that was never produced might be consuming as much as 8,000 kWh/year, if 1947-1972 trends had continued. (See Figure.) Instead, as a result of six iterations of standards, the average American refrigerator sold after the year 2001 will consume 475 kWh/year, down from an estimated 1826 kWh/year in 1974, despite continuing increases in size and features. (See Figure.) Peak demand savings, estimated on the assumption (which nobody realized was untrue at the time) that 1972 energy consumption would have remained constant without standards (rather than increasing), are about 13,000 MW today. But if we had extrapolated pre-1970s performance, peak demand by refrigerators today would be about 120,000 MW, compared to the actual level of about 15,000 MW. The difference exceeds the capacity of all U.S. nuclear power plants.
Exponential extrapolation of past trends was not an unrealistic assumption at the time. Virtually every utility in the country, backed by their regulatory agencies and Department of Energy forecasters, was assuming that residential electricity growth would continue at about the 9.5% annual rate of the prior decades. The growth rate of total electricity consumption for refrigerators was also about 9.5%. Suggesting that this rate would come down in the future, as one of the authors did, was highly controversial.
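The arithmetic behind these extrapolations is simple compound growth. A minimal sketch, using the figures quoted above (treating them as a single smooth trend is a simplification of ours):

```python
# Compound-growth extrapolation of refrigerator energy use, using the
# figures quoted in the text (a single smooth trend is a simplifying
# assumption, not the authors' exact method).
base_year, base_kwh = 1974, 1826   # average new-unit consumption, kWh/yr
growth = 0.095                     # ~9.5%/yr historical growth rate

def extrapolated_use(year):
    """kWh/yr per refrigerator if the pre-1970s trend had continued."""
    return base_kwh * (1 + growth) ** (year - base_year)

# The trend line approaches 8,000 kWh/yr around 1990 -- versus the
# actual post-standards level of 475 kWh/yr for units sold after 2001.
print(round(extrapolated_use(1990)))
```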
Why were the projected inefficient refrigerators and other products not built? The overwhelming cause was the development of efficiency standards for the products. Non-governmental organizations (NGOs), of which the authors are representatives, played a seminal role in creating the policy and legal atmosphere in which standards could be promulgated.
B. Institutionalizing Public Interest Physics
We are honored to receive an award as two representatives of the non-profit sector, both professionally trained scientists working full-time in public interest institutions that value scientific training and expertise. This is a new phenomenon in America, which appears to be ahead of the rest of the world in this regard. Leo Szilard of course pioneered this type of work in his founding of the Council for a Livable World.
Non-profit organizations promoting environmental quality or energy efficiency have been around for all of the 20th Century, but until the late 1960s, these organizations were based primarily on volunteer effort and did not widely employ the knowledge of scientific or other professionals on a regular basis. This situation changed with the rising environmental awareness of the last three decades, and the non-profit sector has reached a level of scientific maturity that we believe is recognized by our receipt of the Szilard Award. Such awards are no longer solely for scientists working for universities, large laboratories, or the private sector. This year marks the second time that a scientist in one of our organizations has received this honor.
The accomplishments for which this award is presented this year are based, we believe, on a different perspective in looking at energy problems and their environmental consequences. There are two sources of this perspective. The first is our base in the non-profit sector. Scientists working full-time for NGOs had the resources to analyze the problems of energy use from a policy viewpoint as well as a technical viewpoint and to pursue answers and solutions to the questions of why the world was using so much energy.
Another source of this new perspective is the problem-solving approach that is provided by physics, as contrasted to the traditional economics approach.
Traditional economics tends to see energy as merely one of a set of commodities in the economy. Demand and supply of energy are determined by market equilibration.
When the first energy crisis hit, this line of reasoning predominated. It held that energy underlay nearly all of the productive processes of the United States, and that its use was optimized, so reductions in energy use necessitated by supply constrictions or high prices would come only at a sacrifice.
Physicists began to question this theory. First, analyses of the technologies for energy use found widespread unexploited potentials to reduce energy use by 30% or 50% or more with payback periods of three years or less.
Physicists began looking at broader ways of defining the problem that energy was being used to solve, and at broader views of different design principles that would allow large energy savings. This systems approach frequently could offer larger energy savings, lower overall costs, and higher quality energy services compared to a component approach.
The systems approach can be applied to the entire energy sector, comparing efficiency improvements with energy supply upgrades and developing a policy framework that picks the cheapest and most secure options first. This approach has been used in California, the Pacific Northwest, New England, and Wisconsin, saving consumers in these jurisdictions tens of billions of dollars.
Another key intellectual contribution by physicists was the need to compare theory with experiment. Economic theory asserts optimization, but remarkably little study had ever been performed about whether this hypothesis was validated or contradicted by real world practice. NGOs found, performed, or encouraged empirical research that showed massive market failures in the area of equipment efficiency.
Most recently, the hypothesis of market optimization has been falsified on a grand scale by the elaborate measurement and evaluation of utility incentive programs in California and elsewhere. Studies confirmed that California utilities had found over $2 billion of societal benefit, averaging a benefit-cost ratio of more than 2:1, during the early 1990s.
II. Benefits of Appliance Standards
Minimum efficiency standards on appliances and equipment provide broad benefits. Consumers save money; energy savings yield reduced pollutant emissions in the home and at the power plant; utilities benefit from the reduced need for investment in new power plants, transmission lines, and distribution equipment; and appliance manufacturers as well as retailers can benefit from selling higher priced, higher value-added products.
Appliance standards in the U.S. were initiated through a complex process involving the interplay of national and state regulatory initiatives. The first standards were adopted by states in the mid-1970s. Federal legislation called for national standards by 1980, but this effort was dropped by the Department of Energy in 1983. NGOs and states challenged this DOE decision in court; at the same time California responded to an NRDC petition and initiated proceedings on refrigerator and air conditioner standards in 1983. Following California's adoption, other states began to promulgate their own standards.
In this atmosphere, the manufacturers agreed to negotiate consensus national standards with our organizations in return for preemption of further state standards. These discussions bore fruit, and national efficiency standards were adopted on a wide range of residential appliances, lighting products, and other equipment through the National Appliance Energy Conservation Act (NAECA) in 1987, along with amendments to NAECA adopted in 1988 and 1992. Pursuant to these laws, the Department of Energy (DOE) issued tougher standards via rulemaking on four occasions so far.
Standards already adopted are expected to save about 1.2 Quads (1.3 EJ) per year of primary energy in 2000, rising to 3.1 Quads (3.4 EJ) per year by 2015 (see Table 1). Since most of the savings is electricity, standards are expected to reduce national electricity use in 2000 by 88 TWh -- equivalent to the power typically supplied by 31 large (500 MW) baseload power plants. By 2015, the electricity savings from standards already adopted is expected to reach 245 TWh.
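The power-plant equivalence can be checked with a back-of-envelope calculation (the ~65% capacity factor for a baseload plant is our assumption, not stated in the text):

```python
# How many 500 MW baseload plants does 88 TWh/yr of savings displace?
plant_mw = 500
capacity_factor = 0.65      # assumed typical baseload capacity factor
hours_per_year = 8760

twh_per_plant = plant_mw * capacity_factor * hours_per_year / 1e6
plants_displaced = 88 / twh_per_plant    # 88 TWh saved in 2000
print(round(twh_per_plant, 2), round(plants_displaced))
```

With that assumed capacity factor, each plant supplies about 2.8 TWh per year, so 88 TWh corresponds to roughly the 31 plants cited above.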
These standards will save consumers about $160 billion net (i.e., energy cost savings minus the increased first cost, expressed as net present value in 1996 dollars). This means average savings of over $1,500 per household. Consumers save $3.20 for each dollar added to the first cost of appliances.
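The net-present-value accounting behind these figures can be sketched as follows. The appliance numbers below are hypothetical; the 7% real discount rate follows the note to Table 1:

```python
def npv(annual_savings, years, rate=0.07, first_cost=0.0):
    """Energy cost savings minus increased first cost, discounted to present
    value at a real discount rate (7% per the Table 1 note)."""
    present_value = sum(annual_savings / (1 + rate) ** t
                        for t in range(1, years + 1))
    return present_value - first_cost

# Hypothetical appliance: $30 extra first cost, $10/yr energy savings,
# 15-year lifetime. A positive NPV means the standard pays for itself.
print(round(npv(10, 15, first_cost=30), 2))
```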
Appliance standards reduce air pollution and greenhouse gas emissions substantially. Lawrence Berkeley Lab estimates that existing standards will prevent 29 million tons of carbon emissions, 286,000 tons of NOx emissions, and 385,000 tons of SO2 emissions in 2000. The carbon savings by 2010, around 65 million tons, is equivalent to removing around 30 million automobiles from the road.
Manufacturers' bottom lines will not be adversely affected by standards. Manufacturers incur additional costs to improve the energy efficiency of their products, but recoup these costs by selling higher value-added, higher priced products. Whereas competitive pressures make it difficult for an individual manufacturer to enhance energy efficiency unilaterally, uniform regulations level the playing field.
The benefits of appliance standards extend worldwide. Many products covered by the U.S. standards are produced and traded internationally, leading to diffusion of new technologies worldwide. For example, today refrigerators are more efficient in Brazil because many compressors used in U.S. refrigerators are manufactured in Brazil. The U.S. standards led to steady improvements in the efficiency of these compressors, which are used in Brazil as well as exported.
Following the U.S. lead, appliance standards have also been adopted by Canada, Mexico, Brazil, Japan, Korea, and the European Community. These countries are extending standards to additional products, and other countries including China are developing standards, in order to reap even greater benefits.
III. Future Standards and the Kyoto Climate Protocol
NAECA requires the Department of Energy to consider amended standards on a regular schedule. It contains specific criteria for such standards, including cost-effectiveness for consumers and manufacturers. Physics and economics are supposed to guide the setting of standards.
Appliance efficiency standards and their close cousins, efficiency standards for new buildings, could be a significant contributor to the U.S. goal under the Kyoto Protocol to reduce greenhouse gas emissions by 7% from their 1990 level. This goal entails a reduction of around 505 megatons of carbon equivalent by 2010. New appliance standards could provide roughly 30 megatons (see Table 2). This is 6 percent of the entire goal, coming mainly from the buildings sector, which accounts for 30% of total U.S. carbon emissions.
The effect of standards is amplified if we include the potential savings from new building efficiency standards. States that have taken a leadership role in promulgating appliance and equipment efficiency standards have also been global leaders in building energy efficiency standards. When pursued in tandem, savings from each policy have been comparable in magnitude.
If the rest of the United States is able to achieve the improvements in new building efficiency standards adoption and enforcement that West Coast states have already achieved, these standards will provide an additional 44 megatons of avoided carbon emissions by 2010 (see Table 3). These emissions reductions will be achieved with a net benefit of about $65 billion.
Total carbon savings from building and appliance efficiency standards cover 15% of the total U.S. goal under the Kyoto Protocol.
Efficiency standards are not the only policy that can promote expanded energy efficiency in the building sector. Market transformation programs, tax credits, utility energy efficiency programs as facilitated through a public benefits charge, private or public research and development on energy efficiency, and information services can build upon the savings achieved by standards.
Adoption of these economically attractive measures greatly reduces the likelihood that unprofitable measures will be needed to meet the Kyoto target.
But the benefits from standards are dependent on policymakers taking prompt action. Savings from standards take a relatively long time to occur. The standard setting process itself takes two years or more, and manufacturers must be provided three years or more of lead time.
After standards take effect, energy savings will be obtained from that portion of the stock of equipment (or buildings) that is turned over. Energy-using capital tends to be long-lived: 10-25 years for equipment and 45-100 years for buildings.
These considerations limit the amount of energy savings and emissions reductions that can be achieved by the year 2010. Setting new standards on a wide range of products over the next three years could result in an emissions reduction of 59 MtC by 2020, but only 30 MtC by 2010. And if setting these standards is delayed by three years, the avoided emissions by 2010 would drop about 50%.
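A toy stock-turnover model illustrates why delay is so costly (the 15-year lifetime and 60 MtC/yr full-savings figures below are illustrative assumptions, not the paper's):

```python
def annual_savings(year, effective_year, lifetime, full_savings):
    """Savings ramp up linearly as the equipment stock turns over (toy model)."""
    if year < effective_year:
        return 0.0
    turned_over = min((year - effective_year) / lifetime, 1.0)
    return full_savings * turned_over

# Standards effective 2003 versus a three-year delay to 2006:
on_time = annual_savings(2010, 2003, 15, 60)
delayed = annual_savings(2010, 2006, 15, 60)
print(on_time, delayed)   # the delay forfeits a large share of 2010 savings
```

In this simple model only 7/15 of the stock has turned over by 2010 if standards take effect in 2003, and only 4/15 after a three-year delay, reproducing the flavor of the roughly 50% loss cited above.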
This point has policy importance. Many policymakers are undecided as to whether the U.S. should ratify the Kyoto Protocol, primarily because of concerns about its economic effects. But appliance standards have a positive economic impact, and there is essentially no scientific dispute about this fact. It would be an economic (as well as an environmental) mistake to delay the adoption of new standards, particularly if the Kyoto Protocol is eventually ratified.
Considering both appliance and building efficiency standards, annual carbon emissions savings from the building sector more than double between 2010 and 2020, even assuming that no new standards are adopted after 2010. This calculation shows that if the U.S. building sector meets its share of the U.S. 7% greenhouse gas emissions reduction goal for 2010, even larger savings can be achieved automatically in 2020 and beyond.
In conclusion, this analysis presents multiple reasons why the United States should move forward aggressively with new appliance standards and other climate mitigation measures that can be justified without considering environmental benefits. By doing so, we not only mitigate global warming, but we facilitate compliance with the Kyoto Protocol painlessly (indeed, profitably) if the United States decides to ratify the Protocol.
Appliance efficiency standards have been one of the most successful public policy initiatives to promote energy conservation in the United States if not the world. We are proud of the results and proud of being recognized for the leadership we provided. While Dr. Szilard's refrigerators were not commercialized, we think he would approve of the "refrigerator revolution" brought about by these efficiency standards.
TABLE 1 - SAVINGS FROM EXISTING STANDARDS
1) The percentage of projected U.S. use is based on forecasts in the Annual Energy Outlook 1998, Energy Information Administration, Washington, DC.
2) Net economic benefit is expressed in 1996 dollars, using a 7% real discount rate to calculate net present value.
TABLE 2 - ESTIMATED NATIONAL SAVINGS
FROM FUTURE EFFICIENCY STANDARDS
Avoided carbon emissions are expressed in million metric tons assuming electricity savings come from fossil fuel-based power plants. Assumptions about power plant heat rates and carbon coefficients are derived from the Annual Energy Outlook 1998, Energy Information Administration, Washington, DC.
TABLE 3 - ESTIMATED SAVINGS FROM FUTURE BUILDING EFFICIENCY STANDARDS
NOTE: Calculations of energy and carbon savings are based on NRDC modifications of the Pacific Northwest National Laboratory-developed model for DOE input to the Government Performance Results Act. Costs are estimated by summing over annual results of the modified model runs.
Howard S. Geller, American Council for an Energy-Efficient Economy
David B. Goldstein, Natural Resources Defense Council
New Automotive Technologies
The Need for Changes in Automobiles
One major goal for automobiles is reduced local pollution. In the US, grams-per-mile emissions of carbon monoxide and hydrocarbons are restricted in formal tests to be at most 4% of their mid-1960s levels. Nitrogen oxides are also strongly regulated. Unfortunately, the average
car on the road emits 2 to 5 times more than the test-levels.1 As a result, a great variety of new regulatory initiatives have been undertaken; and manufacturers have taken major steps to meet them, as discussed below. While this problem is being solved in new autos, emission of particles is of increasing concern, as discussed briefly in connection with diesels.
Another concern is fuel economy and greenhouse gas emissions. In the US, the fuel economy of new automobiles improved dramatically from 1975 to 1982 primarily in response to the Corporate Average Fuel Economy (CAFE) regulations, supported by gasoline price increases. Since then the fuel economy of new automobiles has stalled at an average test-value of 25 miles per gallon.2 Presumably because the inflation-corrected price of gasoline is low, individual buyers of new vehicles are not interested in fuel economy. However, as citizens rather than as individual buyers, people do want society to achieve higher fuel economy; they are concerned about the long-term future of fuel availability and, perhaps, about climate change.
Although fuel economy has stagnated, manufacturers have introduced more-efficient technology, using it to increase power, size and weight - at fixed fuel economy. Two classes of large autos have also been introduced: minivans and "sport utility vehicles". The minivans provide transportation for one to seven people - great versatility for the cost. The SUVs are essentially alternative styling. Where the pickup truck has long been a popular car-substitute among modest-income households, the SUV is popular with wealthier households. Roughly 80% of pickups and nearly 100% of SUVs are used just like cars, in spite of being regulated as trucks, and thus being less safe, polluting more, and using more fuel per mile. There is pressure to bring these vehicles under more stringent regulation.
In a sense, the strongest driver for new societal goals for automobiles is new technological capabilities. The capability to design and reliably manufacture high-tech products is revolutionary, with new materials and new kinds of sensors and controls based on microprocessors. Sensors are at the heart of this revolution. Where sensors used to have to give simple unambiguous signals, sharply limiting their application, today responses are interpreted by a microprocessor, enabling the practical development of sophisticated controls on board the vehicle and in its manufacture.
New Technologies for Conventional Autos
Conventional vehicles are achieving ever-higher performance and reliability while meeting stricter emissions and safety standards. And the petroleum-fueled internal combustion engine will continue to be improved. Consider emissions first.
A recent technological surprise with major consequences is that the conventional automobile can be extraordinarily clean in actual use. Two improvements are responsible. First, a proliferation of sensors coupled to the microprocessor that manages the engine is enabling improved and durable control of the air-fuel ratio, the variable to which catalytic exhaust clean-up is most sensitive. Second, more durable and rapid-acting catalytic converters have been developed, using coatings that degrade less at high temperature; as a result, a small catalytic converter can be placed next to the exhaust manifold, where it heats up quickly, converting much of the pollution during cold start. In addition, the manufacturers, in meeting a regulatory requirement for on-board diagnostic equipment, have learned much more about how their emissions controls function in the real world.
In the future this success can be strengthened. Conventional gasoline-fueled vehicles with essentially "zero emissions" are feasible. Almost complete control of combustion, based on measuring the performance of each cylinder in each cycle, is in hand, using sensors with good time resolution. One type examines the angular acceleration of the flywheel, another examines properties of the exhaust. Pressure measurement within each cylinder is also in development. Coupled with this, "direct injection" of fuel into each cylinder, being introduced by Mitsubishi and others, improves opportunities for control of the fuel-air ratio. (In today's spark-ignition engines, fuel injection is into the intake manifold. As a result, only about half of the fuel taken into the cylinder is injected during the same stroke, the rest being swept up from previously injected fuel.)
Energy efficiency improvement is a more difficult challenge than after-treatment of the exhaust. The laws of physics make it more difficult, and so does the absence of new regulatory pressure. Nevertheless, powerful energy-saving technologies are being adopted and are in development. An example is, again, the direct-injection gasoline engine. With a fuel spray controlled in space and time, a stratified charge can be created such that combustion is reliable even with overall fuel-air mass ratios about one-third of stoichiometric. (With a uniform static mixture, the flame goes out if the gasoline-air ratio is less than about 0.7 stoichiometric.) This has three benefits. First, air molecules are a good thermodynamic fluid: more of the heat goes into increased pressure than with the complex fuel molecules. Second, work output can be varied without changing the amount of air, reducing the need for throttling. Third, heat loss is reduced. Overall, a 25% efficiency improvement is feasible in urban driving. There is also a disbenefit: reduction of nitrogen oxides in oxygen-rich exhaust is difficult, which calls for inventive solutions.
Modern direct-injection diesels offer even better efficiency. However, with petroleum-based diesel fuels there are many particles in the exhaust. Small particles, perhaps coated with toxic fuel components, can lodge in the lungs, causing major health problems. However, the information is complex, in part because there are several types of particles with different causes. In addition, the largest group of particles is tens of nanometers in size, while the regulations are stated in terms of the mass of particles less than 10 microns in size! Some size distribution studies even suggest that as diesel engines are being designed to meet stricter particle-mass regulations, the number of particles emitted is actually increasing, with adverse impacts on health. Particles may also be an important problem for gasoline engines. Badly needed is fast on-line measurement technology which can count particles in different size ranges.
Much of the efficiency improvement achieved in the last two decades has come indirectly from increasing the engine's specific power (the maximum power per unit of displacement). In the past 20 years the specific power of the average car was increased 90%! This achievement enabled a 58% engine downsizing and a 26% reduction in 0-to-60 mph acceleration time. Engine downsizing means reduced engine friction and reduced weight. The increase in specific power was enabled in part by adding valves and by more accurate manufacturing. If the improved specific power had all been dedicated to fuel economy, fuel economy would have increased about 15%. The opportunity to continue increasing specific power is excellent.
Substantial progress in transmission efficiency will also occur. Two developments are: 1) adding gears (e.g. 5-speed automatic transmissions) so one drives at a modest engine speed more of the time, and thus with less friction, and 2) the continuously variable transmission (CVT). Moreover, a CVT can be less lossy than a typical automatic transmission.
Progress may also be made in technologies that reduce vehicle load: reduced air drag, tire rolling resistance, mass and accessory loads. The appearance of streamlining is popular with buyers; and much more can be done. Lower-energy tires continue to be introduced as original equipment to help meet the CAFE standard.
Although typical vehicle masses have been increasing recently, mass presents the best opportunity for load reduction. Lighter materials, especially high-strength steel, plastics and aluminum, are taking increasing shares. The manufacturers are optimistic about the prospects for up to 40% mass reduction. Amory Lovins and collaborators have proposed the "Hypercar" with more-radical mass reduction.
Hybrid propulsion is a design which combines the engine with a supplemental motor and storage such as a battery. One promising type of hybrid uses a small petroleum-fueled engine, like a 1.2 liter three-cylinder direct-injection engine. The engine is turned off and the supplemental system drives the vehicle when very little power is needed. They are used together at very high power. The supplemental system is recharged by regenerative braking. A hybrid with an advanced engine and with practical reductions in vehicle load would achieve two to three times greater fuel economy. I hope such a breakthrough will soon be seen in an SUV.
Two interesting technical points: First, the battery in this hybrid is quite different from that in an all-electric vehicle: high power density rather than high energy density is needed, and that is an easier target for electrochemists. Second, one wants to turn the engine off at low power because the frictional work in a conventional engine at normal engine speed is about 7 kW, while the power loss at low output with a motor-inverter-battery system is about 1 kW.
Given all the technological options for improving fuel economy, the question is whether improvement will occur - in the absence of stronger regulations and/or very large increases in fuel prices.
A quite different approach to fuel economy is the small vehicle. A small narrow vehicle could meet several goals: much higher fuel economy, reduced emissions, and less congestion, both on the street and in parking.3 The basic rationale for small cars is that some 87% of trips involve only one or two riders. The disadvantage is the household's need for another vehicle when more people or large loads are involved.
Safety is critical to the future of small and light automobiles. Given a collision with another vehicle, lighter vehicles are of course at a disadvantage in principle; but the danger is sensitive to design, and the accident-severity correlation with vehicle weight is observed to be small in the frontal crash test with today's vehicles. (The danger correlates with the length of the vehicle, however, as one might expect.) Moreover, only about one-fourth of fatal accidents are collisions between two automobiles. It should be required that all vehicles be designed to reduce the danger to other vehicles with which they may collide.4 The safety issues need more research and public discussion. Certainly the concept I've heard expressed, that driving an SUV will make one safer because it is a killer of others, is bad logic as well as bad morals.
Alternative Energy Systems
Alternative automotive fuels: ethanol, methanol, natural gas, electricity (called a fuel here), hydrogen, and others, have enthusiastic, and often single-minded, advocates. There are huge investment implications, often including huge subsidies. One has to be skeptical of many claims.
A few comments: 1) Ethanol is now extracted from corn. This process is subsidized, undertaken as a result of lobbying by agriculture and the agro-industrial giant ADM. Alcohols based on growing plants will be serious contenders when economical technology is developed for making fuel from the woody material rather than from the seeds. 2) Natural gas is of interest. It is currently less expensive than gasoline but is awkward to store on board. It could be important for heavy-duty vehicles. 3) Methanol and dimethyl ether, made from methane, are potential alternatives to diesel fuel. They avoid production of soot, one component of diesel particle emissions, because they have no carbon-carbon bonds.
Proponents of electric vehicles fail to make it clear that with the available batteries one cannot achieve size and performance comparable to conventional vehicles. Consider a simple exercise: Gasoline is conventionally used on board with an overall efficiency of about 17%. With excellent design, battery electricity might be used on board with an efficiency perhaps four times greater. This is a rough general result including the benefits of regenerative braking. With today's best batteries, 30 kWh storage requires a lot of weight, roughly 1/3 ton. Multiply 30 kWh by 4 to get the gasoline equivalent energy: It's about 3 gallons. When I have three gallons in my car's tank, I'm thinking I need a fill up. Moreover, recharging this three-gallon battery set is not simple. EVs cannot be like today's vehicles, barring much better batteries than those recently publicized.
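The exercise in this paragraph can be written out explicitly (the ~33.7 kWh per gallon energy content of gasoline is our figure, not stated in the text):

```python
# Gasoline-equivalent of a 30 kWh battery pack, per the text's exercise.
battery_kwh = 30          # on-board storage (roughly 1/3 ton of batteries)
efficiency_ratio = 4      # electricity used ~4x more efficiently on board
kwh_per_gallon = 33.7     # approximate energy content of gasoline (assumed)

gallons_equivalent = battery_kwh * efficiency_ratio / kwh_per_gallon
print(round(gallons_equivalent, 1))   # a bit over three gallons
```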
Unfortunately, many of the alternative fuels would not help much with greenhouse gas emissions. For example, electricity is mainly made with coal, and there would be little GHG reduction if EVs had the same size and performance as today's vehicles.
The most interesting alternative is the fuel-cell automobile, fueled with hydrogen. The proton exchange membrane (PEM) cell is now favored for autos. While the efficiency of heat engines is strongly constrained by the second law and friction, physics is more generous to fuel cells. The efficiency of a hydrogen-fueled cell can be as high as 55% at moderate load. This does not include the rest of the propulsion system.
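The reason physics is "more generous" to fuel cells can be made quantitative: the ideal efficiency is the ratio of the Gibbs free energy to the enthalpy of the hydrogen-oxygen reaction, rather than a Carnot ratio. A quick check with standard-state thermochemical values:

```python
# Thermodynamic ceiling on hydrogen fuel-cell efficiency (standard state).
dG = 237.1   # kJ/mol, Gibbs free energy change, H2 + 1/2 O2 -> H2O(liquid)
dH = 285.8   # kJ/mol, enthalpy change (higher heating value of hydrogen)

ideal_efficiency = dG / dH
print(round(ideal_efficiency, 2))   # ~0.83, well above practical heat-engine limits
```

Practical cells fall short of this ceiling because of electrode overpotentials and internal resistance, which is consistent with the ~55% figure quoted above.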
Daimler-Benz, Ford and Toyota are among early PEM developers. Chrysler has announced a fuel cell initiative with the hydrogen to be generated on board from gasoline. The advantage of this approach is, of course, its use of the existing gasoline distribution system. This concept could get fuel-cell vehicles on the road.
Eventually, hydrogen may be produced centrally and stored on board as a gas. At present, hydrogen is produced in large quantities at refineries from natural gas; and it can be produced renewably. With their high fuel economy, fuel cell vehicles using hydrogen stored on board could become practical, although the storage is technologically challenging. This would be a clean vehicle, with low energy and greenhouse gas implications. Fuel-cell autos are an excellent goal for the long run.
1. J. G. Calvert, J. B. Heywood, R. F. Sawyer and J. H. Seinfeld 1993, "Achieving Acceptable Air Quality: Some Reflections on Controlling Vehicle Emissions", Science, vol. 261, pp 37-45. M. Ross, R. Goodwin, R. Watkins, T. Wenzel & M. Q. Wang, "Real-World Emissions from Conventional Passenger Cars", Journal of the Air & Waste Management Assoc., vol. 48, pp 502-515, June 1998.
2. "Automobiles" refers here to passenger cars and light "trucks" under about 4 tons. The latter, pick- ups, minivans and "sport utility vehicles", are almost all used as passenger cars in the US.
3. Robert Q. Riley, 1994, Alternative Cars in the 21st Century: A New Personal Transportation Paradigm, Society of Automotive Engineers, Warrendale PA.
4. US Congress, Office of Technology Assessment, Advanced Automotive Technology: Visions of a Super-Efficient Family Car, OTA-ETI-638, USGPO, 1995, pp. 196-202. H. C. Gabler and W. T. Hollowell, "The Aggressivity of Light Trucks and Vans in Traffic Crashes," Society of Automotive Engineers technical paper 980908, 1998.
Physics Dept., University of Michigan, Ann Arbor MI 48109-1120
Nuclear Weapons After the Cold War
Paper Given at APS Centennial Meeting, Atlanta, GA, March 24, 1999
W. K. H. Panofsky
The cold war is over, but little has changed with respect to United States nuclear weapons policy. Yet the nature of the threats to United States security from nuclear weapons has shifted dramatically since the end of World War II. Today the likelihood of a deliberate large-scale nuclear attack against the United States is much smaller than the risk of nuclear weapons accidents or unauthorized use, and the threat from the proliferation of nuclear weapons across the globe.
During the cold war we saw a dramatic nuclear build-up, reaching a rate on the U.S. side of more than 5,000 weapons per year, which then shifted to a build-down of nuclear weapons currently proceeding at a rate of around 1,500 per year. Figure 1 shows the pattern, including the nuclear forces of both the United States and the Soviet Union-Russia. The peak of the build-up "enriched" the world with over 60,000 nuclear weapons, an insane figure on its face considering that two nuclear weapons, each with about one-tenth the explosive power of the average weapon in current stockpiles, killed a quarter million people in Japan. The build-down to date has cut the cold war peak by only about one-half.
This pattern is characterized by the fact that during the build-up the United States led the Soviet Union by roughly five years, but that when the United States ceased building, the Soviets did not. Much ink has been spilled explaining the causes of this inexcusable build-up. Two sources stand out. One is that nuclear weapons have become symbols of political power, with their physical reality relegated to oblivion. We as physicists have a major responsibility to maintain public awareness of the awesome reality of nuclear weapons. This task is made even more difficult in that, thanks to the tradition of non-use of nuclear weapons since 1945 and the cessation of atmospheric nuclear tests, only some old fogies have ever seen a nuclear explosion. The second reason for this vast nuclear arsenal has been the extension of the proclaimed utility of nuclear weapons beyond their "core mission," that is, deterring the use of nuclear weapons by others, to other military uses. This concept of extended deterrence, meaning the use of nuclear weapons to deter threats posed by non-nuclear (conventional, chemical and biological) weapons, or the use of the threat of nuclear weapons to protect the interests of other nations, denied policy makers a meaningful answer to the fateful question: "When is enough, enough?"
All this is now behind us -- or is it? Nuclear weapons are still viewed by many as symbols of power. The recent nuclear tests by India and Pakistan were largely motivated by politics, not by a profound and realistic analysis of security needs. The latest full review of United States policy concerning nuclear weapons -- the Nuclear Posture Review
(NPR) of 1994 -- retained a great deal of cold war thinking. The magnitude of "required" forces was still determined by a list of thousands of nuclear targets which had to be covered. The policy underlying the NPR was designated "reduce and hedge," meaning that while the reducing trend illustrated in Figure 1 should be continued, large non-deployed stockpiles should be retained in order to re-equip United States nuclear delivery systems with additional warheads should a more hostile Russia reemerge.
Since the time of the Nuclear Posture Review, there has been only one additional revision of official United States nuclear weapons policy, which occurred last year. The only change was that the United States should no longer be prepared to fight a "protracted" nuclear war but should be able to reply to a large variety of threats with a single response. However, there was still a large target list, and deterrence was still intended to discourage a wide variety of conduct by other countries believed capable of hostile action against the United States, its allies or "interests."
These official policies tended to subordinate the threat of increasing nuclear weapons proliferation, and the risk of accidental or unauthorized use, to a role distinctly secondary to the need for nuclear weapons to counter a large spectrum of specified conjectured threats. Yet among all nations, the United States faces the largest threat from nuclear proliferation. The United States, being the world's dominant power both politically and in conventional military prowess, has the most to lose if nuclear weapons proliferate. Nuclear weapons concentrate the destructive energy that can be delivered by a vehicle carrying weapons of a given size and weight by approximately a factor of one million. Thus nuclear weapons are in many respects the "great equalizer" among nations powerful and less powerful, in the same sense that in the middle ages firearms equalized the power of the physically weak and the physically strong.
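The factor-of-a-million concentration can be checked with back-of-envelope arithmetic. The warhead yield and mass below are representative published figures of the order of a modern U.S. warhead, assumed here for illustration only:

```python
# Energy per unit delivered mass: chemical explosive vs. nuclear warhead.
# Warhead yield and mass are assumed representative values, for illustration.
TNT_J_PER_KG = 4.184e6    # energy density of TNT, J/kg
J_PER_KILOTON = 4.184e12  # one kiloton of TNT-equivalent, in joules

yield_kt = 475.0          # assumed warhead yield, kilotons
warhead_kg = 360.0        # assumed warhead mass, kg

nuclear_J_per_kg = yield_kt * J_PER_KILOTON / warhead_kg
ratio = nuclear_J_per_kg / TNT_J_PER_KG

print(f"chemical: {TNT_J_PER_KG:.1e} J/kg")
print(f"nuclear:  {nuclear_J_per_kg:.1e} J/kg")
print(f"ratio:    {ratio:.1e}")
```

With these assumed numbers the ratio comes out at roughly one million, consistent with the talk's figure: a delivery vehicle of fixed payload carries about a million times more destructive energy when armed with a nuclear warhead than with chemical explosive.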
Potential proliferants can deliver small numbers of nuclear weapons in many ways. Note that the U.S. developed a nuclear projectile system, the Davy Crockett, which could be handled by a single soldier. Thus nuclear weapons could be detonated on ships in harbor, delivered by light aircraft, smuggled across U.S. boundaries, or carried by ballistic and cruise missiles of a variety of ranges. Meaningful defense against such a spectrum of delivery options is impossible -- note the continuing failure of the "war on drugs" to prevent surreptitious entry. The merits and lack of merit of the huge effort to prevent delivery of nuclea