Volume 28, Number 3 July 1998


The "cold war" is now officially over - what happens to physics now? While it was "on", it provided great professional challenges, and ample economic rewards, to the physics community on both sides. The major investments in science also had great impact upon the general society, as sketched in the recent talk by Burton Richter, reprinted below. But, in spite of its apparent growing indifference to science (and increasing fascination with pseudo-science), society still needs us: there are important, challenging problems which society needs us to address. Some are the result of past "hot" wars - as illustrated by the Kosta Tsipis article on the clearing of land mines (recently a major item in the news due to U.S. refusal to sign a treaty banning such mines). Some utilize the competencies and tools we developed while waging the "cold war", e.g., the piece by Goldstone et al. sketching the use of weapons-lab computer facilities to address major environmental questions. And some represent the continuation of "cold war attitudes", as illustrated by the secrecy article by Aftergood. This latter problem has recently manifested itself, in the pages of this journal, in the unresolved dispute between Miller and von Hippel - who claim that usable nuclear weapons can be made from reactor-grade plutonium, whose civil/commercial use should therefore be limited - and DeVolpe - who argues that such materials are not a source of significant weapons and hence that their commercial use presents little proliferation threat. Until classified files on this matter are opened, the debate - important for our future energy/environmental concerns - is a dialogue of the deaf.

(Editor's Note: The following piece is a record of the remarks made by Burton Richter, Director of the Stanford Linear Accelerator Center, and past president of The American Physical Society (1994), to the Senate Forum on "Research as an Investment.")

Long-Term Research and Its Impact on Society

Burton Richter

It is a privilege to participate with this distinguished group in this forum on Research as an Investment. My perspective is that of a physicist who has done research in the university, has directed a large laboratory involved in a spectrum of research and technology development, has been involved with industries large and small, and has some experience in the interaction of science, government and industry.

Science has been in a relatively privileged position since the end of World War II. Support by the government has been generous, and those of us whose careers have spanned the period since World War II have, until recently, seen research funding increase in real terms. Support for long-term research really rested on two assumptions: science would improve the lives of the citizens, and science would make us secure in a world that seemed very dangerous because of the US/USSR confrontation. The world situation has changed radically, both politically and economically. The USSR is no more, and economic concerns loomed much larger as our deficit grew and as our economic rivals became much stronger. It is therefore no coincidence that federal support for long-term research peaked in the late 1980's (according to the National Science Foundation's science and engineering indicators) and only biomedical research has grown in real terms since that time.

The emphasis of this forum, the economic value of public investment in long-term research, looks at only one of the many dimensions of the impact of research on our society. In examining that dimension, it is important to understand the time scale involved. Product development, the province of industry, takes technology and turns it into things which are used in society. Typically, these days, the product development cycle runs for three to five years. However, the research that lies behind the technologies incorporated by industry into new products almost always lies much further back in time--twenty or more years. I'd like to make four brief points and illustrate them with a few examples:

- Today's high-tech industry is based on the research of yesterday.

- Tomorrow's high-tech industry will be based on the research of today.

- The sciences are coupled--progress in one area usually requires supporting work from other areas.

- Federal support for research has paid off and will be even more important in the future.

Today's high-tech industry is based on the research of yesterday.

Telecommunications has been revolutionized by lasers and fiber optics, coming from research in the 1960's and 1970's. Lasers allow much higher communications speeds and much lower communications costs on cables made of tiny glass fibers that carry pulses of light instead of electricity. The theory on which the laser is based goes back much further to work by Albert Einstein in 1917 (he did work on atomic theory as well as relativity).

The Global Positioning System (GPS), which allows precise location of anything and anybody anywhere, is based on ultra-precise atomic clocks developed for research starting in the 1950's. GPS has a growing commercial importance in activities ranging from transportation to recreation.

The biotechnology industry is based in large measure on recombinant DNA techniques developed in the 1970's.

The explosive growth of the Internet--of such importance to commerce and information--is the result of four decades of work by a worldwide research community, culminating in the development of the browser at the NSF's supercomputing center at the University of Illinois, and the development of the World Wide Web by the high energy physicists at CERN in Europe. As a high energy physicist I can only wish that we had been smarter, and, instead of having people type WWW when they want to surf the Internet, we had them type HEP. Perhaps we would have bigger budgets now had we done so.

Tomorrow's high-tech industry will be based on the research of today.

The semiconductor industry's road map for ever more complex chips which increase the power of computers will, in about a decade, run into a regime of such small feature size that the behavior of even wires is not understood--quantum mechanical effects will become important.

The pharmaceutical industry is increasingly moving toward the design of drugs that interfere with the ability of pathogens to act. The designs are based on the detailed molecular structure of the pathogens determined by the structural biologists using the physicists' x-ray diffraction techniques.

The human genome project shows promise of developing the information to treat many health-related problems. It needs applied mathematicians to develop systems that allow efficient searching of huge databases.

The sciences are coupled--progress in one area usually requires supporting work from other areas.

HIV protease inhibitors were synthesized by the chemists in the pharmaceutical industry based on the structure of HIV protease determined by the biologists using the physicists' x-ray diffraction techniques. Two of the drug companies finalized their formulations using the ultra-powerful x-ray beams from synchrotron radiation sources built by the accelerator builders. Today, about 35% of the running time on the Department of Energy's synchrotron radiation sources is used for this kind of structural biology.

The development of neural network computing algorithms to efficiently sort complex multi-dimensional data sets has its origins in the neurobiologists' developing understanding of the structure of the brain.

Today, one of the most important methods of treating cancer is irradiation with very high-energy x-rays. These x-rays are generated by small linear accelerators that are scaled-down versions of the machines made for nuclear physics and particle physics research. In the U.S. alone there are 3,000 such accelerators, which treat more than 75,000 patients every day; there are 5,000 of these machines worldwide. The first preclinical trial of this therapy took place in the 1950's.

Magnetic resonance imaging, the least invasive and most precise of the medical imaging techniques, comes from the work of the chemists, mathematicians and physicists. The physics work goes back to 1938, when I.I. Rabi demonstrated nuclear magnetic resonance (NMR) on one atomic nucleus at a time. NMR in solids was demonstrated in the late 1940's. The mathematicians developed the two-dimensional Fourier transform in the 1960's, which cut the time required for an MRI scan by an enormous amount. Those among us who have spent 20 minutes inside one such machine should realize that without that mathematical breakthrough, a scan of a single patient would take nearly a day.
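The scale of that mathematical speedup is easy to illustrate. The sketch below is an illustration of the scaling argument only, not of any actual MRI reconstruction code: it compares the operation count of a direct two-dimensional discrete Fourier transform with that of the fast transform, and checks the two against each other with NumPy. The image size and the constant factors are assumptions.

```python
import numpy as np

def reconstruction_op_counts(n):
    """Rough operation counts for transforming an n x n image:
    direct 2-D DFT vs. row-plus-column FFTs (illustrative scaling only)."""
    naive = n ** 4                      # n^2 outputs, n^2 terms each
    fast = 2 * n ** 2 * np.log2(n)      # FFT along rows, then columns
    return naive, fast

naive, fast = reconstruction_op_counts(256)
print(f"direct DFT ~ {naive:.2e} ops, FFT ~ {fast:.2e} ops, "
      f"speedup ~ {naive / fast:.0f}x")

# Sanity check on a tiny image: the fast transform and the direct
# definition of the 2-D DFT agree term by term.
img = np.random.rand(8, 8)
direct = np.array([[sum(img[x, y] * np.exp(-2j * np.pi * (u * x / 8 + v * y / 8))
                        for x in range(8) for y in range(8))
                    for v in range(8)] for u in range(8)])
assert np.allclose(direct, np.fft.fft2(img))
```

For a 256 x 256 image the ratio of operation counts is in the thousands, which is the kind of gain that turns a day-long reconstruction into minutes.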

Federal support of research has paid off and will be even more important in the future.

Patent applications in the United States are supposed to cite the prior art on which the particular patent is based. A recent study of patent applications from U.S. industry shows that 73% of the prior art cited comes from publicly-funded research. (F. Narin, K.S. Hamilton and D. Olivastro, "The increasing linkage between U.S. technology and public science," to be published.)

A recent publication from the Brookings Institution and American Enterprise Institute looks at the impact of R&D on the economy ("Technology, R&D, and the Economy," B.L.R. Smith and C.E. Barfield, editors, 1996). In that book, Boskin and Lau studied the impact of new technology on economic growth and found that 30-50% of the economic growth in our society comes from the introduction of new technologies. Mansfield looked at the economic returns on research investment and found that they are 40-50% a year, though the returns to an individual firm doing long-term research are much lower, because it is not possible for such a firm to keep all the potential benefits to itself.

The Future Role of Federal Support

Twenty and more years ago it was true that much new technology came from long-term R&D done in industry--one need only think of the glory days of Bell Laboratories, and the IBM, General Electric, and RCA research laboratories. However, the world economic system has changed and international competitive pressures have driven most of the long-term research out of U.S. industry. Today it is exceedingly rare to find an R&D program in industry whose time horizon is longer than three to five years to a product. We may regret this change, but it is real and it has come about because of deregulation and competition. If one's rivals don't spend money on research, in the short term they are going to have a better bottom line. Our economic system, indeed the economic system of all of the developed world, rewards short-term results and punishes those who don't do as well as their competitors. Thus, changes in society have made the federal investment in such long-term research much more important than ever before.

As I said earlier, today's high-tech industry is based on the research of yesterday, and that research was funded at a time when high-tech industry made up a much smaller fraction of our GDP than it does today. Since high-tech industry is a much larger fraction of GDP today than yesterday, and will be even larger tomorrow, the fraction of the federal budget invested in long-term research should also be larger. It is odd, and it is indeed dangerous for the long term, that the converse is true.


Special thanks to Richard M. Jones of the

Public Information Division at the

American Institute of Physics

for making this transcript available through FYI Number 37.

Thanks also to

Professor Burton Richter,

Director, Stanford Linear Accelerator Center,

for permission to reprint his remarks here.

Technological Innovation in Humanitarian Demining

Kosta Tsipis


In 1993 the U.S. State Department publication "The Hidden Killers, The Global Landmine Crisis" pointed out that there are about 120 million landmines still buried in 62 countries, potentially lethal remnants of armed conflicts over the past half century. Even though the combatants in these wars did not generally intend to harm the civilian population, abandoned landmines now kill or maim about 30,000 persons globally every year, 80 percent of them civilians.

Four hundred million mines were deployed between 1935 and 1996; of these, 65 million were emplaced during the last 25 years. Various nations currently manufacture 7.5 million mines annually. In 1993 alone, according to U.N. estimates, 2 to 5 million mines were laid. In the same year, only 80,000 were removed. It is estimated that 100 million mines are currently stockpiled around the world ready for use.

Mines are durable objects that can remain active for decades. They are manufactured in large numbers by many nations including the U.S., Russia, China, Italy, and a hundred other suppliers. Their cost varies mostly between $3 and $15 each; a few may cost as much as $50. A mine usually consists of a casing (metallic, plastic, or even wood); the detonator, booster and main charge; the fuse; and sensors that range from a simple pressure plate or a trip wire to more sophisticated triggers: a collapsing circuit, a pressure-distorted optical fiber, a pneumatic pressure fuse, or various influence sensors -- acoustic, seismic, magnetic, thermal. The most common triggering mechanisms depend on pressure -- 5 to 10 kg of force -- applied on the top of the mine.

In addition to the human toll landmines claim in many, mainly poor, underdeveloped areas (in Cambodia, land-mine accidents have caused an incidence of one amputee per 250 people), their negative effects are multidimensional. Landmines can, over the long term, disrupt normal economic activities, such as travel and transport, and deny land to farmers, in turn often causing malnutrition, hunger, or migration of agrarian populations to urban centers. Clearance is not only a safety issue, but an economic and social issue as well.

Demining Operations: Current Practices

Demining operations differ sharply according to their purpose. Tactical demining, including minefield "breaching," aims at rapidly clearing a corridor for combat use through a minefield during battle, often in hours. "Tactical countermine" operations aim at the removal of most mines by military personnel from areas occupied by the military over days or weeks. "Humanitarian demining," the subject of this paper, involves the peacetime detection and deactivation over a considerable time of all mines emplaced in an area.

Because most mines have metallic casings or contain at least a few grams of metal (usually the firing pin and associated spring), the standard method of detecting mines either buried or hidden in overgrowth is a pulsed-induction eddy-current sensor that can unambiguously detect the presence of less than a gram of metal buried in nonmetallic soils to a depth of 10-20 cm. The pulsed-electromagnetic induction (PEMI) detector applies a pulsed magnetic field (τ ≈ 0.5 msec) to the soil. The magnetic field propagates into the soil and diffuses into buried conducting materials. Eddy currents induced in the conducting material in turn produce an opposing magnetic field (τ ≈ 200 µsec) as the applied field collapses. This opposing field disturbs the magnetic field produced by the detector. Perturbations in the detector field indicate the presence of a metallic object buried in the soil and are signaled by an audible sound. Such detectors can reliably detect most of the smallest antipersonnel mines buried close to the surface. But they cannot detect totally, or almost totally, metal-free mines.
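The decay-time physics behind this scheme can be sketched numerically. The model below is a deliberately simplified single-time-constant illustration, not a description of any fielded detector: only the roughly 200-microsecond target decay time is taken from the text, while the soil time constant, amplitudes, gate time, and threshold are invented for illustration.

```python
import math

# Simplified model of a pulsed-induction response: after the transmit
# pulse collapses, eddy currents in a conductive target decay roughly as
# exp(-t/tau), with tau ~ 200 microseconds; the soil's own response dies
# away much faster. All amplitudes and the soil tau are illustrative.
def pemi_response(t_us, target_present, tau_target_us=200.0,
                  tau_soil_us=10.0, soil_amp=1.0, target_amp=0.3):
    signal = soil_amp * math.exp(-t_us / tau_soil_us)   # ground response
    if target_present:
        signal += target_amp * math.exp(-t_us / tau_target_us)
    return signal

# Sample the decay at a "late" gate time, after the soil response has
# died away but while a metallic target is still ringing down.
def metal_detected(t_gate_us=100.0, threshold=0.05, target=True):
    return pemi_response(t_gate_us, target) > threshold

print(metal_detected(target=True))   # slow target decay exceeds threshold
print(metal_detected(target=False))  # soil-only response is long gone
```

The same picture explains the method's blind spot: a metal-free mine contributes no slowly decaying term, so the late-time signal never rises above the soil background.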

But this method suffers from a major disadvantage. Metal detectors detect not only mines but all metallic objects in the ground. Since quite often mines are laid in or near battlegrounds, metallic detritus -- shrapnel, bullets, pieces of metal, screws, wires, etc. -- causes a false alarm ratio often higher than 1000 to 1. Therefore, one of the major technical challenges in humanitarian demining is discriminating between false alarms and real mines.

Discrimination is currently accomplished in a very slow and dangerous fashion. The metal detector locates the buried metallic object to within five cm. Such detection takes about a minute per square meter of terrain. Demining personnel then probe the spot with a rod (metallic, plastic, or wood) about 20-25 cm long to determine whether the detected object has the characteristic size and shape of a mine or is instead a mere piece of scrap metal. Depending on the soil type and condition (hardened, overgrown, etc.), discrimination by probe can take anywhere between two and 20 minutes. Once a mine is confirmed, it then takes about 10 minutes to dig it out, another 10 to explode it in situ (creating additional metallic clutter), and 10 more minutes for the de-miners to walk away from the explosion and back. All detected objects must be identified, or even dug up, to assure that no explosives have been left in the ground.

This is clearly too time-consuming a method of discrimination. With current equipment and practices the process can take about 30 minutes for every metal detector signal. Moreover, the use of a probe to determine the nature of an object detected by a PEMI detector does not tolerate carelessness or boredom. The resulting average casualty rate for this work is one injured or dead deminer per 1000 mines detected. But rates as high as an injury per 100 mines have been encountered.
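A back-of-the-envelope estimate shows why this rate is unworkable at scale. The sweep time and per-signal resolution time below come from the figures above; the density of metallic signals per square meter and the working hours per deminer-day are assumptions chosen for illustration.

```python
# Clearance-time estimate from the figures above: ~1 minute of detector
# sweep per square meter and ~30 minutes to resolve each metal-detector
# signal. The signal density (0.5/m^2, i.e. mostly false alarms at a
# 1000:1 ratio) and a 6-hour working day are assumptions.
def clearance_days(area_m2, signals_per_m2=0.5, minutes_per_signal=30,
                   sweep_min_per_m2=1.0, hours_per_day=6):
    total_min = (area_m2 * sweep_min_per_m2
                 + area_m2 * signals_per_m2 * minutes_per_signal)
    return total_min / 60 / hours_per_day

# One hectare (10,000 m^2) under these assumptions:
print(f"{clearance_days(10_000):.0f} deminer-days per hectare")
```

Even with these generous assumptions a single hectare ties up one deminer for more than a year, which is why the false-alarm problem dominates the economics of humanitarian demining.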

An additional problem is that many types of mines are designed and constructed with very little metallic content; some are completely metal-free. Metal detectors are of little use for these types of mines.

The laboriousness and riskiness of the current canonical method of discriminating mines from false alarms, and the existence of non-metallic or low-metal-content mines, have led to new technologies for humanitarian demining. Some are evolutionary versions of older approaches; some are quite novel. I describe first the detection/discrimination methods based on the mine's explosive content, in its solid or vapor state:

1. Thermal neutron activation of the element nitrogen in explosives

2. Backscatter of X-rays from plastic landmines, based on the lower-Z contents of such mines compared to the Z of average soils.

3. Nuclear Quadrupole Resonance properties of nitrogen nuclei in crystalline structures like explosives.

4. High speed gas chromatography that detects explosives vapors (or particles?) emanating from buried mines.

5. Arrays of organic polymers that can sense and identify vapors.

A second class of detection technologies is based on the fact that a buried mine represents a discontinuity of dielectric or conductivity properties in the soil. This approach uses:

1. Ground penetrating radar or its cousin

2. Microwave Impulse Radar to detect mines on the surface or buried in the ground.

A third class of detectors, based on the differences in dielectric and diamagnetic properties of materials, uses:

1. Magnetoquasistatic detectors

2. Electroquasistatic detectors

These two types of detectors detect and discriminate metallic and non-metallic mines, respectively, from the clutter presented by the ground and its contents.

An entirely different approach is to detonate mines without detecting them, instead of using detection and discrimination methods to locate mines (which are subsequently destroyed). This capital-intensive "brute force" approach involves the use of vehicles equipped with rollers or treads that detonate anti-personnel mines by riding over them. Its application is limited by terrain, by the potential presence of antitank mines that can destroy the vehicle, and by the difficulty of assuring that all mines in a given area have been destroyed (on uneven ground, the equipment may not apply the needed pressure everywhere).

Before either approach -- detection/discrimination or brute-force neutralization -- can be used, it is necessary to find a minefield and determine its boundaries. Perhaps even more important is the confident identification of areas that are free of mines. This is the second major hurdle for humanitarian demining: developing rapid and efficient area search methods that will reliably determine the presence or absence of mines. Currently, finding minefields and determining their approximate boundaries, as well as declaring areas mine-free, depend on visual observation, the history of mine accidents, or records of laid minefields. Specially trained dogs or simple metal detectors are now used for area surveillance, a slow and risky method.

Several technological avenues are being followed in pursuit of a satisfactory method for remote, rapid, and reliable minefield detection.

1. Several passive IR systems relying on thermal images of mines or "scars" in the soil resulting from excavation to bury mines. Some of these systems are airborne (fixed-wing aircraft, helicopter) and some are vehicle-based.

2. Multi-spectral and hyperspectral systems based on the resulting imagery.

3. Airborne active laser systems based on detecting the reflected light.

4. A helicopter-borne system using an active laser (1.06 µm) and a passive long-wavelength IR sensor (8-12 µm). The system collected reflectance and polarization image data, and thermal emission data, and incorporated real-time imaging and data analysis that automatically detected minefields.

This latter system was the most successful: it detected 99% of conventional surface-laid minefields, but only 66% of scattered minefields and 34% of minefields with buried mines.


Thus, the problem of rapid minefield surveillance remains active, and is being addressed. A fusion of synthetic aperture radar and hyperspectral imager data is showing good promise.


To summarize the current state of humanitarian demining technologies: even though humanitarian demining has the dual advantages of time and low-wage local labor ($1500 - $3000 per person per year), the currently used method is unacceptably slow, expensive, and dangerous. More important, it has insufficient impact on the global landmine problem. More specifically, research and development in humanitarian demining needs to focus on four key areas:

1. Efficient methods to survey large tracts of land to identify confidently mined and mine-free areas, roads, etc.

2. Improvement by an order of magnitude in the speed and safety of equipment and methods used in the current labor-intensive approach.

3. Rapid and efficient methods to neutralize discovered mines.

4. Development of advanced detection/discrimination technologies that can be mechanized and used with automated or even robotic systems-- which can replace the existing labor-intensive demining practices no later than five years from now.

The Next Steps

Humanitarian demining technologies on the near horizon fall into two groups: technologies approaching maturity, which can be applied to increase the efficiency, speed, and safety of the current labor-intensive demining methods, and those, perhaps more promising, that won't be ready for field operations for half a decade or so.

In the first category I include the Meandering Winding Magnetometer (MWM) and the various configurations of the Interdigitated Electrode Dielectrometer (IDED), the air knife (already in multiple uses), the explosive foam "Lexfoam," and a family of smart probes -- acoustical, thermal, or magnetic. In the second category I put the Nuclear Quadrupole Resonance explosives detectors, the rapid gas chromatograph, the carbon-black polymer composite arrays that detect explosives vapors, and the family of remotely controlled, automated, or robotic vehicles that can perform both the detection and the neutralization of landmines. The wide area surveillance system that uses fused SAR/hyperspectral imager data naturally falls in this second group.

The MWM can reduce false alarm rates by a factor of 10 within a year or so and consequently reduce the time spent for detection to 5-20 sec/m2 of searched terrain. The air knife, which uses high pressure air as a hand-held probe to uncover buried metallic objects (false alarms or mines), could replace the simplest manual probes and speed up discrimination of mines from metallic fragments by a factor of 5-10 while improving safety at the same time. The use of Lexfoam ($9 per pound) to blow up the exposed mine would speed up the overall demining process by a factor of 2 to 5.
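Translating those sweep rates into area coverage makes the payoff concrete. The 60 s/m² figure for the current detector and the 5-20 s/m² projection for the MWM come from the figures above; the six working hours per deminer-day is an assumption.

```python
# Area coverage implied by the quoted sweep rates: the current detector
# takes ~60 s per square meter, while the MWM is projected at 5-20 s
# per square meter. A 6-hour working day is an assumption.
def hectares_per_100_days(sec_per_m2, hours_per_day=6):
    m2 = 100 * hours_per_day * 3600 / sec_per_m2  # area swept in 100 days
    return m2 / 10_000                            # convert m^2 to hectares

for s in (60, 20, 5):
    print(f"{s:>2} s/m^2 -> {hectares_per_100_days(s):.1f} ha per 100 deminer-days")
```

Under these assumptions, 100 deminer-days of sweeping cover about 3.6 hectares today, versus roughly 11 to 43 hectares at the projected MWM rates, before counting the further gains from faster discrimination and neutralization.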

I shift now to the more advanced and more promising, but still field-untested, detectors of explosives. Their advantage is clear: only mines and other UXO would trigger the detectors, fusing the tasks of detection and discrimination into a single step and therefore speeding up humanitarian demining decisively. A further advantage of this approach is that its detection efficiency does not depend on the metallic content of the mine. Plastic and low-metal-content mines can be detected and identified as well as metallic ones.

Nuclear Quadrupole Resonance (NQR) depends on the fact that some atomic nuclei, such as nitrogen, are not spherically symmetrical, i.e., they possess electric quadrupole moments. Depending on the crystalline structure in which nitrogen nuclei find themselves, their non-sphericity produces a unique set of energy states characteristic of that structure when the nuclei are precessed magnetically. Using this property, an explosive in its solid phase can be identified by its nitrogen radio-frequency absorption lines. The explosive RDX can be readily detected by NQR, but TNT (the most common explosive in mines) and PETN are more difficult to detect. Although TNT detection will require longer detection times, many seconds or even several minutes, NQR detectors can be used in discriminating mines from false alarms.

Two-sided nuclear quadrupole resonance explosive detectors have been tested in airports where they detect quantities of RDX comparable to those in a mine in six seconds. But applications of NQR to mine detection will first require the satisfactory solution of several problems: a) A way must be found to improve the detectability of TNT by exploiting more absorption lines, and by improving the electronics and the detector coil; b) The possibility of interference from stray radio signals at the relevant frequencies must be reduced; c) Some method must be found to deal with the inhomogeneities caused by the one-sided detection geometry that mine detection dictates.

The fact that trained dogs can detect mines unerringly indicates that mines emit vapors that characterize them uniquely, though this may depend on how long a mine has been buried. In all probability these are explosives vapors that either escape from the interior of the mine or come from traces of explosives "smeared" on the outside of the mine during manufacture, storage, or emplacement. It is not clear whether these vapors emanate in real time from the mine or come from particles that have stuck to dirt or vegetation directly above it.

Arrays of sensors, each with some specificity to a particular molecule or compound, are quite commonly used in the food and perfume industries to identify constituent compounds of a product. One such sensor is under development at Cal Tech. It uses physical-chemical properties of carbon-black organic polymer composites to develop vapor-sensing elements, each sensitive to different molecules. The collective response of an array of such elements can identify the type of vapor. DARPA is actively pursuing an array sensor for explosives detection intended for airport use, but probably adaptable for humanitarian demining work.
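The way a partially selective array identifies a vapor from its collective response can be sketched with a toy nearest-signature matcher. The response vectors below are invented for illustration; they are not measured polymer responses, and this is not the Cal Tech design.

```python
import math

# Toy "electronic nose": each polymer element responds with a different
# strength to each vapor, so the array's response is a vector that can be
# matched against a library of known signatures. All numbers are invented.
SIGNATURES = {
    "TNT":    [0.9, 0.2, 0.4, 0.1],
    "RDX":    [0.3, 0.8, 0.2, 0.5],
    "diesel": [0.1, 0.1, 0.9, 0.7],
}

def cosine(a, b):
    """Cosine similarity between two response vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(response):
    """Return the library vapor whose signature best matches the response."""
    return max(SIGNATURES, key=lambda v: cosine(response, SIGNATURES[v]))

# A noisy reading still matches the nearest signature:
print(identify([0.85, 0.25, 0.35, 0.15]))  # prints "TNT"
```

No single element need be specific to the explosive; it is the pattern across the whole array that carries the identification, which is why partially selective polymers suffice.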

An electronic vapor detector has been developed that claims detection sensitivity an order of magnitude better than what dogs achieve (i.e., 10-20 picograms of TNT). The detector initially collects particles to which the vapor is attached and then performs the vapor recognition by rapid gas chromatography and chemiluminescence. In principle, this technology could be adapted to a probe that could be inserted in the ground near a suspected mine to "sniff" the vapor aura of the buried mine. It is not clear to what degree explosives vapor remnants in old battlefields will generate unacceptable clutter for the electronic vapor detector. Detailed field measurements of the presence and behavior of explosives vapors will have to be conducted in support of the development of such a detector.

Conclusions and Recommendations

Several of the technologies described in this paper appear to promise decisive improvement in humanitarian demining operations. If these technologies are to mature into useful, affordable, field equipment, I believe we have to follow four general guidelines.

First, since there is no indication that a single entirely new, revolutionary approach to humanitarian demining lies just beyond a near horizon, efforts should focus on incremental improvements of the various demining operations, starting with the field use of better tools in the current deminer-intensive method, and gradually introducing new, sophisticated mine detectors. Since no single "silver bullet" will solve all demining problems in all cases, a spectrum of new detection and neutralization technologies should be developed, field-tested and applied flexibly.

Second, efforts and funding should focus on technologies that lead to systems that are easy to operate and maintain in countries infested with land-mines. Power sources for demining instruments must be portable, detectors must be rugged, and associated electronics must be impervious to humidity, dust, and temperature extremes.

Third, the magnitude and complexity of a systematic humanitarian demining campaign are so large that its goals cannot be achieved by earnest, even ingenious, efforts that remain un-coordinated. A coherent, systematic progression -- from measurements of physical and chemical properties of mines, through experimentation, equipment development and laboratory testing, field testing in realistic conditions, modification, engineering development, production, distribution to users around the world, and training of operators, to the creation of a central but easily accessible data bank of mine and soil properties and of the latest results of demining research -- has to be carefully organized, guided, supervised, and evaluated.

Fourth, the entire effort to develop demining equipment of gradually increasing sophistication and efficiency must be centrally coordinated, guided, and overseen. A central agent is needed to set research priorities, assign technical tasks, coordinate their implementation, and evaluate the results. In addition, such an entity could act as an advocate for humanitarian demining within the U.S. government. This proposed coordinating agency will need high-level technical and scientific advice. Such a need can be satisfied by the establishment of a properly constituted Science Advisory Board that will advise and provide information about relevant scientific and technical developments in academic and high-technology industrial laboratories.

Meanwhile, in parallel, stable, long-term, adequate funding for these tasks must be secured. This latter task implies the need for a persistent effort to inform and educate decision makers, opinion makers, and through them, the tax-paying publics of developed democracies in parallel with the scientific and technological efforts. I believe that several of the technologies examined here will work well in the field and therefore can be politically attractive to governments wishing to assume leadership roles in humanitarian activities.

Kosta Tsipis is with the

Program in Science & Technology for International Security

Massachusetts Institute of Technology, Cambridge, MA 02139 USA


Stockpile Stewardship, Breakthroughs in Computer Simulation, and Impacts on Other Complex Societal Issues

Philip D. Goldstone, Donald R. McCoy, and Andrew B. White

Stockpile Stewardship is continuing to develop and mature as a national effort. While stewardship of the nation's nuclear weapons without nuclear testing has been a de facto reality since the last test in 1992, we recently marked the second year that the safety and reliability of the stockpile was certified to the President through Stewardship activities (this new annual certification requirement is tied to U.S. pursuit of a CTBT). An earlier article in Physics and Society described the need for the Stewardship program and its general outlines. Here we note some recent progress in the transition to a sustainable science-based capability for weapon safety and reliability assurance, and then focus on a key element of the Stewardship effort: the development and use of breakthrough computational simulation capabilities through the DOE's Accelerated Strategic Computing Initiative, or ASCI. We will also discuss why we believe these developments, driven by a need to support national security while reducing nuclear dangers, can have major effects on our ability to address other issues of great importance to society--for example, to more accurately and predictively model global climate and the consequences of human-caused changes.

Since the Rocky Flats Plant ceased operation in the late 1980s, the U.S. has been without a functioning manufacturing capability for stockpile-qualified plutonium "pits." Even with arms reductions, reestablishing a small capability has remained necessary. Efforts to demonstrate that stockpile-quality pits for existing weapon types can be built, and that rebuilt weapons with these components can be certified (for example, to replace surveillance samples or aged weapons), recently achieved a milestone: the fabrication of the first demonstration pit for this purpose.

Subcritical experiments conducted at the Nevada Test Site, and other measurements, are beginning to provide improved data on plutonium properties to aid with assessments of aging phenomena and to support certification of remanufactured weapons. To provide a range of complementary studies of hydrodynamics and radiative interactions in high-energy-density conditions, the National Ignition Facility laser and the Atlas pulsed power facility are being built to replace their smaller predecessors, and the reconfigured "Z" pulsed power machine has begun experiments. (Atlas and "Z" use high-current induced magnetic pressure to provide precision microsecond hydrodynamics or multinanosecond x-ray experiments, respectively.) February 1998 marked first occupancy of the structure to house the Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility, an x-ray radiography tool for dynamic experiments and hydrodynamic testing, and parts of the first-axis accelerator system are being installed for anticipated operation in 1999. Meanwhile, an emerging technique, dynamic radiography using high-energy protons rather than x-rays, has provided new data on high explosive behavior.

In addition to the mostly experimental activities just described, new simulation codes have been run on ASCI-derived computer platforms at Sandia, Livermore, and Los Alamos. While experiments, analytic models of underlying phenomena, and computer-based simulation are all necessary under a CTBT, simulation is now the only means to integrate our knowledge of nuclear weapons science and engineering to evaluate safety, reliability, and performance as the weapons age, are repaired, or are remanufactured. While it is not truly correct to call this "testing in the computer," as the press occasionally does, simulation is the only way all the pieces can be connected. The challenge of making serious predictions with serious consequences based on complex computer simulations, without a full-up experimental test of those predictions, stresses the simulations, the rigor with which they are tied to experimental reality, and the degree to which their users appreciate that misplaced faith in an inadequate simulation would itself be a failure mode. We do not belittle or underestimate this challenge, though we are confident it can be surmounted. Hence the need for the accelerated advancement of almost all aspects of complex simulation science and technology through the ASCI effort, as well as for continuing experimental capabilities and activities and the preservation of nuclear-test data. We will focus on simulation and computing for the rest of this article.

ASCI WILL DRIVE BREAKTHROUGHS IN SIMULATION. ASCI's "accelerated" nature is driven by several real-world factors. The U.S. faces the aging of the technical skill base that has direct experience in weapon design, development, and testing; demographics imply that about half of that skill base will be lost to retirement by 2004. In this same time frame, the average age of more than one stockpiled weapon system will begin to exceed its intended design lifetime, so aging concerns may arise more frequently. Based on these timescales, we will need to materially complete the transition from the era of nuclear-test-based certification to one of sustainable science- and simulation-based certification by about 2010.

To do that, we will need a substantively complete and accurate simulation treatment of weapon performance and safety that can be used with high confidence to help inform weapon assessments even after the directly experienced design staff is gone. This in turn implies new physics and engineering models in new three-dimensional simulation codes, and roughly three to five years of effort to validate these new codes. The availability of extensive past nuclear test data for the systems in the continuing nuclear stockpile is crucial to this validation process, as will be new experimental data, but it will also be vital to engage the individuals who designed and tested the current stockpile, as well as new scientists, in the process. Given the time required for validation, making the transition we have just described by 2010 will require this "substantively complete and accurate" code capability (along with the platforms to run it on and the visualization and communication systems to interact with it) to be in hand by about 2004.

Stewardship's requirements place far greater demands on the physical and numerical accuracy of the simulation models, and on overall attention to scientific rigor. Three-dimensional codes are required because aging effects (and accident scenarios) will more often than not break symmetries designed into the weapon--and such changes will have to be assessed to know whether they require corrective action, either at the time or at some future date. (Similarly the as-built characteristics of newly manufactured components will be assessed, to ensure that weapon replacement does not cause unsuspected problems.) The codes must include improved physics and engineering models, since earlier approximations "tuned" to a specific region of parameter space, or set to produce agreement with certain data, will no longer be a sufficient basis for safety and reliability assurance now that nuclear tests are not available to check the code predictions. For example, more physically accurate models for high explosives initiation and burn, material behavior including spall and ejecta formation, and fluid hydrodynamic turbulence and instabilities are being developed through a combination of theoretical and experimental efforts, and techniques for modeling materials behavior simultaneously across many length and time scales are being developed.

Running predictive three-dimensional simulations with these more complete, accurate physics and engineering models, and with adequate spatial resolution to assess stockpile aging effects, in a reasonable time will require immense computational resources, because we will be handling about 100,000 times more information than today: roughly a billion zones, 100-teraOps peak operational speed, and 30 terabytes of computer memory. Were we to stagnate at today's speeds, the kind of problem we will need to run in several hours or days would require centuries to complete. Achieving 100-teraOps capability by 2004 implies that we must exceed the computer industry's rate of capability doubling every 18 months, known as "Moore's Law." If this is to occur, ASCI must drive it, in concert with the industry.
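The claim that 100 teraOps by 2004 outpaces Moore's Law can be checked with simple arithmetic. The following is a minimal back-of-envelope sketch in Python, assuming a 1-teraOps baseline in 1996 (the Sandia benchmark mentioned later in this article) and the 18-month doubling period cited above:

```python
# Does reaching 100 teraOps by 2004 outpace an 18-month doubling rate
# ("Moore's Law")?  Assumes a 1-teraOps starting point in 1996.

def moores_law_projection(base_ops, years, doubling_months=18):
    """Projected capability after `years`, doubling every `doubling_months`."""
    doublings = years * 12 / doubling_months
    return base_ops * 2 ** doublings

projected_2004 = moores_law_projection(base_ops=1.0, years=8)  # teraOps
target_2004 = 100.0  # teraOps required for stewardship simulations

print(f"Moore's-Law projection for 2004: ~{projected_2004:.0f} teraOps")
print(f"Required: {target_2004:.0f} teraOps "
      f"({target_2004 / projected_2004:.1f}x beyond the industry trend)")
```

Eight years at an 18-month doubling period give about 5.3 doublings, or roughly 40 teraOps; the 100-teraOps goal is thus a factor of two to three beyond the unaided industry trend, which is the sense in which ASCI must "drive" the industry.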

These requirements drive new hardware, software, and infrastructure capabilities being developed jointly by the DOE weapons laboratories, the computer industry, and a number of universities. New numerical methods and algorithms will have to be developed, as will "terascale" visualization tools to help analyze, interpret, and interact with these massive amounts of information. The laboratories must develop independent models and codes so that there can be effective peer review of weapon assessments, and each is examining different hydrodynamic, transport, and other modeling methods to evaluate their suitability and efficiency. For example, both structured and unstructured grid methods are being used, with alternative interprocessor communications strategies.

In working toward a 100-teraOps-level simulation capability, a series of hardware platforms is being developed and used in a phased-evolution approach, drawing on different industrial sources and technologies. One such platform (the "Option Red" Intel machine at Sandia National Laboratories) uses a massively parallel processor (MPP) architecture and achieved a speed of 1 teraflop on a benchmark problem in 1996. Achieving 100 teraOps effectively is expected to require a different architecture, known as shared-memory multiprocessor (SMP), built on clusters of processors. As we write this, the Blue Mountain machine by SGI/Cray is operational at 400 gigaOps at Los Alamos, while the Blue Pacific machine by IBM is operational at 920 gigaOps at Lawrence Livermore National Laboratory. Both are to achieve a sustained 3 teraOps in 1999 (plus a 1-teraOps companion to Blue Mountain for unclassified research), and 10- and 30-teraOps platforms are planned for 2000-01.

Peak operational speeds greater than 3 teraOps require software and hardware improvements for high-bandwidth communication between processor clusters. A high-performance communications corridor between the computer platform and the users is also needed, and access to these platforms from distant weapons laboratories requires high-speed encrypted networks. All are being pursued; for example, Sandia National Laboratories is leading a multilab-industry initiative to develop technology for secure gigabyte-per-second long-distance data transmission and communication.

We noted the engagement of the university community in ASCI. This takes two forms: individual collaborations on specific computer science and mathematics issues such as visualization or algorithms, and a set of "strategic alliances" that pursue unclassified, multi-disciplinary thrusts involving appropriate science and helping drive advances in computer or computational science. There are five such alliances: with Stanford, Caltech, the University of Chicago, the University of Utah, and the University of Illinois at Urbana-Champaign.

Beyond the record-breaking platform speeds, there are other notable markers from the ASCI effort so far. At Los Alamos, one code project has run a 3D hydrodynamic simulation with 60 million cells in parallel across 16 of the Blue Mountain SMP multiple-CPU boxes, more than 1,000 processors. Another has demonstrated parallel execution of codes on all three architectures (the Blue Mountain and Blue Pacific SMP machines as well as the Red MPP platform), with linear scaling to over 3,000 parallel processors on the Red platform. 3D calculations of nuclear one-point safety have been performed and cross-compared with a previous 3D code. A factor-of-16,000 speedup in Monte Carlo simulations has been achieved relative to a couple of years ago--about half because of computer speed and half from improvements in algorithms. And ASCI tools have already contributed significantly to the ongoing revalidation of a stockpile weapon system, and to designing and interpreting experiments.
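Because speedups from hardware and from algorithms compound multiplicatively rather than additively, "about half and half" of a factor of 16,000 means each source contributes roughly the square root of the total. A small illustrative calculation in Python (the equal-split factor is our reading of "about half," not a figure reported by the laboratories):

```python
import math

# Overall Monte Carlo speedup quoted in the text.
total_speedup = 16_000

# Hardware and algorithmic gains multiply, so an assumed equal split
# means each contributes ~sqrt(total), i.e. roughly 126x apiece.
hardware_factor = algorithm_factor = math.sqrt(total_speedup)

print(f"Each source contributes roughly {hardware_factor:.0f}x")
print(f"Combined: {hardware_factor * algorithm_factor:.0f}x")
```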

STEWARDSHIP AND OTHER COMPLEX PROBLEMS: CLIMATE PREDICTION. Stockpile stewardship is a challenge with a distinctive set of characteristics. It is a problem of extreme technical complexity. Decisions of high consequence (in this case, whether the stockpile is safe and reliable, whether repairs are needed, or even whether a nuclear test is necessary) must be made. How wisely they can be made depends in part on the quality of the complex technical assessments that inform them, since full-system experiments (nuclear tests) are not possible. The ability to perform more precise and predictive scientific simulations is therefore one key technical need. And there is some urgency to establish these capabilities, because both the stockpile and the experienced skill base are aging.

There are other problems of societal importance that share similar characteristics, including the inaccessibility of controlled full-system experiments, and the need for scientific simulations of great complexity. Global climate prediction, and the understanding of human-caused effects on climate, is an obvious example. There is general scientific consensus that continued unchecked growth of worldwide greenhouse emissions is unwise. Yet we still lack the capability to confidently assess the outcomes of specific human strategies on the climate, or the regional effects of possible climate change. It is apparent that simulation and computing capabilities comparable to those of ASCI are needed, and that synergism between these efforts will be valuable.

We make these comments guided by recent comparisons of global simulations with observation, and from a perspective of interaction with the climate-modeling community. Under the sponsorship of two DOE programs (the Climate Change Prediction and High-Performance Computing and Communications Programs), our Laboratory has applied massively parallel computation to high-resolution simulations of the global ocean. Our colleagues are engaged in many collaborations in the community, for example with the National Center for Atmospheric Research (NCAR) on the development of a coupled climate model based on NCAR's atmospheric and land-surface models and the Los Alamos Parallel Ocean Program (POP) ocean and sea-ice models.

Great strides are being made by the community, yet present-day coupled climate models are still constrained to insufficient resolution and to physical simplifications. Current global coupled climate models employ horizontal resolutions of about 3° with roughly 20 vertical levels in the atmosphere, and about 1° and 30 levels in the ocean. The resolution in the atmosphere (~300 km at the equator) is too coarse to evaluate regional climate effects, and the resolution in the ocean (~100 km) is too coarse to adequately resolve mesoscale eddies and western boundary currents such as the Gulf Stream. Even so, a century-long simulation with a model including component models for the atmosphere, ocean, sea ice, and land surface requires nearly a month of dedicated time on a Cray C90. Many such runs are needed to investigate various climate-forcing scenarios and parameter sensitivities.

A high-resolution simulation of the Atlantic Ocean recently completed at Los Alamos demonstrated, through comparison with measurements from the TOPEX/Poseidon satellite, that adequate resolution of mesoscale eddies and boundary currents requires roughly ten-fold finer grid spacing. The POP global-ocean model, using realistic bottom topography and observed surface winds, was run on a massively parallel CM-5 to extend such simulations to the highest resolution yet achieved. In these global simulations, while mesoscale eddies and boundary currents like the Gulf Stream are somewhat resolved, there are discrepancies between the sea-surface height variability simulated by the model and the satellite measurements.

Since higher resolution was not yet feasible on the global scale, POP was used to test the resolution requirements in a simulation limited to the Atlantic Ocean. Forty vertical levels were used; the horizontal grid spacing was 0.1° (11 km at the equator). At this resolution the behavior of the simulated Gulf Stream is much more nearly accurate: it separates from the coast at Cape Hatteras and includes a branch around the Grand Banks, both in agreement with observations. Furthermore, the energy spectrum of the mesoscale eddies is much better resolved, as indicated by the close quantitative agreement between the simulated and measured sea-surface height variability.

From this and similar results one can estimate the computational scale needed to run adequately resolved simulations and to complete a century-long integration in a month of computing. A global ocean simulation at 0.1° with 40 levels would require about 0.25 terabytes of memory and a dedicated 10-teraOps platform. Increasing the resolution in the atmosphere by a similar factor would provide a grid spacing of ~40 km, sufficient to evaluate regional climate effects (another important threshold), and would also require a dedicated 10-teraOps platform, with about a terabyte of memory, to do a century in a month. Incorporating a more comprehensive treatment of physical processes in the atmosphere and ocean would increase these requirements further, as will coupling the atmosphere and ocean.
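The ocean-model sizing arithmetic can be roughly reconstructed in Python. The per-cell byte count below is a hypothetical figure (on the order of a hundred 8-byte variables per grid cell) chosen to show how a ~0.25-terabyte total arises, and the full-sphere grid ignores land points, so this is an upper-bound sketch rather than the POP model's actual bookkeeping:

```python
# Rough reconstruction of the ocean-model sizing estimate:
# a global grid at 0.1-degree horizontal spacing with 40 vertical levels.
# BYTES_PER_CELL is an assumed figure (prognostic fields plus work arrays)
# chosen to illustrate how the quoted ~0.25 TB arises.

DEG_SPACING = 0.1      # horizontal grid spacing, degrees
VERT_LEVELS = 40       # vertical levels
BYTES_PER_CELL = 1000  # assumed storage per grid cell, bytes

lon_points = int(360 / DEG_SPACING)          # 3600 points in longitude
lat_points = int(180 / DEG_SPACING)          # 1800 points in latitude
cells = lon_points * lat_points * VERT_LEVELS

km_per_degree = 40_000 / 360                 # Earth's circumference / 360
print(f"Grid spacing at equator: ~{DEG_SPACING * km_per_degree:.0f} km")
print(f"Total cells: {cells:.2e}")
print(f"Memory: ~{cells * BYTES_PER_CELL / 1e12:.2f} TB")
```

The grid spacing works out to ~11 km at the equator, matching the figure quoted above, and about 260 million cells at ~1 kB each reproduces the quarter-terabyte memory estimate.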

It appears that an integrated global simulation at adequate resolution (regional in the atmosphere, resolving mesoscale eddies in the ocean) and with more comprehensive physical treatments will require ~40-teraOps platforms and ~20 terabytes of memory, a capability quite similar to that being sought under ASCI. A set of such runs to investigate various climate scenarios, model sensitivity, and natural variability would require about a year, bringing within reach ensembles of "what-if" century or multi-century simulations hypothesizing different futures of human activity. Such simulations could add greatly to the information and insight the scientific community can provide as society develops and fine-tunes its response to the crucial question of global change.

Furthermore, other issues that ASCI must address (terabyte-scale information flow and visualization, high-speed communication, model validation, and understanding issues of predictability) will be similarly faced by such climate simulations. Aspects of the accelerated advancement of scientific and technical capabilities under ASCI, initiated to address one major policy issue, can thus benefit the scientific community's approach to other complex issues.

"Stewardship of the Nuclear Stockpile Under a Comprehensive Test Ban Treaty," in the April 1997 issue of Physics and Society


DOE report "Enhanced Surveillance Program: FY97 accomplishments" issued as LA-13363-PR, October 1997

J. D. Mahlman, "Uncertainties in Projections of Human-Caused Climate Warming," Science 278, 1416 (November 21, 1997)

M.E. Maltrud, R.D. Smith, A.J. Semtner, and R.C. Malone, "Global Eddy-Resolving Ocean Simulations Driven by 1985-1994 Atmospheric Winds," accepted for publication in Journal of Geophysical Research - Oceans.

L.-L. Fu and R. D. Smith, "Global Ocean Circulation from Satellite Altimetry and High-Resolution Computer Simulation," Bull. Am. Meteor. Soc. 77 (1996) pp. 2625-2636

R. D. Smith, M. E. Maltrud, M. W. Hecht, and F. O. Bryan, "Numerical Simulation of the North Atlantic Ocean at 1/10°," in preparation

Philip D. Goldstone, Donald R. McCoy, and Andrew B. White are with the Los Alamos National Laboratory, Los Alamos NM 87545


Government Secrecy after the Cold War

Steven Aftergood

"Contrary to perhaps what is the most common belief about secrecy," Enrico Fermi once wrote, "secrecy was not started by generals, was not started by security officers, but was started by physicists."1

Actually, of course, there has always been a measure of secrecy in American government. Some secrecy may in fact be indispensable to the performance of certain government functions. But the physicists of the Manhattan Project helped create a secrecy system of unprecedented scope and impact, which eventually metastasized throughout the entire national security bureaucracy.

"One of the consequences of the depth and breadth of the active participation of many top U.S. academic scientists in this very secret wartime Project was that the subsequent peacetime control of scientific and technical information did not seem as unusual or unacceptable to those academic scientists as similar measures would have been prior to World War II."2

In the Cold War, some of the most fateful realms of government decision-making--matters of war and peace and life and death--were declared to be beyond democratic norms of public knowledge and debate. But as secrecy was bureaucratized, it also extended into more mundane areas. If scientists were partially responsible for the new secrecy system, they quickly came to rue the fact, as secrecy became increasingly burdensome.

"The story is told that in the days of the Manhattan District, a scientist was summoned to Washington and reprimanded for having mentioned in public a physical constant which was still secret. The accused fingered through the Smyth Report and pointed to a number there. 'Yes,' said the security officer, 'but that is in pounds per square inch, while you gave the figure in kilograms per square centimeter. Why make it easier for the Russians?'"3

Less amusingly, an internal security apparatus came to focus on scientists as a potential threat to the nation. The whole weight of the government's investigative bureaucracy was brought to bear on numerous individual scientists who, in the exercise of their constitutional freedom of expression, had caught the attention of wary security officers.

In a 1956 book called The Torment of Secrecy, sociologist Edward A. Shils wrote that "An official of the Federation of American Scientists, given to moderation in his judgments, estimates very tentatively that somewhere in the neighborhood of a thousand qualified scientists have encountered security difficulties."4 Similarly, Pugwash and other scientist-based organizations were singled out for official scrutiny, particularly during the 1950s and 1960s.

Throughout the decades of the Cold War, the secrecy system became ever more entrenched. By the 1980s, science and national security sometimes seemed to be on the verge of open conflict, as the Reagan Administration practiced an aggressive classification policy and even pressed for new limits on the dissemination of certain unclassified scientific information.

It is all the more remarkable, then, to observe that the entrenched secrecy bureaucracy has now been rolled back-- only partially, but to a real and measurable degree. The evidence that Cold War secrecy is in retreat includes the following:

- In the last two years, an almost unimaginable 400 million pages of historically valuable documents have been declassified, according to the Information Security Oversight Office.5 This represents a significant dent in the estimated backlog of 1.8 billion pages of 25-year-old documents awaiting declassification under President Clinton's Executive Order 12958.

- New declassification programs have been initiated in the most secretive corners of the national security bureaucracy, including the Central Intelligence Agency, the National Security Agency, and the National Reconnaissance Office (the very existence of which was officially acknowledged only in 1992).

- The size of the U.S. intelligence budget, an icon of secrecy for its own sake, was declassified last year for the first time in 50 years, in response to a Freedom of Information Act lawsuit brought by the Federation of American Scientists.6

- The creation of new secrets has reportedly "decreased to historic lows."7

- A broad-ranging Fundamental Classification Policy Review conducted by the Department of Energy (DOE) resulted in the declassification last year of some 70 categories of information previously restricted under the Atomic Energy Act. Since former Energy Secretary Hazel O'Leary undertook her "openness initiative" in 1993, DOE has declassified far more information than during the previous five decades combined.

- An unprecedented quantity of government information is now easily available on the Internet, including vast resources on military and intelligence structures, functions, organizations, budgets, and operations that previously would have been classified or very difficult to obtain.

All of this is impressive and quite novel. Nevertheless, from another point of view one could conclude that the glasnost is still half empty. Thus:

- The fact that hundreds of millions of pages have been declassified does not necessarily mean that they are now accessible to researchers. Many are, but many others must still undergo a painstaking screening and review to address privacy and other concerns. The continuing eruption of declassified documents has generally overwhelmed the ability of archivists to process them for public access.

- Many agencies are still complying half-heartedly-- or not at all-- with the declassification requirements of the President's executive order. The mandatory annual declassification quotas established by the President have not been met for the last couple of years by the Army, the CIA, and several other agencies. In the absence of effective oversight, there is no real incentive for compliance and no meaningful penalty for non-compliance.

- Most agencies are also not in compliance with the Electronic Freedom of Information Act, which requires them to post certain information about the agency