By Richard L. Wagner, Jr.
This discourse is about risk, and about how polities assess and manage risk through processes that are difficult and imperfect. How scientists assess and portray uncertainty in what we say about complex physical phenomena is also difficult and imperfect. To improve communication, the public and decision-makers must come to understand science better. For our part, scientists must work on our side of the gap by learning how to better assess and describe the basis for confidence in what we say, particularly about uncertainties.
The touchstone for confidence in science's statements about physical phenomena is experiment. Richard Feynman once said, "The test of all knowledge is experiment. Experiment is the sole judge of scientific 'truth'." But increasingly, for many phenomena at the intersection of science, public understanding, and policy decisions, it is impossible to do the definitive experiment(s). One could hardly validate climate models by inducing a deliberate change in climate. Indeed, the more complex the phenomenon, the more important and more difficult it is to design sufficient experimental validation. Controlled experiments can often be done for disaggregated pieces of a complex, integral phenomenon, but without a designed integral experiment, how the errors and uncertainties aggregate will remain in doubt, further complicating risk assessment and management.
This is both an epistemological problem of some depth and a practical one in terms of how science is applied, and developing approaches to it might be termed applied epistemology. Making progress will require sustained effort aimed expressly at this problem. Useful approaches are likely to focus on how those experiments that can be done relate to confidence or doubt about understanding and prediction of the overall, integrated phenomenon.
There are, of course, well-established fields of study in which observations and measurements can be made, but controlled, designed experiments on the entire phenomenon cannot be done, as is the case with many questions in astrophysics. Another set of complex physical phenomena is involved in the question of reliability of nuclear weapons in the absence of nuclear testing. How to assess and describe the uncertainties in the physics and engineering involved, and how to establish the basis of confidence in statements about those uncertainties, are questions that are getting increased attention at the Los Alamos and Livermore laboratories, as well as from scientists outside those facilities. There could be a mutually beneficial interaction between scientists thinking about how to assess confidence in statements about other complex physical phenomena and those who are thinking about this problem in terms of the weapons application.
Whether or not the nuclear weapons case proves illuminating, more attention should be focused expressly on developing better ways by which confidence and doubt, certainty and uncertainty, about complex physical phenomena can be assessed and conveyed, especially where definitive, designed and controlled experiments cannot be done. This should be a project for science in the coming decades.
I previously referred to climate change. As human activity increasingly impinges on the ecosphere, as it will, virtually every aspect of the physical and biological functioning of the ecosphere, including ourselves, is likely to become the subject of risk assessment and management. One hundred years ago, the ecosphere was essentially sovereign in its functioning. Now, we are trying to limit the impact of human activity on it. Despite our current efforts, 50 or 100 years from now, human activity may be sovereign, and a properly functioning ecosphere may be one that is engineered. (For many reasons, I personally find this worse than painful to contemplate. But it may be a reality we should prepare for, although doing so might make it a self-fulfilling prophecy.) If humankind must engineer the future ecosphere, it will be an imperative of truly historical proportions for science to be able to accurately assess uncertainty and convey the basis for confidence in those assessments.
Of course, scientists usually work very hard to assess the uncertainties in their predictions. The climate community is a fine example. Despite this, such assessments often turn out to have been wrong, and as the stakes of being wrong increase, as they will, perhaps we should look again at the fundamentals: at our basis for confidence in assessing uncertainty. Scientists deal with complex phenomena in many ways. The following conceptualization will, I believe, serve to illustrate the underlying problem of understanding and describing the connectedness of less-than-definitive experiment to assessments of uncertainty.
Science approaches many complex phenomena by building models, often large computational models. The structure of such models is often to disaggregate the overall integral phenomenon in question into less complex components, continuing this process until, at the finest level of detail, the individual phenomena—what scientists consider fundamental—are isolated and can be dealt with by well-established theory, the result of all the previous efforts in science.
This is the classic reductionist approach of science. But for the complex phenomena at issue here, the models must reaggregate the phenomena, and each of them, even those considered fundamental, has uncertainties associated with it. These uncertainties may be as simple and fundamental as experimental uncertainty in measuring the values of physical constants, but often they are much more complicated. As the model integrates them, the uncertainties concatenate in complex, nonlinear ways. In developing the model's strategies for disaggregation and reintegration, judgments are made—often on the basis of physical intuition—about how these nonlinearities work, and how much computational and experimental effort should be applied to each of the disaggregated phenomena.
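A toy illustration (entirely hypothetical numbers, not drawn from any real model) makes the point concrete. Suppose two disaggregated component quantities, each with its own measurement uncertainty, feed a nonlinear integral model. Monte Carlo sampling through the reaggregated model shows how the nonlinearity can both shift and broaden the integrated prediction relative to naive first-order error propagation:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical component measurements (illustrative values only):
a_mean, a_sigma = 1.0, 0.10   # component A: 10% relative uncertainty
b_mean, b_sigma = 2.0, 0.20   # component B: absolute uncertainty 0.2

def integral_model(a, b):
    # Stand-in for the reaggregated integral model: nonlinear in b.
    return a * math.exp(b)

# Monte Carlo propagation: sample the components, push each sample
# through the full model, and look at the resulting distribution.
samples = [integral_model(random.gauss(a_mean, a_sigma),
                          random.gauss(b_mean, b_sigma))
           for _ in range(100_000)]
mc_mean = statistics.fmean(samples)
mc_sigma = statistics.stdev(samples)

# First-order (linearized) propagation for comparison: valid only for
# small, independent errors and a locally linear model.
y0 = integral_model(a_mean, b_mean)
lin_sigma = y0 * math.sqrt((a_sigma / a_mean) ** 2 + b_sigma ** 2)

print(f"linearized:  y = {y0:.2f} +/- {lin_sigma:.2f}")
print(f"Monte Carlo: y = {mc_mean:.2f} +/- {mc_sigma:.2f}")
```

With these (made-up) inputs the sampled mean sits above the linearized prediction and the sampled spread exceeds the quadrature sum, precisely because the model is nonlinear in the uncertain component. For genuinely complex models the disparity can be far larger, and which components deserve the most experimental effort is itself a judgment call.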
These problems are not so bad when designed, controlled experiments can be done to measure many aspects of the phenomenon to validate the models. But when such experiments cannot be done, estimating uncertainty becomes very difficult. Often, designed experiments can be done at intermediate levels of aggregation, and measurements can almost always be made on aspects of the overall phenomenon. But insidious fudge factors can creep in, especially when the models being validated are used to interpret the measurements. An even more fundamental problem is captured in the old saw, "Nobody believes a calculation except the person who did it, and everybody believes an experiment except the person who did it."
Complexity theory offers a different conceptualization of how scientists deal with complex nonlinear phenomena. It deals head-on with the problems of reintegration of disaggregated phenomena by treating features of the overall phenomenon as emergent behavior. But the problem still remains of assessing uncertainty, and the basis for confidence in that assessment, especially absent definitive experiment. Using small-scale experiments raises problems of scaling and specification of boundary conditions.
Since the last U.S. nuclear test a decade ago, Los Alamos and Livermore have carried out a large program to strengthen the scientific underpinnings of the phenomena that occur during nuclear explosions. Supporting this work are large facilities for nonnuclear experiments, with more on the way, and computational power already in the tens of teraflops. Funding applied to this work has been several billion dollars. It may be the world's largest single current program in applied science focused on a particular set of complex phenomena.
In that program, the problem of assessing uncertainty, and the basis for confidence in those assessments, without the ability to do definitive experiments (i.e., nuclear tests), has much in common with the other kinds of complex phenomena I have described. But it differs in one useful way: nuclear tests were done, with extensive measurements of the phenomena involved, many times before the last U.S. nuclear test. And it may be useful in a more particular sense. A pivotal issue is whether the data from those nuclear test measurements are sufficient to validate the models with enough rigor to allow confident statements about the performance of weapons in configurations that are to some degree different—because of aging or remanufacture—from the configurations tested.
Within this program, structured approaches to assessing uncertainty are just beginning to emerge, and currently come under the heading "Quantification of Margins and Uncertainties" (QMU). The "margins" are between acceptable values for the performance and reliability parameters of various phenomena, and the predicted values. Defining those margins and how the various performance parameters relate to each other is not easy. Neither is quantifying uncertainties. We would like to be able to base statements about uncertainties directly on measurement, perhaps even on measurement error, but the relationship between some of the performance parameters and what has been and can be measured is unclear. This is in part because of the difficulty in scaling across wide ranges of size and energy density. The hardest part is structuring and interrelating the uncertainties, including those that can't be quantified. Thus far there has been little explicit attention to metrics and frameworks for these relationships between experiment and confidence or uncertainty. The whole program is a work in progress, and there is no guarantee of success.
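The basic QMU bookkeeping can be sketched schematically. In this sketch (all parameter names and numbers are hypothetical, invented for illustration), each performance parameter has a margin M, the distance between its predicted value and the nearest acceptable limit, and an aggregate uncertainty U in that prediction; confidence is then summarized by the ratio M/U, with ratios comfortably above one preferred:

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    """One performance parameter in a QMU-style assessment (schematic)."""
    name: str
    predicted: float     # model-predicted value
    lower_limit: float   # minimum acceptable value
    uncertainty: float   # aggregate uncertainty U in the prediction

    def margin(self) -> float:
        # M: distance from the prediction to the acceptable limit.
        return self.predicted - self.lower_limit

    def confidence_ratio(self) -> float:
        # M/U: the headline QMU summary statistic.
        return self.margin() / self.uncertainty

# Hypothetical, purely illustrative numbers:
params = [
    Parameter("parameter_A", predicted=1.30, lower_limit=1.00, uncertainty=0.10),
    Parameter("parameter_B", predicted=0.95, lower_limit=0.80, uncertainty=0.20),
]

for p in params:
    verdict = "adequate" if p.confidence_ratio() > 1.0 else "needs attention"
    print(f"{p.name}: M = {p.margin():.2f}, U = {p.uncertainty:.2f}, "
          f"M/U = {p.confidence_ratio():.2f} ({verdict})")
```

The arithmetic is trivial; the hard parts, as noted above, are everything the sketch hides: defining the limits, deciding how parameters couple to one another, and attaching a defensible U to each prediction, including contributions that resist quantification altogether.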
I do not see a clear way forward for addressing the problem I have posed and illustrated in this article. Perhaps it will not be solved, only improved. Bringing the emerging theories of complexity to bear on how models reaggregate phenomena might be one avenue of approach. Developing strategies for model development that allow models to be more amenable to experiments that check how the disaggregations are reintegrated might be another.
Still another might be developing structured methods by which uncertainties in the integrated phenomena are tied as directly as possible to those experiments that can be done. Ultimately, how much credence to place in the predictions of various models is based in the judgment of those who understand them, their relation to experiment, and the experiments themselves. If that judgment can be parsed out, and reduced as much as possible to judgments about the possibilities of systematic error in experiment, some progress will have been made.
It is not too sweeping to say that the scientific method cannot be fully applied in the cases of complex phenomena for which designed, controlled experiments cannot be done. And there are profound implications, for science and its applications, in the inability to use the scientific method in these matters.
The claims of science to "truth" are under attack in certain quarters. I think most scientists believe these attacks are either without basis or are based on a misunderstanding of scientific claims. But as the stakes rise, these attacks are likely to intensify, and they will hinder the ability of science to contribute. Developing, during the next decades, something like an extension of the scientific method that deals with confidence and uncertainty when definitive experiments cannot be done is a crucially important task for the scientific community to attempt.
Richard L. Wagner, Jr., is a senior staff member at Los Alamos National Laboratory.
©1995 - 2020, AMERICAN PHYSICAL SOCIETY