Science and Religion: An Evolutionary Perspective,

by Walter Isard.

Avebury Publishing Company, 407 pages, $99.95. ISBN 1-85972-475-2


Physicists live in the broader civil, cultural, and religious society as well as that of their profession. Hence, many have long been interested in the possibility of applying the concepts and methodology of physics to broader social questions, the more so since physics has usually been perceived as "successful" whereas the "sciences of society and culture" are perceived as much less so. Many of us feel that we are "needed" out there, that we bring distinctive means for helping to understand the human endeavor and hence further the possibility of its successful practice.

This book, written by an eminent economist, seeks to find commonalities of description and analysis in art, in the physical, biological, and social sciences, and in religion. Among these concepts are hierarchy, symmetry and symmetry breaking, determinism and indeterminism, chaos and order, dynamic and linguistic modes, entropy and self-organization, evolution and catalysis, genes and memes, etc. Isard ranges from the "Omega" of Teilhard de Chardin to the "implicate order" of David Bohm. Here is a book wherein the physicist can see unusual applications of his/her usual concepts: Hamiltonians to macroeconomics; phase transitions to welfare; force-field potentials to leadership influence; Master and Fokker-Planck equations to population distributions; etc. Very few readers will be familiar with all of the subjects and sources covered, and yet, for the most part, Isard manages to keep the sophisticated reader's interest throughout this wide-ranging discourse. In his search for common elements, he hopes that the diversity of applications will lead to deeper understanding of these elements and hence, as a feedback loop (another common element), to still deeper and more useful applications.

Books with such diverse yet deep coverage, and addressed to general audiences, are becoming more common. For example, I recently came across: Thinking in Complexity: The Complex Dynamics of Matter, Mind, and Mankind by Klaus Mainzer, and Chaos in Discrete Dynamical Systems: A Visual Introduction in 2 Dimensions by Ralph Abraham, Laura Gardini, and Christian Mira. These books raise a number of questions for those of us interested in the mutual interactions and well-being of science and society. They cover more diverse physics than is in the working armory of most physicists, let alone non-physicists. Who are these books aimed at? What are their goals, and how realistic are they? Do they succeed? The two books just mentioned are addressed to scientists and technologists, whereas Isard is addressing intellectuals more generally.

Most readers will be familiar with only a subset of Isard's problems and concepts. Isard attempts to explain some of the concepts, but just quotes descriptive phrases for some of the others. I have grave doubts as to whether the explanations are sufficient for novices in these fields. He covers an enormous vocabulary. But is it useful (except perhaps for cocktail party conversation) to acquire vocabulary without a real understanding of the concepts and their usage? And can a single book, even of 400 well-written pages, initiate novices into so many mysteries?

For example, what gain in understanding is made by describing the sudden flowering of classical art in 5th century BC Greece as a "Prigogine jump to another far-from-equilibrium state" (p. 217)? Or by describing the short-lived Athenian development of perspective painting as "a non-amplified perturbation" (p. 218)? Even granting a familiarity with the arts in question (there are no illustrations!), one would have to have some notion (which I still don't have) of what an "equilibrium art state" meant before perturbation--large or small--of that state communicated any real insight. Equilibria, and their disturbances, imply forces to the physicist. What are the artistic analogues of these interactions? And what is the point of talking about a "Prigogine jump" from community to individualism (p. 290), when the relevant equilibrium state is not defined? Why talk about symmetry breaking in religion (p. 293) when the underlying symmetry is not evident? In general, what is the gain in transferring the vocabulary of one field to a description of the problems of another field unless an appropriate basic theoretical foundation is first constructed? And can many such structures be erected in a finite practical book? In other words, why should the application of physics ideas to the areas of art, religion, society, etc., be any easier or quicker than their application to the area of physics?

There are also cases where presumed commonalities are pushed so far as to be used erroneously or misleadingly. For example (p. 346), in the High Middle Ages, changes in agricultural technology furthered the growth of monasteries in Western Europe, which in turn contributed to further advances in agriculture. Isard says "The advances were cross-catalytic." But, by the usual definition, a catalyst is not changed by the process of catalysis, whereas the elements in Isard's cross catalysis were certainly changing each other. If it is desirable to use a physico-mathematical concept, rather than the usual historical term of mutual influence, the appropriate phrase would be "feedback loop."

This is not to say that such books are of no use. Isard demonstrates the power of cross-fertilization of fields by comparing the development of the idea of the gene as the self-perpetuating unit in biological evolution with that of the "meme" as the self-replicating unit of cultural evolution. His discussion of the co-evolution of genes and memes is clear and interesting and certainly displays the commonalities of the biological and cultural sciences. And there are many historical-political insights which are quite valuable, whether or not they demonstrate commonalities with the natural sciences. For example: "a hierarchical structure becomes composed of more and more levels, impersonal relations more and more replace personal ones, and more and more possibilities become available for distorting the flow of information and for allowing corruption to take place and incompetents to hold office" (p. 344).

In addition to the massive melange of ideas, relationships, and historical-social-physical-biological "facts," the book contains ample references and notes, sufficient to allow an interested reader to probe further into any of the concepts introduced by Isard, and perhaps to transcend the line between vocabulary and knowledge. And the sketchy discussions in the book are usually sufficient to raise interest. Perhaps that's all that can be expected in a reasonably sized book, and this one fits the expectations better than most.

Alvin M. Saperstein

Wayne State University



Particles in Our Air: Concentrations and Health Effects

Richard Wilson and John Spengler, editors

Harvard University Press, Cambridge, MA, 1996, ISBN 0-674-24077-4


This book is an excellent summary of the epidemiological case for the increased stringency of the U.S. Environmental Protection Agency's (USEPA) new particulate standard for fine particles less than 2.5 microns in aerodynamic diameter, called PM2.5. While appropriately referenced to the original literature, the book is written for a more general, but scientifically literate, audience, to present the case for the more stringent PM standard widely. The case was evidently successful: in 1997 the EPA promulgated a very stringent standard in response to the type of data included in this volume. However, the book is clearly biased, as it excludes all of the authors, and much of the literature, who question the epidemiological point of view, and it draws its conclusions almost exclusively from the epidemiological evidence, which by its nature can only draw associations and cannot establish a cause-and-effect relationship. Likewise, the book goes outside the epidemiological evidence to recommend cost-benefit control strategies, which are clearly beyond the expertise of the authors.

When reading this book, consider whether damaging health effects are more likely to be caused by specific chemical effects involving chemicals such as sulfate, or whether all particles of a certain size act in the same fashion, regardless of their chemical composition, to produce mortality.

Wilson's introduction presents the concept of harvesting and notes that several authors have found significant negative correlations consistent with the harvesting hypothesis, but those authors' viewpoints are not included in this book. Wilson notes that when several air pollution variables are collinear, epidemiological studies by themselves cannot distinguish which is the causative agent. The book adds animal data in Chapter 5 and human experimental studies in Chapter 8 to establish the biological plausibility which has been lacking in previous treatments of the subject.

Chapter 2 is a qualitative description of air pollution monitoring which is largely irrelevant to the theme of the book. It does not provide information regarding the extent of monitoring or the efficacy of central stations in measuring personal exposure.

Chapter 3 presents an emissions inventory table with no units, rendering the information almost useless. Contributions of different PM sources are discussed and documented, and differences in chemical composition of fine particles in selected sections of the United States are presented, but hard data on PM2.5 concentrations are lacking. The Gaussian plume equations included are not used, and imply more sophistication than is actually present in this text.

Chapter 4 addresses the crucial question of whether central outdoor ambient air quality monitors are an adequate surrogate measure of individual personal exposure to particulate matter. Since the USEPA has determined that on average, people spend over 80 percent of their time indoors, and since highly individual activities greatly influence indoor personal exposure, can a central monitor correlate with the true cause of health effects, if those health effects are caused by personal exposure? While the authors, Ozkaynak and Spengler, note that small particles have a large potential to penetrate from outdoors to indoors, central station outdoor measurements at current ambient levels are not strongly associated with personal exposures to particulates. Nonetheless, the authors conclude that epidemiology demonstrates a significant positive effect of centrally measured particulate concentrations on public health. It would seem that we must instead conclude that something correlated with centrally measured particulate concentrations causes the health effects.

Chapter 5 should be a crucial link between the associative inferences of epidemiology and the establishment of causation of public health effects. The right studies would establish biological plausibility. Unfortunately, this chapter turns to issues of specific chemical toxicity, focusing primarily on sulfate particles, rather than to studies dealing solely with particulate matter at or near ambient concentrations, though it does make the emphatic point that extrapolation from results obtained at high concentrations is not appropriate.

Chapters 6 and 7 discuss the epidemiological evidence that definitely demonstrates an association between particulate matter, normally measured as PM10, and various categories of mortality and morbidity. These studies do not support the finding of a threshold, below which health effects do not occur, a concept that underlies the fundamental approach of the Clean Air Act. The CAA requires the setting of standards for public health with a safety margin below the hypothetical threshold, and hence this observation, if correct and if causative, presents a serious problem for our control strategies. Time series studies, which use acute health indicators, avoid some of the confounders, such as smoking and urban lifestyle, to which chronic geographical cross-sectional studies may be subject.

Utell and Samet conclude in Chapter 8 that "neither clinical experience nor review of the literature identify a direct pathophysiological mechanism that can explain the relationship between inhaled particulate and mortality."

Chapter 9 presents a probabilistic model of loss of life expectancy, which is interesting, but does not add to the causality argument.

Chapter 10 is an outstanding summary both of the book and of the national debate which preceded EPA's action and which continues today. There is little doubt that there is an association of mortality and morbidity rates with particulate matter, but the scarcity of data on PM2.5 and closely associated sulfate does not allow a causative relation to be established, despite the authors' claims. Existing data on animals appear to suggest sulfate, or some other agent in the complex combustion products that make up fine particles, as the cause, but the authors note that there are insufficient data to investigate this obvious confounding factor. The authors also note that the acute time series studies, which measure almost immediate harvesting, and the chronic studies of reduction of life expectancy in years give the same coefficients, raising the question of whether the acute and chronic studies are actually measuring an association with the same phenomenon. One hopes that statements such as this can open the door to a learned discussion of other possible causes, overcoming the epidemiological fixation on PM2.5. In attempting to move from association to causality, the authors review Hill's attributes. They admit that a strength of association with risk ratios only slightly above unity is not normally satisfactory. They conclude that the association is consistent, but their arguments hold only if sulfates and PM2.5 are identical. The temporality criterion seems to be met, since all studies show that disease follows exposure, and certainly the dose-response attribute is met, since the effect increases with increasing dose.

Hill's biological plausibility attribute is the most difficult attribute, and this book does not add evidence to suggest that this attribute has been met. The specificity of the association is fairly focused, but the reliance on cardiovascular elements is one of the continuing lines that divide experts in the air pollution field. Because animal data does not contribute to our understanding as yet, we cannot accept the coherence attribute. In sum, we are no nearer to unanimous fulfillment of Hill's criteria as a result of this book.

Finally, the authors go beyond their title and expertise to attempt a cost-benefit and technology assessment in two pages, using the full cost of a life rather than a marginal benefit consistent with harvesting. The distorted results suggest abundant resources for abatement, which the authors advocate despite a high probability that the wrong control target might be chosen.

In summary, this book should not be read alone. To balance its perspective, the reader might also read the critical review, "Ambient Particles and Health: Lines that Divide," by S. Vedal, Journal of the Air and Waste Management Association 47, 551-581 (1997).


Ralph H. Kummler

Director, Hazardous Waste Management Program

Associate Dean for Research

College of Engineering, Wayne State University

Detroit, MI 48202



Article Reviews from Science, by John Michael Williams


Are We Seeing Global Warming?

K. Hasselmann, Science, 9 May 1997, 914-915


The summary graph and references in this "Science Perspective" tell the story: a visible upward trend in global temperature of about 0.5 °C over the past century can now be discerned. The article mentions some of the data analysis justifying this conclusion, which reverses a 1990 Intergovernmental Panel on Climate Change report. Although Hasselmann leans toward attributing the trend to civilization, he points out that the trend alone cannot justify attribution of a cause. The article does not describe long-term data, such as the "Little Ice Age" of the late Middle Ages--which might confound the issue of human vs. nonhuman influence.



Maximum and Minimum Temperature Trends for the Globe

D. R. Easterling, et al, Science, 18 July 1997, 364-366


These authors summarize the increasingly accurate air-temperature data available since about 1950 at about 1500 monitoring stations worldwide. They present results showing a rate of increase in nighttime minima averaging about 1.8 °C per 100 years, and a corresponding increase in daytime maxima of about 0.9 °C. So these measures of global warming show a decreasing difference in temperature between night and day, with most of the change at night. The authors suggest that increasing cloudiness may help explain this phenomenon.
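The narrowing day-night difference implied by these trends follows from simple arithmetic; the following sketch uses the approximate figures quoted above:

```python
# Approximate warming trends reported by Easterling et al. (deg C per century).
night_min_trend = 1.8  # trend in nighttime minimum temperatures
day_max_trend = 0.9    # trend in daytime maximum temperatures

# The diurnal temperature range (daytime max minus nighttime min)
# changes at the difference of the two trends.
dtr_trend = day_max_trend - night_min_trend
print(f"Diurnal range trend: {dtr_trend:+.1f} deg C per century")
# prints: Diurnal range trend: -0.9 deg C per century
```

That is, the day-night spread narrows by roughly 0.9 °C per century under these figures, which is what the authors mean by most of the warming occurring at night.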



Millennial Climate Oscillations

Delia Oppo, Science, 14 November 1997, 1244-1246


This perspective on paleoclimatology opens with the statement that it is the Milankovitch cycle of the Earth's orbit that causes long-term climatic changes including ice ages. The author then raises the question of how to explain the too-rapid, too-frequent millennial changes seen in the fossil record.

The author claims that each millennial warming trend is preceded by a 5 to 8 °C warming over Greenland, which is followed by some millennia of relative global warmth and then a sudden drop back to global cold. After discussing the frequency of ice-rafted rock fragments, which is said to represent the rate of iceberg-ice transport, the author concludes that millennial global temperature changes, including the Little Ice Age ending around 1900, depend mainly on deep-water oceanic convective currents. The suggestion is that the atmosphere-ocean system entails temperature changes amplified by ice-sheet extent.

In this article, the author does not state whether the global-warming heat flow is supposed to be driven by the Greenland prewarming, or whether the Greenland temperatures merely represent a lower average thermal inertia in that region. Clearly, the heat capacity of the system would be of greatest importance in deciding what the effect might be of the additional atmospheric heat from an increase in greenhouse gases caused by civilization.



Arctic Environmental Change of the Last Four Centuries

J. Overpeck, et al, Science, 14 November 1997, 1251-1256


This is a must-read paper for anyone interested in the science of the recent global warming. The 18 authors review data from all available sources to conclude that major factors in the end of the Little Ice Age around 1900 were decreased volcanic activity and increased insolation--the latter more because of increased irradiance than greenhouse trapping. This reviewer also would have hoped for an astrophysical observation of a solar cycle--perhaps one correlated with the purportedly "missing" solar neutrino flux of the current day.

The authors make the point that the reflectivity of surface areas covered by snow or ice provides an amplification mechanism for millennial-scale global temperature changes. Regardless of the weight of human activity in the input, the current warming trend will continue at a rate difficult to anticipate.



Thermohaline Circulation, the Achilles Heel of Our Climate System: Will Man-Made CO2 Upset the Current Balance?

Wallace S. Broecker, Science, 28 November 1997, 1582-1588.


The precession of Earth's axis of rotation, along with secular changes in its orbit about the Sun, define the Milankovitch frequency components, the shortest having a period of something over 20,000 years. But recent ice cores and other data seem to show drastic global climatic changes over much shorter, millennial-scale, periods.

Water vapor is described by the author as the dominant greenhouse gas; liquid water increases in density with decreasing temperature or with increasing salinity.

The author postulates a deep-sea circulatory pattern with two dense-descent regions, one along the Antarctic shelf, and the other in the North Atlantic. Using the duo as a mechanism of instability, the author presents evidence that past interludes of global warming have stalled this thermohaline circulation merely because of superficial temperature increases. The pattern includes the Gulf Stream which, if stalled, would allow extreme weather patterns jeopardizing current food production in or near Europe.

The author presents the deep-water factor as a fundamental weakness of current climate models, several of which he describes. He points out that a lack of accurate, very long-term data makes it difficult to separate civilization from other causes. Although he cannot offer a prediction, the author advocates reduction of civilization's CO2 emissions as a precaution. He suggests delivery of energy in noncarbonic forms, such as hydrogen fuel cells, whose use by the end user would produce only water vapor as a waste product.


Sensitivity of Boreal Forest Carbon Balance to Soil Thaw

M. L. Goulden, et al, Science, 9 January 1998, 214-217.


The twelve coauthors of this report used a variety of instruments to measure the carbon flux (as organic material and oxide) from a black spruce forest in Canada. The usually-frozen soil contained an average of about 150 metric tons of carbon per hectare, as compared with about 50 tons in wood. Extrapolated world-wide, such boreal soils contain around 3 x 10^11 tons. If oxidized entirely, the total could increase atmospheric CO2 by some 50%.
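The worldwide extrapolation can be checked roughly. In the sketch below, the boreal soil area (~2 x 10^9 hectares) and the atmospheric carbon stock (~750 Gt C) are this reviewer's assumed round numbers for illustration, not figures from the paper:

```python
# Rough consistency check of the worldwide extrapolation quoted above.
soil_carbon_per_ha = 150.0   # metric tons C per hectare (from the report)

# Assumed: roughly 2e9 hectares of boreal soils worldwide (not from the paper).
boreal_area_ha = 2.0e9
total_soil_carbon = soil_carbon_per_ha * boreal_area_ha

# Assumed: atmospheric carbon stock of roughly 750 Gt C (not from the paper).
atmospheric_carbon_tons = 7.5e11
fraction = total_soil_carbon / atmospheric_carbon_tons

print(f"Total boreal soil carbon: {total_soil_carbon:.1e} tons")
print(f"Fraction of atmospheric stock: {fraction:.0%}")
```

With these round numbers the soil reservoir comes out at the quoted 3 x 10^11 tons and at a substantial fraction of the atmospheric stock, roughly consistent with the "some 50%" figure in the report.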

Monitored from 1994 to 1997, the soil was found to have lost an average of about 0.3 +/- 0.5 ton of carbon per hectare per year. This reviewer is reminded of peat bogs which may support underground spontaneous combustion when drained. The authors attribute the average soil carbon loss to ongoing global warming.

The authors do not discuss the mechanism of carbon oxidation in the soil during the brief summer thaws; presumably it is due to fungal activity. It is unclear to this reviewer how one might predict the shift in equilibrium of the carbon balance, should increasingly warmer summers cause an adaptive change in the oxidative mechanism. In any case, as the authors conclude, global warming would be expected to increase the rate of soil carbon oxidation and release into the atmosphere.


John Michael Williams

P. O. Box 2697

Redwood City, CA 94064


