Dedicated Supercomputers Probe QCD Theory
By Ernie Tretkoff
In order to meet the enormous computing power requirements of Quantum Chromodynamics (QCD), the theory of strong interactions, researchers have built supercomputers specifically for that purpose. Experiments at national accelerators and laboratories need these calculations. "Many of these tests actually require this numerical work," said Norman Christ of Columbia University, who has led the development of special-purpose computers. "It's a critical part of the international experimental program."
At the moment, however, researchers lack the computing power to bring the precision of the calculations up to that of the experiments. "We're just beginning to be able to make accurate calculations," said Robert Sugar of the University of California, Santa Barbara.
To compute the quantities of QCD, physicists use lattice gauge theory, which represents fields on a four-dimensional space-time grid, or lattice. A large lattice with closely spaced points provides a good approximation to continuous space. The technique, invented in the US by Ken Wilson in 1974, remains the only known way to calculate some values.
Several characteristics of lattice calculations make them simpler than other large problems. For instance, researchers can easily divide the uniform grid evenly among the processors of a parallel computer, in such a way that individual processors rarely need to trade information. Also, lattice calculations don't require much input and output, and compared to other calculations, need relatively little memory.
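The parallelism argument above rests on a surface-to-volume effect: each processor owns a block of the lattice and only has to exchange the sites on that block's boundary with its neighbors. A toy Python sketch (illustrative only, with made-up sub-lattice sizes, not actual QCDOC code) shows how the communicating fraction shrinks as each processor's local volume grows:

```python
# Toy illustration of why a uniform 4-D lattice parallelizes well.
# Each processor holds a local sub-lattice and only communicates the
# sites on its surface, so the ratio of communication to computation
# falls as the local volume grows.

def surface_to_volume(local_extent, dims=4):
    """Fraction of a processor's sites that sit on its boundary."""
    volume = local_extent ** dims
    interior = max(local_extent - 2, 0) ** dims
    return (volume - interior) / volume

for n in (4, 8, 16):
    print(f"local {n}^4 sub-lattice: "
          f"{surface_to_volume(n):.0%} of sites need neighbor data")
```

Because the work per processor scales with the four-dimensional volume while the data traded with neighbors scales only with the three-dimensional surface, larger local blocks spend proportionally less time communicating.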
Since 1982, Christ's group at Columbia has been taking advantage of these simplifications to build special purpose parallel computers that cost much less than general-purpose machines. The most recent version is QCDOC, for Quantum Chromodynamics On a Chip, developed in collaboration with IBM. Each chip contains a 500 MHz 440 PowerPC processor core with a 1 Gigaflops, 64-bit floating point unit, and each of these nodes is connected to six others.
In November, Christ's group tested a prototype version of QCDOC with 128 processors. With 2,000 processors, the machines will be capable of sustaining a teraflop—one trillion arithmetic operations per second. Machines with 10,000 nodes, which Christ hopes to implement by summer 2004, should be able to sustain five teraflops, with a peak speed of 10 teraflops. A variety of calculations should work well on the QCDOC machines.
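The figures quoted above imply roughly 50% sustained efficiency on lattice QCD code. A back-of-envelope sketch (assuming, per the chip description, a 1-gigaflops peak per node; the 50% efficiency is inferred from the article's numbers, not stated by the designers):

```python
# Back-of-envelope check of the quoted QCDOC speeds.
# Assumption: each node peaks at 1 gigaflops, and lattice QCD code
# sustains about half of peak.

PEAK_PER_NODE_GFLOPS = 1.0  # one 64-bit floating point unit per node

def machine_speed(nodes, efficiency=0.5):
    """Return (peak, sustained) speed in teraflops for a node count."""
    peak_tflops = nodes * PEAK_PER_NODE_GFLOPS / 1000
    return peak_tflops, peak_tflops * efficiency

for nodes in (2_000, 10_000):
    peak, sustained = machine_speed(nodes)
    print(f"{nodes:>6} nodes: {peak:.0f} Tflops peak, "
          f"{sustained:.0f} Tflops sustained")
```

This reproduces the quoted numbers: 2,000 nodes sustaining a teraflop, and 10,000 nodes sustaining five teraflops with a 10-teraflop peak.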
The first involves the properties of weak decays of strongly interacting particles. Experiments at SLAC and KEK B factories, for instance, are trying to measure some of these properties, and such efforts require theoretical calculations of the effects of strong interactions. Lattice calculations will also provide precise values for other standard model parameters, such as quark masses and the strong coupling constant, and may help find physics beyond the standard model.
A second area of great interest is the quark-gluon plasma. Though quarks and gluons are normally bound up in other particles, many scientists believe that at sufficiently high temperatures or densities there will be a transition to a quark-gluon plasma. Experiments at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory have aimed to create such a state. A quark-gluon plasma probably existed in the early universe, and may exist in neutron stars. Lattice calculations are underway to determine the properties of this plasma.
Third, lattice gauge theorists want to better understand the internal structure and interactions of hadrons, such as protons and neutrons. Calculations can show their distribution of quarks, magnetic moment, and other properties.
The QCDOC machines may not even be limited to QCD calculations, according to James Glimm, of the State University of New York at Stony Brook. "We are targeting specifically molecular dynamics with long range forces, with bio-applications in mind," he said. "It would also be good for atomistic equation of state studies, and probably a number of other problems." In fact, the computers would be good for any "problems for which the amount of data is significantly smaller than the amount of work done on this data."
IBM has based its new supercomputer, Blue Gene/L, on the QCDOC model. The machine, which is still in development, will reach a peak speed of 360 teraflops. A prototype version was tested in November. Blue Gene/L was designed with protein-folding problems in mind, but will be capable of many other calculations. The first version, expected to be complete in 2005, will be placed at Lawrence Livermore National Laboratory, where, according to Glimm, they regard it as nearly general-purpose.
Although these special-purpose computers may be a very cost-effective way of getting the needed computing power for these calculations, scientists still get a lot of use out of general-purpose machines. One approach is to use clusters of general-purpose workstations, possibly optimizing the clusters for these particular applications.
QCDOC may have the speed and cost advantage now, but the clusters could catch up as commercial machines get better and cheaper. Steven Gottlieb, a lattice gauge theorist at Indiana University who uses a variety of supercomputers and clusters for his own calculations, estimates the cost of clusters could soon drop to several dollars per megaflop, making them competitive with QCDOC machines at one dollar per megaflop.
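The cost comparison above is easier to see with concrete numbers. A rough sketch using the estimates quoted (the $1 and $3 per-megaflop figures are the article's and Gottlieb's; the machine size is an arbitrary example):

```python
# Illustrative cost comparison using the per-megaflop estimates above.

MFLOPS_PER_TFLOP = 1_000_000

def machine_cost(tflops, dollars_per_mflop):
    """Price of a machine sustaining `tflops`, at a given $/megaflop."""
    return tflops * MFLOPS_PER_TFLOP * dollars_per_mflop

qcdoc = machine_cost(1, 1.0)    # QCDOC at roughly $1 per megaflop
cluster = machine_cost(1, 3.0)  # a cluster at "several" dollars
print(f"1 Tflop sustained: QCDOC ~${qcdoc/1e6:.0f}M, "
      f"cluster ~${cluster/1e6:.0f}M")
```

At those rates a sustained teraflop costs about $1 million on QCDOC versus a few million on clusters, which is why falling commodity prices matter to the comparison.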
Also, using the cluster approach, the lattice gauge community can take advantage of power increases and cost decreases in commercial machines, without having to spend money and time developing machines themselves. QCDOC took years to design, and even Christ admits that it's been hard for him to find time to do physics while building the computers, though he said he expects to do more physics soon. "At the moment we're up to our eyeballs in constructing this machine. We'll soon transition to doing what we think will be very exciting science," he said.
Gottlieb agrees that the QCDOC machine could be tremendously powerful, but points out that some types of calculations, such as those involving Fourier transforms, wouldn't work well on it. QCDOC would also have trouble with sparse matrices and implicitly solved partial differential equations, such as diffusion-type problems, said Glimm. Even for problems that QCDOC could solve well, said Gottlieb, "the expertise needed to program the machine very efficiently is not widespread."
"Probably the right answer is to have a mix of the two approaches," said Sugar, who has organized a large part of the US lattice gauge community—about 150 researchers—behind efforts to increase both computing power and software development for lattice calculations. The group, which will work on both QCDOC and cluster approaches, has received a grant from the Department of Energy under SciDAC (Scientific Discovery through Advanced Computing). The SciDAC collaboration will also focus on developing efficient, easy-to-use software for both platforms. Sugar pointed out that improved algorithms are often as important as increased computing power.
In fact, many calculations don't even require supercomputers at all, especially as desktop workstations increase in speed, points out Richard Haymaker, a lattice gauge theorist at Louisiana State University. "A tremendous amount can be done on just workstations," he said. Though large groups and powerful computers may have gained the spotlight, many lattice gauge theorists still work on individual machines and produce useful results. Most of these calculations are not the sort that can be directly compared to experimental results, he added, but they do provide a lot of insight into the theory. "There's a whole spectrum of needs for both small and large-scale calculations," said Haymaker.
It's often hard to tell how much computing power is needed for a particular problem. "I've often said the last 20 years of my life are testimony to my inability to estimate how much time it would take to solve this problem," said Gottlieb. In the early 1980s, people were happy with a megaflop. Several years ago, Gottlieb remembers thinking, "If I only had 10 gigaflop-years, I could clear this up." He now estimates that a sustained 10 teraflops would soon yield substantial progress.
Though QCDOC looks very cost-effective, the US lattice gauge community needs more funding to buy the machines in order to remain a leader in the field. "This is an opportunity we're about to throw out the window," said Fred Cooper, of the National Science Foundation, when asked whether the NSF could help fund supercomputers for lattice gauge theory.
As the US scientists await funding, a group in the UK and a Japanese group have already invested in QCDOC machines. "It would be odd if there weren't one for the US community," said Sugar. The Columbia group is not the only one working on special purpose computers for QCD. An Italian collaboration, called APE, is also building special purpose computers for lattice QCD.
Japan has traditionally invested a lot of money in supercomputing, and is now home to the most powerful computer in the world, the Earth Simulator, a climate-modeling system that runs at over 35 teraflops and cost an estimated $250 million.
Though the US lattice gauge community isn't asking for a $250 million machine, they acknowledge that the US will have to invest money in lattice gauge computing in order to stay a leader in the field. "If we don't do anything, we will fall behind," said Sugar, "but I find it hard to believe that will happen."
©1995 - 2015, AMERICAN PHYSICAL SOCIETY
APS encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated or changed.
Associate Editor: Jennifer Ouellette