M.D. Lowenthal, Past-Chair of FPS
Stuart Russell (UC Berkeley) said that artificial intelligence (AI) is about making computers intelligent, which means making them do the right thing, which means maximizing expected utility. Why? AI developers usually say that we do it for its own value, that more is better, and that there are no limits. Recently there have been rapid advances in deep learning for speech, vision, and reinforcement learning; universal probability languages enable work across fields; and long-term, hierarchically structured behavior addresses the billions of operations that complex tasks require. We can no longer compete with machines in chess, poker, and 29 Atari video games such as Space Invaders (learned by a computer from raw image data alone in a few hours). These methods have been applied successfully to nuclear explosion detection and identification. What if we do succeed? I.J. Good said that “the first ultraintelligent machine is the last invention humankind need ever make.” An intelligence explosion takes place if AI itself can do AI research. It is crucial to get the objectives, values, and constraints right, or the AI system will do incredibly well at solving the wrong or incomplete problem. A responsible approach would be to do research not simply in AI but in provably beneficial AI. One could try to seal the system away or limit its function to answering questions. Stepwise progress would take an adversarial approach, with functional AI and superintelligent verifiers. The bottom line is that we need to give this thought now, before the technology gets ahead of our ability to manage it.
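Russell's framing — an intelligent agent is one that chooses the action maximizing expected utility over a probabilistic model of outcomes — can be illustrated with a minimal sketch. The model, action names, and numbers below are hypothetical, chosen only to show the decision rule:

```python
# Minimal sketch of expected-utility maximization (illustrative; the model
# and numbers are made up, not from the talk).

def expected_utility(action, outcomes):
    """outcomes: list of (probability, utility) pairs for this action."""
    return sum(p * u for p, u in outcomes)

def choose_action(model):
    """model: dict mapping each action to its (probability, utility) pairs."""
    return max(model, key=lambda a: expected_utility(a, model[a]))

# Hypothetical model: 'safe' yields utility 1 for certain; 'risky' yields
# 10 with probability 0.2 and 0 otherwise (expected utility 2.0).
model = {
    "safe":  [(1.0, 1.0)],
    "risky": [(0.2, 10.0), (0.8, 0.0)],
}
print(choose_action(model))  # -> risky
```

The agent picks "risky" because its expected utility (2.0) exceeds that of "safe" (1.0) — which is exactly why getting the utility function right matters: the rule optimizes whatever it is given, not what we meant.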
Guruduth Banavar of IBM argued that developments in AI are going to be driven by social and economic values. The systems are made to meet needs, and a business case must be made for each development. There have been major advances because of huge computing power, vast amounts of data, and more powerful algorithms. For example, speech recognition error rates are approaching the range of human-level performance. Watson’s victory on Jeopardy! led to a whole new line of business at IBM on cognitive computing, based on requests from the health care, finance, travel, and other sectors. IBM has developed debating technologies that examine articles (newspaper, review, and research), identify relevant claims, and assess likely useful arguments; in minutes, a computer extracts key arguments from four million articles. Similarly, computers can conduct image analysis and anomaly detection, such as finding a tumor in an ultrasound image, leading to a differential diagnosis and treatment recommendations. In short, Banavar said that we can create cognitive environments that enable people to be more effective.
Gill Pratt (DARPA) discussed the implications of cognitive computing and robotics for future defense systems. He began by noting that computers are capable of pattern matching and extrapolation for prediction, but asked whether that constitutes thought. He argued that experts are modeling the brain, but we still know little about how it works. There has been progress on some of the challenges for which AI and robotics are well suited, but others remain stubbornly resistant. Mobility of bipedal robots, for example, has improved to the point where a blind robot can have better balance than a human and can traverse complex terrain quickly, without vision, better than humans can. Autonomy has improved, but the capability is brittle and not adaptable. Looking at end uses, improvised explosive devices (IEDs) in Iraq and Afghanistan remain essentially as effective and harmful as they were 10 years ago, despite huge investment in robotics and sensors, because relatively simple improvements to the IEDs have kept pace. At a more fundamental level, energy efficiency is a major challenge: human walking is very efficient (if the terrain were flat, a human could walk from the East Coast to the West Coast without eating), and robots are 100 times worse. Human muscles are efficient because they are complex, adapting and distributing load to operate at optimal efficiency. Neuromorphic chips are not faster than other processors, but they explore a territory of engineering tradeoffs inspired by neurons in the brain, which are optimized for energy (food is expensive) rather than simplicity (complexity in machines is expensive). Pratt identified opportunities for AI and robotics to address problems in climate change, health care, manufacturing, and a variety of other arenas.
To do that, AI and robotics need to develop competency in unstructured environments, operate with intermittent communications (what do you do when the link breaks?), surpass human performance, and reduce size, weight, and power demands.
Benja Fallenstein (Machine Intelligence Research Institute) said that smarter-than-human intelligence is not around the corner, but it probably will arrive, and it is important to ensure that it is aligned with our interests. He asked: how do we specify beneficial goals, ensure that systems actually pursue them, and correct course when we get it wrong?
He argued that we need a solid theoretical understanding of the problem and of candidate solutions, drawing on probability theory, decision theory, game theory, statistical learning, Bayesian networks, and formal verification. Contemporary AI systems use simplified models of the world, and if a goal is not specified perfectly for the environments in which the systems operate, the outcomes can be very wrong. Fallenstein discussed the various probabilistic and theoretical approaches and their limitations, and he concluded that much work remains before we can trust AI systems with smarter-than-human intelligence to act in what we would consider our interests.
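The danger of an imperfectly specified goal can be made concrete with a toy sketch. Everything here is hypothetical — the policies, the scores, and the "broken vase" side effect are invented for illustration — but it shows how a proxy objective that omits part of what we care about makes the "optimal" policy the wrong one:

```python
# Illustrative sketch (all names and numbers hypothetical): a misspecified
# proxy objective selects a different policy than the intended objective.

policies = {
    "clean_thoroughly": {"dust_removed": 9,  "vases_broken": 0},
    "sweep_fast":       {"dust_removed": 10, "vases_broken": 1},
}

def proxy_score(effects):
    # Misspecified goal: only dust removal is rewarded.
    return effects["dust_removed"]

def true_score(effects):
    # Intended goal: dust removal matters, but so does not breaking things.
    return effects["dust_removed"] - 100 * effects["vases_broken"]

best_by_proxy = max(policies, key=lambda p: proxy_score(policies[p]))
best_by_true = max(policies, key=lambda p: true_score(policies[p]))
print(best_by_proxy)  # -> sweep_fast
print(best_by_true)   # -> clean_thoroughly
```

The optimizer does exactly what it was told, and a system that is very good at optimizing a slightly wrong goal is worse than one that is mediocre at it — which is Fallenstein's point about needing theory before capability.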
Michael Cima (MIT) holds the first patent on 3D printing (1985). He gave an overview of additive manufacturing and the evolution of manufacturing methods, the challenges that were encountered, and how they were overcome, such as inkjet printing of curves using a rastering print head. Additive manufacturing was first thought of as a design tool for “printing” prototypes, but businesses began to use it for small-run manufacturing, such as short-turnaround production of complicated engine manifolds for which world demand is only a handful of units, but they are needed as soon as possible. Cima foresees that custom prototyping and printing services (which already exist) are more likely to take hold than a 3D printer in every home. He explained that some of the main remaining challenges have to do with materials, such as elastomeric polymers.
David Keicher of Sandia National Laboratories argued for the promise of printed electronics: it reduces the number of manufacturing steps and provides more flexibility with less tooling. Keicher listed different methods, from extrusion casting to direct-write printing using nanoparticles in an aerosol jet (SNL formulates many of its own nanoparticles from different materials). SNL uses “aerodynamic lenses” to focus the printing and make it less sensitive to pressure variations. These methods enable designers to create advanced tools for maintaining continuity of knowledge (ceramic tamper seals with a simple continuity circuit built in) and for other security-related missions.
Prabhjot Singh described GE’s work using additive manufacturing, ranging from digital microprinting of ceramic and metal at 15-20 microns to laser or electron-beam melting of metal in a powder bed. GE has produced key parts of finished products, such as ultrasound transducers using printed piezoelectric elements that are better than conventionally made ones, enabling designers to close in on 25 MHz transducers, the “holy grail” of ultrasound. That said, even where additive manufacturing can be used, it must yield a net benefit in quality, time, and/or cost to be of interest to companies. GE made maternal-fetal probes with additive technology that were not marketed because they offered no advantage over conventionally produced probes. A key advantage of additive technology was illustrated by new fuel nozzles for jet engines that previously were built by brazing 19-20 pieces together but now can be made as a single integral part. Singh identified key needs, including sensors and non-destructive evaluation methods for certifying the quality of the pieces, ensuring that the process achieves the specified configurations and properties. Another challenge is to reduce the time required for three steps: design; time on the manufacturing machine; and post-processing to achieve the right surface features and microstructure.
Bruce Goodwin of Lawrence Livermore National Laboratory discussed how the combination of additive manufacturing and high-performance computing has the potential for enormous benefits to society but also latent disruptive impacts on national security. Additive manufacturing reduces waste and energy costs, relies on general-purpose manufacturing equipment (rather than product-specific equipment), makes otherwise unbuildable technology buildable, and reduces skill demands and the factory footprint. Goodwin noted that a uranium processing machine in a 400-square-foot room replaces a one-mile-long production line at Y-12. It also enables durably transportable, qualified CAD designs. All of this could make it much harder to detect proliferation of nuclear and other weapons (e.g., printing of military-grade high explosives). Uncertainty-quantified high-performance computing can accelerate the engineering design cycle by an order of magnitude, and additive manufacturing can yield the same order of acceleration. So what’s the problem, Goodwin asked? These tools take expertise out of manufacturing and make the footprint small and hard to detect (small energy and material consumption and small waste streams). Qualified digital build files are hard to control and contain everything needed to produce a working product. All of these factors could undermine trade sanctions.