By Shannon Palus
APS March Meeting 2015 — Today, true artificial intelligence proliferates only in fiction. At the APS March Meeting 2015, robotics researchers debated how we’ll achieve smart robots in real life — and what we’ll do with them when we get them.
There are robots that can vacuum floors, robots that beat world-class talent at chess and Jeopardy, and even robots that are capable of driving a car. These are examples of what Michigan State University computational biologist Chris Adami calls “special-purpose intelligence”: robots that do just one complicated thing well, but not much more. Case in point: You wouldn’t want a Roomba behind the wheel.
Currently, computers have trouble recognizing faces and learning spoken languages, both skills that infant humans quickly acquire. Babies learn by exploring their world: as they wave their arms and legs around, they receive feedback as they find some movements more pleasurable than others. They take in that sensory information through one set of neurons and link it via synapses with different neurons that control motor actions.
Artificial neural networks that work in a similar way have been around for decades, with varying results. But a new piece of hardware, presented by Seyoung Kim of the IBM T. J. Watson Research Center, would make artificial neural networks smaller and more efficient than past versions, which have required multiple digital gates and control circuits to mimic synapses.
The IBM device is a semiconductor element in which two electrodes sandwich a thin metal-oxide layer. Passing a current through the device adjusts its resistance, and with it the strength of the connection it carries. An array of these “artificial synapses” would link sensory signals with motor “neurons.”
In a simulation of the array, IBM researchers made the neurons spike randomly, causing random movement of a simulated Roomba-like robot. Like a flailing baby, the robot ambles around. Some movements bring the robot closer to a target, eliciting a positive sensory response. When a sensory neuron and a motor neuron fire together, they decrease the resistance of the device and have a stronger connection, explains Kim.
But such hardware can be scaled up only so far. To Adami, it’s not a question of better hardware components: neuroscientists don’t yet understand the brain well enough to render it in hardware, he points out. Instead, he asks, “Can Darwinian evolution create sentient [artificial] brains?”
In simulations, thousands of sets of robot brain “genes” each determine a different network. Each brain is put in a simulated robot, says Adami, where it controls the robot and tries to keep it alive. “At the end of the process we transplant the best brain — or brains — on to real robots,” he explains. It’s a kind of natural selection in an artificial system.
He has already used the process to create a simple robot that can stay inside a circle. He envisions that the process can work for very complex, multipurpose machines. “When we turn them on, they will be infants,” Adami says of highly evolved brains. “We may have to wait 10 years, or 15, until they are worth taking seriously.”
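The selection loop Adami describes — a population of brain “genes,” each scored by how well its robot survives, with the best seeding the next generation — is a standard genetic algorithm. Here is a minimal sketch under stated assumptions: the genome is just a list of movement angles, the “stay inside a circle” task is a simplified stand-in for his robot's task, and the population size, mutation rate, and fitness function are all invented for illustration.

```python
import math
import random

random.seed(1)

GENOME_LEN = 8     # toy "brain genes": a repeating sequence of headings
POP_SIZE = 50
GENERATIONS = 40

def fitness(genome):
    """Score a brain by simulating its robot: start at the circle's
    center, apply the genome's movements, and count the steps the
    robot spends inside radius 1 (its "survival" time)."""
    x = y = 0.0
    alive_steps = 0
    for i in range(50):
        heading = genome[i % GENOME_LEN]
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
        if math.hypot(x, y) < 1.0:
            alive_steps += 1
    return alive_steps

def mutate(genome):
    # Small random perturbations of the parent's genes.
    return [g + random.gauss(0, 0.2) for g in genome]

population = [[random.uniform(-math.pi, math.pi) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the fittest brains survive and seed the next generation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 5]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
```

Over generations, selection favors genomes whose movements roughly cancel out, keeping the simulated robot inside the circle — the “best brain” that would, in Adami's scheme, be transplanted onto a real robot.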
One government agency doesn’t want to wait that long. The Defense Advanced Research Projects Agency (DARPA) is pushing robots to do useful things now. According to DARPA program manager Gill Pratt, the agency will host the DARPA Robotics Challenge in Pomona, California in June 2015. Created in response to the Fukushima disaster, the challenge offers $2 million in prize money for the team with a robot that can best complete a series of basic search and rescue tasks.
The 25 humanoid contestants will have to drive to a disaster zone, traverse tough terrain, move debris, cut a hole in a wall, adjust a valve, climb stairs, and then complete a surprise task. These robots will have supervised autonomy: A human controller can assign tasks and override the robot’s choices.
And poor choices by artificially intelligent robots could be a problem. University of California, Berkeley computer scientist Stuart Russell expressed concern that fully independent robots will make bad decisions — from a human, and moral, point of view — about how to complete tasks. Last year, Russell co-wrote an opinion article with Stephen Hawking because they felt that questions about sentient machines raised by the sci-fi box office flop Transcendence — could a hyperintelligent machine become an unstoppable force against humanity? — “deserve some serious thought.”
If you ask a robot “to do something as simple as make some paper clips, or calculate digits of pi, well, if that’s the only thing you ask, it’s going to come up with ways of doing that optimally, which might involve converting all of the mass of planet Earth into computational facilities,” says Russell. “Clearly that’s not what we want.”
But Pratt’s vision for the smart robots born from the DARPA challenge paints a hopeful picture for AI. As he explains, “It’s a robot and a person working as a team, each trying to do what they are best at.”
And Adami personally thinks that robots may grow adept and clever, but never more intelligent than humans: “We are going to be their teachers, in the same way that we teach our children.”
©1995 - 2021, AMERICAN PHYSICAL SOCIETY
APS encourages the redistribution of the materials included in this newspaper provided that attribution to the source is noted and the materials are not truncated or changed.