Your Thought Is Its Command

April 27, 2007

What good is a robot if you can't order it around with your thoughts? Rajesh Rao, a professor of computer science and engineering at the University of Washington in Seattle, has answered this question with an input system that uses signals from the human brain to control the movement of a humanoid robot.

Rao and his students have developed a system that lets people tell a robot where to go and what to pick up merely by thinking about these actions. Donning a skullcap sprinkled with 32 electrodes, users view the robot's movements on a display that receives video signals from two cameras: one mounted on the robot and another above it (Fig. 1).

Objects in front of the robot are randomly illuminated. When users look at an item they want the robot to grasp and then see the item suddenly brighten, the brain registers surprise. A computer detects this response and relays a signal to the robot ordering it to grab the selected object (Fig. 2). A similar process is used to determine where the robot should place the item.
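In outline, the selection step works something like the following Python sketch, written for this article. The object labels, function names, and detection rates are illustrative assumptions, not details of the UW implementation; the EEG classifier and the robot link are reduced to placeholders.

import random

# A rough, hypothetical sketch of the flash-and-detect loop described above.
# The names and probabilities here are assumptions made for illustration;
# they are not taken from the UW system.

OBJECTS = ["left_block", "right_block"]
ATTENDED = "left_block"   # the object the simulated user happens to be watching

def illuminate(obj):
    """Stand-in for briefly brightening one object in the user's view."""
    pass

def detect_surprise_response(flashed_obj):
    """Stand-in for the EEG step: a surprise-like response is detected most
    often when the object that just brightened is the one being watched."""
    p_hit = 0.8 if flashed_obj == ATTENDED else 0.1   # assumed, noisy rates
    return random.random() < p_hit

def select_object(flashes_per_object=15):
    """Flash each candidate repeatedly in random order and choose the object
    whose flashes most often coincide with a detected response."""
    scores = {obj: 0 for obj in OBJECTS}
    schedule = OBJECTS * flashes_per_object
    random.shuffle(schedule)          # objects are illuminated at random
    for obj in schedule:
        illuminate(obj)
        if detect_surprise_response(obj):
            scores[obj] += 1
    return max(scores, key=scores.get)

def send_grasp_command(obj):
    """Stand-in for relaying the chosen object to the robot."""
    print("robot: grasp", obj)

if __name__ == "__main__":
    send_grasp_command(select_object())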

"We're using non-invasive signals from the brain to make the robot do something interesting," Rao says. The robot used with the interface is a two-foot-high research model manufactured by Fujitsu, although the system could be easily adapted to work with virtually any type of controllable robot, Rao notes.

Brain signals recorded from the scalp are inherently "noisy" and make efficient thought control difficult, Rao says. But the surprise response cuts through the clutter and delivers a definitive message. So far, Rao and his students have implemented only a few basic instructions: move forward, select one of two objects, pick the object up, and bring the item to one of two locations. But they claim a 94% success rate in trials.
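A back-of-the-envelope calculation shows why repetition cuts through that noise: even if a single flash is detected unreliably, tallying responses over many flashes makes the comparison between candidates decisive. The per-flash rates in this sketch are invented for illustration and are not figures reported by the group.

from math import comb

# Probability that the watched object wins a simple tally after n flashes of
# each candidate, given assumed per-flash detection rates (ties -> coin flip).

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def selection_accuracy(n_flashes, p_target=0.7, p_other=0.2):
    acc = 0.0
    for t in range(n_flashes + 1):          # detections for the watched object
        for o in range(n_flashes + 1):      # spurious detections for the other
            p = binom_pmf(t, n_flashes, p_target) * binom_pmf(o, n_flashes, p_other)
            if t > o:
                acc += p
            elif t == o:
                acc += 0.5 * p
    return acc

for n in (1, 5, 10, 15):
    print(n, "flashes:", round(selection_accuracy(n), 3))

Under these assumed rates, accuracy climbs from roughly 75% for a single flash of each object toward near certainty after a dozen or so flashes apiece.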

Next on the agenda is making the robot's behavior more adaptive to its environment, Rao says, such as enabling it to avoid obstacles and handle more complex objects. This will require the addition of artificial intelligence technology. Rao also plans to expand the system's command set, allowing it to support a greater number of object and placement choices.

Rao admits that the thought control system, as it currently exists, is limited in its capabilities and serves only as a proof of concept. Yet he believes that the general technique has the potential to lead to a new generation of semi-autonomous robots. Such devices could be used by disabled people to retrieve and replace household items and to perform an array of personal tasks. "You might even have a personal thought-controlled wheelchair someday," Rao says.

The technology also could be used in settings where hands-on robot control would be impossible or inconvenient, such as in operating rooms or in space. He doesn't discount the system's potential as a video game system interface, either. "The entertainment industry is one of the more obvious applications for this technology," he says.

Basic thought control interfaces could begin appearing in video games within the next two to three years, Rao predicts, with more advanced versions to arrive in about five to 10 years. "It's something to think about," he says.
