Cooperation Leads To Smarter Robots

March 16, 2011
Tools like the open-source Robot Operating System and 3D imaging technology from PrimeSense are changing the way robots operate.

Fig 1. The Robot Operating System (ROS) runs on numerous platforms like Willow Garage’s PR2 (a) and the Bilibot (b). The Bilibot combines an iRobot Create with a Microsoft Kinect.

Fig 2. The Hokuyo laser scanner is a common sensor on robots.

Fig 3. Quadrotors at the University of Pennsylvania’s GRASP lab cooperate to build structures.

Fig 4. Telepresence robots come in a wide variety, including Willow Garage’s Texai (a), Gostai’s Jazz Connect (b), Anybots’ QB (c), and VGo Communications’ VGo (d).

There has never been a better time to get into robotics. Processing power is going up, sensors are improving radically, and costs are shrinking. Best of all, the software needed to command these robots is becoming more available. Despite this rosy picture, though, today’s robots don’t qualify as sapient androids that can follow Asimov’s Three Laws of Robotics.

Still, several successful commercial and military robots are on the market. For example, iRobot’s Roomba vacuum cleaner uses behavior-based programming (see “What’s Hot Today: Robotics” at electronicdesign.com). QinetiQ’s Dragon Runner has seen action in Iraq and Afghanistan (see “Robots Fly, Swim, And Roll At AUVSI” at electronicdesign.com). The Roomba is fully autonomous, while the Dragon Runner is a remote-controlled device.

Robot R&D

Autonomous and semi-autonomous robot research is occupying a significant number of researchers. This development comes into play as budding programmers and engineers enter competitions like Trinity College’s Fire Fighting Home Robot Contest (see “Volunteers Wanted For A Fair Affair And Robots Everywhere” at electronicdesign.com) and FIRST Robotics (see “Future Engineers Brace For Battle Of The Robots” at electronicdesign.com).

These students are benefitting from the research being done in colleges and universities and even in the commercial sector. Products like Lego Mindstorms and the National Instruments LabVIEW Robot are making robotics experimentation easier (see “Hands On A LabVIEW Robot” at electronicdesign.com).

Size and performance sometimes matter in research. That’s why Willow Garage’s PR2 is so substantial (Fig. 1a). It runs the Robot Operating System (ROS) and packs some heavy-duty mechanics, computer hardware, and software into a person-size autonomous robot.

The PR2’s computational engine features a pair of quad-core Intel Xeon Core i7 processors with 24 Gbytes of error correcting code (ECC) memory. It has a half-terabyte hard drive plus a removable 1.5-Tbyte drive. Also, there’s a 32-Gbit/s Gigabit Ethernet switch plus an EtherCAT network for motor control. The main Ethernet network is used for a range of interfaces including cameras. It’s tied into Wi-Fi and Bluetooth wireless networks.

All this hardware runs off a 1.3-kWh lithium-ion (Li-ion) battery pack that provides two hours of runtime. This is a bit short for some applications but more than enough for testing, which is the primary use of the PR2. The system also has a run-stop panic button on the back that disables the drive motors but leaves the rest of the robot active. There is a wireless version of the run-stop as well.

The PR2’s arms may look odd, but they have an interesting design. Their passive spring counter-balance system enables them to remain in position even when the power is turned off. It also means that the arms are back-drivable, so they can be pushed or moved without breaking them or fighting with servos.

Many assembly-line robotic arms are hefty, making quick movement difficult, and they provide little feedback when they unexpectedly encounter an object. The PR2’s arms, by contrast, are back-drivable and offer eight degrees of freedom.

The PR2 is loaded with sensors as well. Its two Hokuyo UTM-30LX laser scanners provide precise range information (Fig. 2). One is mounted on the omnidirectional platform, and the other is under the chin. The head alone has more than half a dozen cameras, including several stereo cameras and an LED texture projector with a narrow-angle stereo camera.
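As a rough illustration of how a robot consumes that range data, here is a minimal Python sketch of a ROS node that subscribes to a laser-scan topic and reports the nearest reading. It is a sketch only; the topic name /base_scan is an assumption that depends on how the scanner’s driver is launched.

# Minimal sketch: listen to a laser scanner through ROS (rospy).
# The topic name "/base_scan" is an assumption, not a PR2-specific fact.
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    # Ignore readings outside the scanner's rated range.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("Closest obstacle: %.2f m", min(valid))

rospy.init_node("scan_listener")
rospy.Subscriber("/base_scan", LaserScan, scan_callback)
rospy.spin()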

One sensor the PR2 may want to have is from Microsoft’s Kinect for the Xbox 360 (see “How Microsoft’s PrimeSense-Based Kinect Really Works,” p. xx). Developed by PrimeSense, the underlying technology for the Kinect addresses one of the challenges that robot designers have been grappling with for decades: How do you get a good 3D representation of a robot’s environment?

Products like the Hokuyo laser scanner are one answer. The Hokuyo scanner is actually inexpensive compared to many alternatives, but it still costs on the order of $1000. Its advantage is accuracy. The Kinect is roughly 10 times cheaper, with a corresponding drop in accuracy. On the other hand, the PrimeSense technology combines video with position information, which is very useful for game software as well as for robots. The Kinect provides VGA resolution at 30 frames/s.

A robot can use this information to recognize and avoid obstacles. It could also use the data to interact with people, who would otherwise be treated simply as obstacles, with the same kind of interaction the Kinect supports in gaming. This type of research has been going on in universities using image and laser systems, but the PrimeSense technology will make the process easier and more economical.
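As a hedged example of what that looks like in practice, the Python sketch below subscribes to the depth stream published by a ROS Kinect/OpenNI driver and flags anything closer than about 0.8 m. The topic name and the assumption that raw depth arrives as millimeter values depend entirely on the driver configuration.

# Sketch: watch Kinect depth frames for nearby obstacles.
# Topic name and millimeter depth units are assumptions.
import numpy as np
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def depth_callback(msg):
    # Convert the ROS image into a NumPy array of per-pixel depths.
    depth = np.asarray(bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough"),
                       dtype=np.float32)
    near = depth[(depth > 0) & (depth < 800)]   # closer than ~0.8 m
    if near.size > 0:
        rospy.logwarn("Obstacle within %.0f mm", near.min())

rospy.init_node("depth_watch")
rospy.Subscriber("/camera/depth/image_raw", Image, depth_callback)
rospy.spin()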

There is even an organization, OpenNI (Open Natural Interaction), that supports an open framework for user interaction via technology like that found in the Kinect. OpenNI’s framework will be useful for robotics, but it is not restricted to that environment.

The Bilibot is a robot that combines ROS and the Kinect (Fig. 1b). Future versions will likely employ PrimeSense’s sensor module instead of the Kinect, since the Kinect requires additional power whereas PrimeSense’s module runs off USB power.

The Bilibot is built atop iRobot’s Create, which runs an 8-bit Atmel ATmega microcontroller (see “Commanding The iRobot Create” at electronicdesign.com). It has a platform where a laptop or mobile PC can be placed. The PC, paired with the Kinect, turns the Create into a smart mobile platform.
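To give a flavor of what commanding the Create involves, here is a hedged Python sketch that sends Open Interface opcodes over a serial link with pyserial. The port name is an assumption; the START, SAFE, and DRIVE opcodes and their byte layout come from iRobot’s Open Interface documentation.

# Sketch: drive an iRobot Create forward via its serial Open Interface.
# /dev/ttyUSB0 is an assumed port; adjust for your cable.
import struct
import time
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=57600, timeout=1)

port.write(bytes([128]))                      # START: open the interface
port.write(bytes([131]))                      # SAFE mode: accept drive commands
time.sleep(0.2)

# DRIVE (opcode 137): signed 16-bit velocity (mm/s) and radius (mm),
# big-endian; the special radius 0x8000 means "drive straight".
port.write(struct.pack(">BhH", 137, 200, 0x8000))   # forward at 200 mm/s
time.sleep(2.0)
port.write(struct.pack(">BhH", 137, 0, 0x8000))     # stop
port.close()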

Like the PR2, the Bilibot is designed for research so flexibility and computational power are key. A dedicated system could be smaller and cheaper, but boundaries tend to be the bane of robot researchers.

Quadrotor ROS

The University of Pennsylvania’s GRASP (General Robotics, Automation, Sensing and Perception) lab also employs ROS. The lab’s quadrotor robotic helicopters run ROS and have a wireless link enabling multiple robots to cooperate (Fig. 3). They can be equipped with a gripper so they can pick up objects that are underneath them.

The quadrotor provides a stable flight platform that can easily move in any direction. Check out the videos on GRASP’s Web site (www.grasp.upenn.edu/). It is quite impressive to watch three of these robots construct a simple structure. The joints of the structure contain magnets, which makes assembly easier.

Multiple quadrotors have been used to cooperatively move a single object that one robot alone could not. Aggressive maneuvers are also in the mix: quadrotors can perform amazing acrobatics, flying through windows and hoops and even perching on walls. And they operate autonomously.

The quadrotors in these experiments do have an advantage: they use information from external video sensors. The network of sensors and robots is tied together via ROS.

ROS: Robot Operating System

ROS is a distributed system. An ROS system consists of a set of nodes running packages that communicate using a message passing system. Essentially, each node is a small Web server in a network.

At least one master node provides service naming and registration facilities, similar in function to a Domain Name System (DNS) server on the Internet. More complex ROS systems may have multiple masters, which can exchange information so that nodes anywhere in the environment can locate services.
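Because the master exposes that name service over XML-RPC, any program can query it directly. The Python sketch below, offered purely as an illustration, asks a running master which topics are currently published and by which nodes.

# Sketch: query the ROS master's XML-RPC name service.
# Assumes a master is running and ROS_MASTER_URI is set (default port 11311).
import os
try:
    from xmlrpc.client import ServerProxy    # Python 3
except ImportError:
    from xmlrpclib import ServerProxy        # Python 2

master = ServerProxy(os.environ.get("ROS_MASTER_URI", "http://localhost:11311"))

# getSystemState returns the publishers, subscribers, and services
# currently registered with this master.
code, status, (publishers, subscribers, services) = \
    master.getSystemState("/name_service_demo")
for topic, nodes in publishers:
    print("%s is published by %s" % (topic, ", ".join(nodes)))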

Nodes can operate in a client/server mode as well as a publish/subscribe mode. The client/server mode provides direct interaction, while the publish/subscribe mode is normally used with sensors. For example, a sensor might publish a topic that contains new information when a change is detected.
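The sketch below shows the publish side of that pattern using rospy. It is a minimal illustration, not production code: a made-up “sensor” node publishes a reading on a topic only when the value changes, and any node that calls rospy.Subscriber("range_reading", Float32, callback) receives it.

# Sketch: a node that publishes a (fake) sensor reading on change.
import random
import rospy
from std_msgs.msg import Float32

def sensor_node():
    rospy.init_node("fake_range_sensor")
    pub = rospy.Publisher("range_reading", Float32, queue_size=10)
    rate = rospy.Rate(10)              # check the sensor at 10 Hz
    last = None
    while not rospy.is_shutdown():
        reading = round(random.uniform(0.2, 4.0), 1)
        if reading != last:            # publish only when the value changes
            pub.publish(Float32(reading))
            last = reading
        rate.sleep()

if __name__ == "__main__":
    sensor_node()

A client/server exchange, by contrast, would be written as a ROS service with a request/response message pair.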

A robot may have a single node associated with it providing remote sensor and control functions. Or, a robot may be totally self-contained. Even a remote control interface may be a node.

Packages can be collected together into stacks. The idea behind stacks, and ROS in general, is to offer the most functionality for the least amount of work and structure. This makes for a more flexible environment, permitting devices and algorithms to be swapped more easily, and it reuses existing standards wherever possible.

For example, the stack manifest is defined in XML. The ROS Web site (www.ros.org) features links to hundreds of packages and stacks. The ROS navigation stack is a popular item to include in a project.

Another popular ROS package is the Point Cloud Library (PCL). It provides algorithms for operations such as filtering, downsampling, 3D feature estimation, registration, surface reconstruction, and segmentation. It is indispensable when operating in a 3D environment. Many simple robot projects effectively operate in a 2D environment because of the challenges of dealing with a 3D one.
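To give a sense of what downsampling accomplishes, here is a NumPy sketch of the voxel-grid idea behind PCL’s downsampling filter: snap every point onto a coarse 3D grid and keep one representative per occupied cell. It illustrates the concept only; a real project would call PCL itself, whose filter typically replaces each cell with its centroid.

# Sketch: voxel-grid downsampling of an XYZ point cloud with NumPy.
import numpy as np

def voxel_downsample(points, leaf_size=0.05):
    """points: (N, 3) array of XYZ samples; leaf_size in meters."""
    voxels = np.floor(points / leaf_size).astype(np.int64)
    # Keep the first point that falls into each occupied voxel.
    _, keep = np.unique(voxels, axis=0, return_index=True)
    return points[np.sort(keep)]

cloud = np.random.rand(100000, 3)        # stand-in for a depth-camera cloud
print(voxel_downsample(cloud).shape)     # far fewer points, same extent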

Several ROS packages utilize the OpenCV library, which Willow Garage supports. OpenCV is an open-source computer vision library that provides image processing for data from cameras and other imaging devices. It even includes functions to assist in machine learning.
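As a small, hedged example of that kind of processing, the snippet below grabs one frame from a camera with OpenCV’s cv2 module, converts it to grayscale, and runs Canny edge detection. The camera index and the Canny thresholds are arbitrary choices for illustration.

# Sketch: basic OpenCV image processing on a single camera frame.
import cv2

cap = cv2.VideoCapture(0)                 # open the default camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)      # edge map with arbitrary thresholds
    print("Edge pixels:", int((edges > 0).sum()))
cap.release()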

Artificial intelligence comes into play in some existing ROS packages that provide planning. Platforms like IBM’s Watson incorporate natural language processing (NLP). These platforms may be useful for robots that interact with people, and they might be included as ROS packages (see “Whatever Happened To Artificial Intelligence? Watson Has It!” at electronicdesign.com).

ROS is primarily a research platform used on a range of robots, including autonomous and semiautonomous units, so it forgoes some facilities, such as security, that a production system may require. Communication can be secured using VPNs, though that doesn’t address fine-grained management control; this is another area of research.

Telepresence Is Another Success Story

Semiautonomous robots like QinetiQ’s Dragon Runner and iRobot’s PackBot are used regularly to defuse bombs and reconnoiter (see “Unmanned Military Vehicles: Robots On The Rise” at electronicdesign.com). They let the controller manipulate the robot, but it tends to be a one-way operation.

Telepresence robots provide two-way interaction between the controller and people observed by the robot. Quite a few telepresence robots are on the market, including Willow Garage’s Texai (see “Any Bot In A Telepresence Storm” at electronicdesign.com), Gostai’s Jazz Connect (see “Gostai Urbi” at electronicdesign.com), Anybots’ QB (see “See CES From Another Point Of View” at electronicdesign.com), and VGo Communications’ VGo (Fig. 4).

The Texai runs ROS. Like the PR2, it was designed as a research platform. Other platforms, designed for commercial and consumer use, can be purchased now. They are built for simple operation via a Web browser and provide two-way audio and video conferencing.

The Gostai Jazz Connect rolls on its wheels, while the Anybots QB performs a balancing act. Two-wheel robots must actively balance, but they don’t necessarily use more power than their multi-wheeled counterparts, since remaining balanced takes little power.

Gostai provides different versions of the Jazz. The Jazz Security robot can operate autonomously or it can be controlled remotely. It includes a laser range finder for more precise sensing, allowing it to create detailed maps.

The company also has its own robot programming environment called Urbi, which has been integrated with ROS even though Urbi applications can operate on their own. Portions of the Urbi environment are open-source.

The VGo two-wheel telepresence robot is similar to the Anybots QB. Priced under $6000, the 4-ft-tall robot has built-in Wi-Fi, a camera, and a display. Telepresence robots like the VGo are enabling people like Lyndon Baty to do things they couldn’t do otherwise. In Lyndon’s case, that means attending high school in Knox City, Texas. He has polycystic kidney disease, and following a kidney transplant his suppressed immune system means he must avoid contact with other people.

Robots are becoming more useful and practical alternatives for a growing range of applications. There is still plenty of work to be done, but frameworks like ROS will help.
