Don't look now, but robots are more than an academic curiosity. They're mowing the lawn, vacuuming the room, acting as hall monitors, and standing in harm's way. They've helped build cars for decades. Their senses and cognitive skills continue to improve.
Yet engineers, programmers, and designers are still the folks who turn a jumble of circuits, servos, and source code into a functioning robot. Many of the tools and techniques will be familiar to embedded designers, while other methodologies may be new.
A roboticist is often a jack of all trades. It typically takes electrical and mechanical engineering skills to construct a robot and advanced programming skills to tie the entire system together. Experience with artificial intelligence, automated planning, and real-time embedded systems is invaluable. Yet the need for all of this expertise can lead to runaway complexity. Luckily, changes in the robotic landscape are working to cap the complexity level.
In the past, robots often operated in isolation. A single robot is simpler to manage. Multiple robots, on the other hand, often require a framework like the setup shown in Figure 1. Group management becomes part of the equation. In some instances, all robots within a group maintain a level of group reasoning so that any single robot can coordinate the group. Other approaches concentrate group management in one or more specialized robots.
The need for more sophisticated software frameworks comes from advances on the hardware side. Commercial-off-the-shelf (COTS) hardware is getting smaller, more powerful, less expensive, and more power-efficient. Most of these advances are due to factors other than robotic requirements, but robot designers can easily exploit these new options.
Camera chips for cell phones come in large volumes and feature a low-power design. These are suitable for robot vision systems. Low-cost, powerful microcontrollers, DSPs, system modules, and small motherboards provide high-power compute platforms for a host of solutions needed to process sensor inputs and analyze images and environmental conditions.
Robotic programming frameworks typically run atop conventional real-time or regular operating systems. Developers tend to be more concerned with the higher-level framework than the operating system, unless they're creating device drivers. Rule- or behavior-based systems seem to be the way robotics research has developed, which has begun to translate into more standard platforms (see "Procedures, Rules, And Behaviors," below).
Open Procedural Reasoning System (OpenPRS or OPRS) from the Laboratory for Analysis and Architecture of Systems (LAAS) is one completely open-source approach (Fig. 2). OPRS has piqued designers' interest because of how complete it is. It includes a knowledge editor, graphical (X-Windows-based) development interfaces, and real-time support found in the OPRS kernel. The kernel can run on a range of operating systems. Actual work is done on the OPRS server. A central message-passing module provides communication among the server, data, and rules. OpenPRS procedure rules can take advantage of the underlying multitasking system. For example, it's possible to split and join multiple tasks, which can in turn be used to build even more complex systems.
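The split/join pattern mentioned above can be illustrated with a short sketch. This is not OpenPRS code; the names and structure are hypothetical, and Python threads merely stand in for whatever multitasking primitives the underlying operating system provides.

```python
import queue
import threading

def run_subtask(name, results):
    # Stand-in for a procedure body (e.g., read a sensor, move an arm).
    results.put((name, f"{name} done"))

def split_and_join(subtask_names):
    """Fan a goal out into parallel subtasks, then wait for all of them."""
    results = queue.Queue()
    workers = [threading.Thread(target=run_subtask, args=(n, results))
               for n in subtask_names]
    for w in workers:          # split: launch subtasks in parallel
        w.start()
    for w in workers:          # join: block until every subtask finishes
        w.join()
    return dict(results.get() for _ in subtask_names)

print(split_and_join(["scan", "localize", "plan"]))
```

The parent "procedure" only continues once every child has reported back, which is exactly the property that lets simple rules be composed into more complex behaviors.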
Various robotic research projects have OPRS applications. It's also found in a number of commercial products like those from FrontLine Robotics. As with many open-source solutions, these commercial endeavors often add a "secret sauce" (read: proprietary) to the mix. This typically comes in the form of features like a vision recognition system.
Also, OPRS addresses goal seeking and planning. This meta-level reasoning is necessary when a robot doesn't simply react to its environment, but rather tries to develop a plan to reach one or more goals. This might be a destination or completing a process such as collecting a set of objects.
ERSP from Evolution is one commercial solution that's not based on OpenPRS (Fig. 3). The behavior-based ERSP system can be found on about half of the current crop of research and commercial robotic projects that employ vision. Evolution Robotics RCC software can even recognize faces.
I HAVE A VISION
Sensors provide robots with their view of the world. Robots often use infrared, sonar, and lasers to locate obstacles. They deliver accurate information. But when sensors must cover a wide area, some complex scanning hardware and software is usually needed.
Vision systems employ the opposite approach, taking in a wide area in a single frame. But analyzing this information isn't an easy process.
However, two trends have made vision systems more practical. First, there's the falling cost and increasing resolution of CMOS and CCD camera modules. This is due to their use in high-volume consumer products like digital cameras and cell phones. Second, frameworks like Evolution's ERSP incorporate vision software support so that robotics designers can forego developing a vision system from scratch. Instead, it's simple to include a device driver for a camera and let the framework do the rest.
Most commercial vision systems employ proprietary techniques for object recognition and analysis. They tend to be very effective within the limitations of the hardware, where higher resolution, more storage, and more computing power can improve the system's accuracy and reliability.
For example, ERSP identifies multiple items within an image to compare against objects within its database. It can match objects even when the orientation and aspect differ between the current image and the training images. This information enables ERSP to determine the robot's relative position and orientation. A robot then can create a local map of obstacles and identify its position based on recognized landmarks.
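As a rough sketch of the geometry involved (this is not ERSP's algorithm, and the function name is made up): once vision has recognized a landmark whose map position is known and has measured its range and bearing, the robot can back out its own position.

```python
import math

def locate_robot(landmark_xy, rng, bearing, robot_heading):
    """Robot position from one landmark fix.

    landmark_xy: the landmark's known map coordinates (meters).
    rng: measured distance to the landmark (meters).
    bearing: angle to the landmark relative to the robot's heading (radians).
    robot_heading: the robot's heading in map coordinates (radians).
    """
    world_angle = robot_heading + bearing
    lx, ly = landmark_xy
    # Walk backward from the landmark along the sight line.
    return (lx - rng * math.cos(world_angle),
            ly - rng * math.sin(world_angle))

# Landmark at (10, 5), seen 5 m dead ahead while heading along +x:
print(locate_robot((10.0, 5.0), 5.0, 0.0, 0.0))  # → (5.0, 5.0)
```

Real systems fuse many such fixes (plus odometry) to cope with measurement noise, but the basic triangulation step looks like this.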
One of the problems with vision systems is the need for processing power and storage. A Pentium 4-based PC has plenty of both. This type of heavy-duty platform is great for research but a bit of a problem for robots. Mobile robots have inherent power limitations, and they often have less powerful processors and little or no hard-disk space.
Fortunately, two trends are turning the tide. First, mobile processing power is on the rise, especially in the area of 32-bit microcontrollers. Second, flash memory storage is becoming larger and less expensive. Alternatives such as 1-in. hard drives are also generating interest.
Processing power is important due to the amount of information provided by cameras. A color megapixel camera generates 3 Mbytes of data in each frame. Digital cameras with more than 5 Mpixels are available, so there's the potential to process lots of information.
Often, image-recognition software is designed to cut this data down by a factor of 1000 or more, making it possible to compare the results with images processed during a training or exploration period. Doing this in real time exacerbates the requirements because a 30-frame/s rate is typical.
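The arithmetic behind these figures is worth spelling out:

```python
# Back-of-the-envelope figures from the text: a 1-Mpixel color camera
# at 3 bytes/pixel (8-bit R, G, B) produces 3 Mbytes per frame. At a
# typical 30 frames/s, that is 90 Mbytes/s of raw data, which the
# recognition software must cut down by a factor of 1000 or more.

pixels = 1_000_000
bytes_per_pixel = 3
fps = 30
reduction = 1000

frame_bytes = pixels * bytes_per_pixel
raw_rate = frame_bytes * fps
print(frame_bytes)            # 3000000 bytes per frame
print(raw_rate)               # 90000000 bytes per second
print(raw_rate // reduction)  # 90000 bytes/s after a 1000x reduction
```

A 5-Mpixel sensor multiplies every one of those numbers by five, which is why processing power and storage dominate vision-system design.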
Vision systems are being used in a range of non-robotic applications, and this type of research is finding its way into robots. Evolution's LaneHawk is a camera-based system that finds its home about a foot off the floor in the store's checkout aisle. It's designed to recognize objects on the bottom of a cart and report them to the clerk. It's surprising how much money is lost because cashiers fail to check this area. The object recognition support is also utilized in Evolution's robotic software platform.
Flexibility is key to successful vision systems. In fact, lower-resolution monochrome systems usually have the necessary information required to identify objects and landmarks, thereby reducing the system's processing requirements. Switching between monochrome and color or low and high resolution brings the best of both worlds, with various tradeoffs such as processing requirements, buffering, and frame rates.
Most vision systems currently employ a single camera or multiple cameras aimed in different directions to widen the range of coverage. Binocular systems are used in research, but not as much in production robots. Lower-cost cameras may offer an answer, though.
Vision systems aren't without problems. Optical illusions and the movement of the robot, the camera, or objects around the robot complicate matters significantly. Again, processing power is the key.
MAPS AND NAVIGATION
Seeing an object, noting its relationship (distance and orientation) to the robot, and identifying it is just the first step. This information can create a map of the surrounding area, enabling the robot to plan its movements or determine that certain goals were reached.
Maps used by robots may or may not be similar to the maps we use. They're all logically similar, but they may have different storage methods. If the environment is sparse, a robot simply may have a list of objects and their attributes (such as coordinates or landmark information). A grid-style map would be more useful in a dense environment. There's also the issue of 2D versus 3D maps, depending on the restrictions of the environment (e.g., limiting a robot to a flat surface). No single approach is valid for all applications. A constantly changing environment, among other issues, may determine how map-related information is stored and updated.
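The two storage styles described above can be sketched in a few lines. The structures and names here are illustrative assumptions, not taken from any particular framework: a sparse environment as a list of landmark records, and a dense one as an occupancy grid, with a third cell value covering the unexplored (uncertain) case.

```python
# Sparse environment: just a list of objects and their attributes.
sparse_map = [
    {"name": "doorway", "xy": (2.0, 0.0)},
    {"name": "charger", "xy": (9.5, 4.0)},
]

# Dense environment: an occupancy grid, 10 m x 5 m at 0.5-m cells.
GRID_W, GRID_H, CELL = 20, 10, 0.5

# 0 = known free, 1 = known obstacle, -1 = unexplored (uncertain)
grid = [[-1] * GRID_W for _ in range(GRID_H)]

def mark(grid, x, y, value):
    """Record what the robot observed at world coordinate (x, y)."""
    grid[int(y / CELL)][int(x / CELL)] = value

mark(grid, 2.0, 0.0, 1)   # obstacle seen at the doorway's position
mark(grid, 1.0, 0.0, 0)   # free space the robot has driven through
```

Every cell starts out as "unexplored," which directly encodes the uncertainty issue: the map distinguishes between space known to be free and space the robot simply hasn't seen yet.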
One difference between our maps and robot maps is related to uncertainty. If a robot hasn't seen or visited an area, then it may be uncertain of what will be found at a particular location on its map. It may not even be possible to get to a particular point, so the robot also must keep track of different types of map information.
A single map can suffice, but in a more complex environment, where a robot is moving or operating as part of a group, it's often more useful to build up a hierarchy of maps. This typically translates into a two-level mapping system, which can be effective for both individual robots as well as cooperating robot swarms.
The higher-level map normally has less detail and changes less often. It may be maintained by one robot within a group. But it's typically replicated to provide a more robust system, as many of these swarms are designed to work in hostile environments where losing a robot isn't unusual.
The robotic software frameworks often incorporate this type of hierarchical mapping. It tends to be a requirement if the framework also provides navigation support. This navigation support is essentially a very specific planning system that uses the mapping information to generate a movement plan to get from point A to B without running into an obstacle.
A high-level plan may indicate that the robot must go north down a hall and turn right (east). The robot may use its local mapping and navigation to go down the hall but move around obstacles that may be in the hall such as a chair or box.
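The low-level half of that split can be sketched as a search over the local map. Real frameworks use more sophisticated planners; this minimal breadth-first search merely shows the division of labor: the high-level plan supplies the waypoint, and the local planner detours around whatever is in the way.

```python
from collections import deque

def local_path(grid, start, goal):
    """Shortest detour through a local grid (1 = obstacle, 0 = free)."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    if goal not in came_from:
        return None                     # hallway fully blocked
    path, cell = [], goal
    while cell is not None:             # walk back from goal to start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

hall = [[0, 0, 0, 0],
        [0, 1, 1, 0],                   # a box blocking the middle
        [0, 0, 0, 0]]
print(local_path(hall, (0, 0), (2, 3)))
```

If the search returns None, the robot must report failure upward so the high-level planner can pick a different route, which is precisely where the "simple-sounding" split gets complicated.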
Handling this split between high- and low-level planning and execution may sound simple, but it tends to be much more complex than most developers anticipate. Again, a framework can make the developer's job much easier. Unfortunately, the exercise isn't trivial, even with framework support. Still, developers no longer need a PhD, although it helps.
Software robotics platforms are very important, but so are hardware platforms. In the past, most robots were custom-made from the ground up. New components like digital camera and compact processor modules have simplified the job. But the platforms that specifically target robotics have made the difference.
One example is the 914 PC-BOT from Whitebox Robotics (see the story's opening photo, p. 39). Designed to use off-the-shelf components, the 914 provides all the basics, including mounts for a Mini-ITX motherboard, multiple cameras, and drive bays. It includes wheeled locomotion, high-amperage batteries, and a steel frame. It's available with its own software framework and GUI control system that runs on Windows. Linux versions will be available in the future.
This type of platform is becoming more common, and a number of companies now offer similar products. Software companies such as Evolution also have their own platforms designed to get developers started quickly. The trend has even made some interesting mobile platforms more widely available.
Three- and four-wheeled vehicles are typical robot platforms. Although inherently stable, they tend to be more difficult to work with when the terrain gets rough.
Two-wheeled vehicles require dynamic stability support, but they offer a more flexible platform—especially in rough environments. The Segway RMP is one example of a two-wheeled system (Fig. 4). The RMP uses the same technology as the Segway HT (Human Transporter). The difference is that the RMP can handle a robotic package instead of a person. It appeals to college researchers because there is plenty of room for high-performance processors and sensors.
Using the RMP is no more difficult than using a platform with more wheels. In fact, it tends to be easier because the platform takes care of balance and movement, and it has a zero turning radius. It should never get stuck unless it falls in a rather large hole. The system is controlled via a pair of CANbus interfaces. It has a top speed of 8 mph and a range of eight to 10 miles.
CTGNet takes the same approach as the RMP, but it's a bit smaller (Fig. 5). Its Table Top Robot is designed for developers and hobbyists. It won't carry 100 pounds, but it will work with CTGNet's robot board stack. A control board handles balance and movement. Additional boards can be added for sensors and a control processor.
Airtrax might change the way robots work with more than a pair of wheels (Fig. 6). The Airtrax wheel looks odd, but a four-wheeled vehicle can move in any direction. It does so without rotating the platform, unlike the RMP and Table Top Robot, which must pivot to change direction. The rollers on the wheel are the key to its ability to move in any direction.
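A rough kinematic sketch shows why roller wheels allow this. This is textbook mecanum-style inverse kinematics, not Airtrax's actual control law, and the parameter values are placeholders: any mix of forward, sideways, and rotational velocity maps to four individual wheel speeds, so the platform can translate sideways without pivoting first.

```python
def wheel_speeds(vx, vy, omega, half_length=0.3, half_width=0.25):
    """Wheel speeds for a four-wheel roller (mecanum-style) platform.

    vx: forward velocity (m/s), vy: leftward velocity (m/s),
    omega: rotation rate (rad/s). half_length/half_width: distances
    from the center to the axles and wheels (illustrative values).
    """
    k = half_length + half_width
    return {
        "front_left":  vx - vy - k * omega,
        "front_right": vx + vy + k * omega,
        "rear_left":   vx + vy - k * omega,
        "rear_right":  vx - vy + k * omega,
    }

# Pure sideways motion: wheels counter-rotate in pairs, and the
# chassis slides left without ever changing its heading.
print(wheel_speeds(0.0, 0.5, 0.0))
```

Note the contrast with a differential-drive platform like the RMP, where a sideways goal always requires rotating the chassis first.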
This will be critical for applications where space is tight. It's also ideal for robots that perform high-accuracy range detection and positioning and need to maneuver within a confined area.
The Airtrax system is currently being used on systems controlled by people, but some research is being done with robots. This is another example of technology that will make it easier to create and deploy complex robots.
Robots continue to push the hardware and software envelope. They will become more common as the technology matures.