Smart phones. MP3 players. Notebooks. We can’t live without our portable gadgets. Yet we probably drive our most compute-intensive mobile electronics to work every day. Today’s automobiles use a variety of networks, sensors, and computing platforms to deliver safer and more pleasant travel than ever.
Most companies concentrate their development efforts on safety, efficiency, and performance. These features rank high with consumers, and the newest and most sophisticated features always appear first in high-end models like the Lexus LS 460 (Fig. 1).
As with all modern cars, the LS 460 offers mandatory passive safety features such as seatbelts and airbags. It also has a voice-activated, head-down-display (HDD) navigation system that’s standard in many high-end vehicles and an option in most others. The LS 460 leads the pack by moving into the active safety realm with NEC’s IMAPCAR (Image Memory Array Processor for CAR) image-recognition system.
MOVE TO ACTIVE SAFETY
Passive safety systems are mature technologies that return less payback on further improvement, though they continue to be refined. They target post-crash actions, meaning they activate after a collision has occurred or is inevitable (e.g., airbags). Active safety features address accident avoidance, or pre-crash actions. Advanced antilock braking, traction control, and vehicle stability control all fall into this arena. Traction control and stability control benefit from improved sensors as well as the significantly greater computing capability available in the latest crop of DSPs and microcontrollers.
These active systems contribute to improved safety and performance. But designers are also addressing new areas, thanks to improvements and cost reductions in sensors such as video cameras, lasers, and radar. From a driver’s standpoint, new systems like adaptive cruise control become an active part of the driving process. They augment the driver’s senses and provide limited autonomous control. In the future, cars will exercise even more autonomous control.
For example, initial cruise-control applications simply maintained a fixed speed. Some current systems can maintain a safe but variable distance from the cars ahead based on the environment. Even more advanced systems, like those on the LS 460, can apply the brakes in anticipation of a collision. Warnings are being improved as well, from simple tones to more complex audio and visual cues.
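How might a system hold that variable distance? One common formulation tracks a time gap rather than a fixed distance. The sketch below is a hypothetical proportional controller along those lines; the gains, the two-second gap, and the function name are illustrative assumptions, not details from any production system.

```c
/* Hypothetical adaptive-cruise-control sketch: adjust the speed command
 * to hold a time gap behind the lead vehicle. All values illustrative. */

double acc_speed_command(double own_speed_mps, double gap_m,
                         double lead_speed_mps)
{
    const double desired_gap_s = 2.0;   /* assumed 2-second following rule */
    const double k_gap   = 0.2;         /* illustrative proportional gains */
    const double k_speed = 0.5;

    double desired_gap_m = desired_gap_s * own_speed_mps;
    double gap_error     = gap_m - desired_gap_m;          /* + means too far back   */
    double speed_error   = lead_speed_mps - own_speed_mps; /* + means falling behind */

    /* Nudge the commanded speed toward closing both errors. */
    return own_speed_mps + k_gap * gap_error + k_speed * speed_error;
}
```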
New systems warn the driver if there’s a high probability of a collision. If the driver doesn’t react in time, the system takes remedial action, such as braking. Anticipating the collision also lets passive systems operate more effectively. For instance, an airbag needn’t deploy when no one is sitting in its seat, and it can deploy with less force if the occupant is small. Weight sensors help make these determinations, but ultrasonic or even vision systems can be employed, too.
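The decision logic implied by those weight sensors can be sketched as a simple classification. The weight bands and deployment levels below are illustrative assumptions, not figures from any airbag supplier.

```c
#include <stdbool.h>

/* Hypothetical occupant-classification sketch for adaptive airbag
 * deployment. Weight thresholds are illustrative only. */

typedef enum { SUPPRESS, DEPLOY_LOW_FORCE, DEPLOY_FULL_FORCE } deploy_t;

deploy_t airbag_decision(double seat_weight_kg, bool child_seat_detected)
{
    if (seat_weight_kg < 5.0 || child_seat_detected)
        return SUPPRESS;            /* empty seat or child restraint */
    if (seat_weight_kg < 45.0)
        return DEPLOY_LOW_FORCE;    /* small occupant                */
    return DEPLOY_FULL_FORCE;       /* adult occupant                */
}
```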
Today’s designs incorporate more sensors to provide more contextual, environmental information so computers can become part of the decision loop (Fig. 2). Sensor fusion, or the combination of information from multiple sensors for a given task, will become more common. Adaptive cruise control can use vision and radar sensors together to determine where an obstacle, such as another vehicle, is located. No single sensor type meets all of the requirements of current and forthcoming active safety systems, but vision is definitely among the essential ones.
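Conceptually, a minimal form of that fusion pairs the radar’s precise range with the camera’s precise bearing for the same object. The gating rule and data structures below are a hypothetical sketch of the idea, not anyone’s shipping algorithm.

```c
#include <math.h>
#include <stdbool.h>

/* Hypothetical fusion sketch: radar gives good range, vision gives good
 * bearing and classification; combine them into one obstacle estimate. */

typedef struct { double range_m; double bearing_rad; } radar_obj_t;
typedef struct { double bearing_rad; int class_id; } vision_obj_t;
typedef struct { double range_m; double bearing_rad; int class_id; } fused_obj_t;

/* Associate a radar return with a vision detection when their bearings
 * agree within a gate, then take the best attribute from each sensor. */
bool fuse(const radar_obj_t *r, const vision_obj_t *v, fused_obj_t *out)
{
    const double gate_rad = 0.05;           /* ~3 degrees, illustrative */
    if (fabs(r->bearing_rad - v->bearing_rad) > gate_rad)
        return false;                       /* not the same object      */
    out->range_m     = r->range_m;          /* trust radar for range    */
    out->bearing_rad = v->bearing_rad;      /* trust vision for bearing */
    out->class_id    = v->class_id;         /* only vision classifies   */
    return true;
}
```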
ON THE HORIZON
Low-cost, high-performance imaging and computational hardware is bringing vision to the forefront of automotive safety, as are improved algorithms and applications for image recognition and analysis. Yet the availability of this kind of hardware in versions suitable for automotive use will be critical to its widespread adoption. Eventually, vision systems will be required by law, just like seatbelts and airbags.

Multicore architectures with very large numbers of processing units will continue to grow. The current NEC IMAPCAR processor employs 128 very long instruction word (VLIW) processing elements (PEs) (Fig. 3). Each VLIW instruction can control four logic units in each PE. A 16-bit RISC control unit coordinates the IMAPCAR chip (Fig. 4).
The IMAPCAR’s architecture is designed specifically for video-feedback applications in the automotive market. It incorporates the video input and output into the buffering scheme for real-time annotation.
The system can handle a number of image-recognition algorithms at the same time, providing information to the host microcontroller as well as to the driver by modifying the video stream as it passes through the chip. Each PE contains its own memory for copying and analyzing the frame buffer as necessary. Error-correction coding (ECC) and parity improve reliability.
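The programming model of such a SIMD array can be approximated in plain C: one instruction stream applied in lockstep across all PEs, each working on its own slice of the image. The loop below is only a conceptual stand-in for the IMAPCAR’s VLIW PEs, with an assumed 640-pixel scan line; it is not the chip’s actual instruction set.

```c
#define NUM_PE      128   /* processing elements, per the IMAPCAR */
#define LINE_WIDTH  640   /* assumed image width for illustration */

/* Conceptual SIMD model: every PE executes the same operation on its
 * own column slice of the scan line. On the real hardware the 128 PEs
 * run in lockstep; here the outer loop stands in for that parallelism. */
void threshold_line(const unsigned char in[LINE_WIDTH],
                    unsigned char out[LINE_WIDTH],
                    unsigned char threshold)
{
    const int slice = LINE_WIDTH / NUM_PE;      /* pixels per PE  */
    for (int pe = 0; pe < NUM_PE; pe++) {       /* "parallel" PEs */
        for (int i = 0; i < slice; i++) {
            int x = pe * slice + i;
            out[x] = (in[x] > threshold) ? 255 : 0;
        }
    }
}
```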
Low power is also critical to this application space. The IMAPCAR chip draws only 1.7 W running at 100 MHz, delivering 100 GOPS of performance. Currently, the IMAPCAR system can handle lane and pedestrian recognition. The Lexus system employs two cameras in a stereo configuration as well as millimeter-wave radar, delivering features such as lane departure warnings. A third camera covers rear viewing.
Expect to see even more efficient and powerful vision systems in the near future, allowing for better recognition and tracking. They might employ chips like Recognetics’ CM-1K neural network chip, which can apply an input pattern of up to 256 bytes to 1024 neurons in parallel. Furthermore, the chips can be logically stacked so they all operate in parallel.
The chip isn’t being used for automotive applications at this time, but it is performing real-time image recognition for a number of applications. Still, new image-processing architectures such as the CM-1K and IMAPCAR will expand vision-system performance, often with additional sensor support.
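The CM-1K’s recognition style can be pictured as many stored prototype patterns compared against a 256-byte input at once, with the nearest match winning. The loop below is a sequential stand-in for that parallel comparison; the L1 distance metric and the function’s shape are assumptions for illustration.

```c
#include <stdlib.h>

#define VEC_LEN    256    /* bytes per pattern, per the CM-1K input size  */
#define MAX_PROTOS 1024   /* stored prototypes, matching its neuron count */

/* Sequential stand-in for the chip's parallel match: in hardware every
 * stored prototype computes its distance to the input simultaneously;
 * here we just loop and keep the closest one. */
int classify(const unsigned char input[VEC_LEN],
             const unsigned char protos[MAX_PROTOS][VEC_LEN],
             const int labels[MAX_PROTOS], int num_protos)
{
    int  best_label = -1;
    long best_dist  = -1;
    for (int n = 0; n < num_protos; n++) {
        long dist = 0;                        /* L1 (Manhattan) distance */
        for (int i = 0; i < VEC_LEN; i++)
            dist += labs((long)input[i] - (long)protos[n][i]);
        if (best_dist < 0 || dist < best_dist) {
            best_dist  = dist;
            best_label = labels[n];
        }
    }
    return best_label;    /* label of the nearest stored prototype */
}
```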
3D IMAGING
Advances in sensor technology will have as much influence on future active safety systems as parallel-processing improvements have had on computing performance. Shrinking the size of one unique 3D camera could make a real difference. Advanced Scientific Concepts has a Flash Ladar 3D camera system that employs a short pulse of laser light to deliver 3D information (Fig. 5). The resulting data can then be used in an automotive setting to identify objects in the environment with a very high degree of accuracy. The current incarnation of the camera operates at 30 frames/s, and the laser is eye-safe, suiting it for automotive applications.

This approach is significant because it provides the accuracy of a laser or radar range finder with the scope of a vision system. The range and accuracy vary depending on the configuration, but one system delivers a precision of 3 in. at ranges up to 5000 ft. The system also handles much of the computation within the camera itself.
Although the system uses optics similar to a camera’s, the sensing system is significantly different. Essentially, the sensor detects the arrival of the leading edge of the reflected laser pulse, then triggers the capture of subsequent light information at 1-ns intervals. This synchronization and capture speed distinguish the system from other 3D approaches.
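The range math behind this flash ladar approach is ordinary time-of-flight: light covers about 0.3 m per nanosecond, so each 1-ns sample bin corresponds to roughly 15 cm of range once the round trip is halved (the finer 3-in. precision cited earlier presumably comes from processing inside the camera). A quick sketch:

```c
/* Time-of-flight range from the 1-ns sampling interval described above.
 * Range per bin: c * t / 2 = (3e8 m/s * 1e-9 s) / 2 = 0.15 m. */
#define SPEED_OF_LIGHT_MPS  3.0e8
#define SAMPLE_INTERVAL_S   1.0e-9

double range_from_sample(int bin_index)
{
    double round_trip_s = bin_index * SAMPLE_INTERVAL_S;
    return SPEED_OF_LIGHT_MPS * round_trip_s / 2.0; /* halve: out and back */
}
```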
SENSOR FUSION
The ability to combine information from a variety of sources, such as Analog Devices’ MEMS gyroscope, will be key in many automotive safety applications (Fig. 6). Such combinations will provide more accurate information and allow better distribution of sensors thanks to their lower cost, smaller size, and lower power requirements.

Systems that don’t use technologies like Advanced Scientific Concepts’ Flash Ladar 3D system often use a pair of cameras instead, providing stereoscopic viewing that simplifies range analysis (see the sketch below). In the future, expect additional cameras to feed the automotive control unit and arm the driver with more data about the car’s interior and exterior. Likewise, multiple sensor modules may be a better solution for covering smaller, possibly overlapping areas.
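Stereo range analysis rests on one relationship: depth equals focal length times baseline divided by disparity, Z = fB/d. The function below applies it; the focal length and baseline are illustrative values, not those of the Lexus cameras.

```c
/* Stereo range from disparity: Z = f * B / d, where f is the focal
 * length in pixels, B the camera baseline in meters, and d the
 * horizontal disparity in pixels. Parameter values are illustrative. */
double stereo_range_m(double disparity_px)
{
    const double focal_px   = 800.0;  /* assumed focal length (pixels) */
    const double baseline_m = 0.35;   /* assumed camera separation (m) */
    if (disparity_px <= 0.0)
        return -1.0;                  /* at infinity / invalid match   */
    return focal_px * baseline_m / disparity_px;
}
```

Note how the formula explains why a wider camera baseline helps: for a given range, it produces a larger disparity and therefore a finer range estimate.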
Some sensors operate differently under different conditions, such as rain or darkness. Multiple sensors with different operating characteristics will often provide better results than a single sensor. For example, a number of techniques can be used to monitor drivers to see if they are falling asleep.
Likewise, using vision alone for range information is a gamble at best. Lighting conditions, reflectivity, and other optical effects can cause problems. Then again, radar cannot determine the color of a stoplight, even if it can measure the distance to it down to the millimeter.
ROLLING SUPERCOMPUTERS
Applying computing resources to an individual sensor or a group of sensors can be a daunting challenge by itself. The amount of processing power required, even for a single chip like the IMAPCAR, is significant and growing. But you can’t gauge the system’s total computing power until you consider the number of different networks in a car and the number of nodes on those networks (see the table). Multicore design arrived just in time for the automotive industry.

From a safety standpoint, a number of systems will be tied together via one or more networks, depending on the sensors and control systems involved. Network interconnects like FlexRay are already being used in braking and drivetrain applications. Networking makes it easier to develop cooperative systems, and it’s leading to centralized safety and environmental management systems.
This makes sensor fusion more practical, especially given a range of configurations where some car models contain a subset of high-end sensors. It also means the performance requirements will rise. Likewise, reliability and redundancy become harder to address.
Several companies are developing custom solutions that will likely move into the mainstream. Freescale has dual-core designs in which the cores check each other. In addition, a triple-core design includes a pair of cores in hardware synchronization, with the third acting as an I/O processor and traffic cop. Redundancy becomes significantly easier with multiple cores, even using standard processors.
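A software caricature of that hardware check is easy to state: run the same safety computation on both cores and treat any disagreement as a fault. The sketch below compresses the idea into one function; the names and fail-safe behavior are hypothetical, not Freescale’s implementation.

```c
#include <stdbool.h>

/* Software caricature of a lockstep check: run the same computation
 * twice (in hardware, on two cores in the same clock) and compare.
 * A mismatch signals a fault rather than trusting either result. */

typedef int (*safety_calc_t)(int input);

int lockstep_execute(safety_calc_t calc, int input, bool *fault)
{
    int result_a = calc(input);    /* core A                        */
    int result_b = calc(input);    /* core B, the lockstep twin     */
    *fault = (result_a != result_b);
    return *fault ? 0 : result_a;  /* on mismatch, fail safe        */
}
```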
SAFE SOFTWARE, LEGALLY LIABLE
All of these sensors and redundant processors bring up the issue of software. The analysis and complexity challenges are large, but they will be trivial compared to the standardization and legal hurdles associated with active safety.

A few standards, such as AUTOSAR (AUTomotive Open System ARchitecture), are popular but not universally adopted. Likewise, protocols for networks like CAN are standardized, at least at a low level, though vendor exceptions abound.
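That low-level standardization is visible in how little a CAN node needs to say: an identifier and up to eight data bytes. The sketch below uses Linux’s SocketCAN interface to broadcast a two-byte sensor reading, assuming a configured can0 interface; the 0x123 identifier is arbitrary.

```c
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

/* Minimal SocketCAN sketch: broadcast a two-byte sensor reading on a
 * configured "can0" interface. Identifier 0x123 is illustrative. */
int send_sensor_frame(unsigned short reading)
{
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) return -1;

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { close(s); return -1; }

    struct sockaddr_can addr = { .can_family  = AF_CAN,
                                 .can_ifindex = ifr.ifr_ifindex };
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { close(s); return -1; }

    struct can_frame frame = { .can_id = 0x123, .can_dlc = 2 };
    frame.data[0] = reading >> 8;     /* high byte */
    frame.data[1] = reading & 0xFF;   /* low byte  */

    int ok = (write(s, &frame, sizeof(frame)) == sizeof(frame));
    close(s);
    return ok ? 0 : -1;
}
```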
Wireless sensors like Freescale’s MPX8300 are embedded in a tire with the receiver in the body of a car (see “Tires Put Pressure On RF” at www.electronicdesign.com, ED Online 16497). Unfortunately, dissimilar radios and protocols can make it difficult to go to the nearby auto parts store for a replacement. The plethora of wiper blades and headlights is just a fraction of what will occur with the inevitable increase in sensors and associated processing systems.
In the longer term, cooperation between the vehicle and other cars or fixed wireless information sources will provide details that can be incorporated into the safety system. This is already done, albeit on a limited basis, with some GPS navigation systems that receive traffic information via radio.
One alternative that’s been tossed around would have cars talking to cars and sharing their environmental sensor data with each other. This would reduce the reporting requirements and related delays of the radio-based GPS navigation systems in place while significantly increasing the accuracy and timeliness of the data.
Unfortunately, this approach opens a can of legal and standardization worms. How do you prevent invalid information from being inserted by a third party? What happens if an accident arises due to the exchange of bad or insufficient data? Which cars will talk to each other? The list goes on.
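The first question at least has a partial technical answer: message authentication, where each message carries a keyed hash that a third party cannot forge without the key. The sketch below illustrates verification with OpenSSL’s HMAC; real vehicle-to-vehicle proposals lean toward certificate-based signatures, and key distribution, the hard part, is ignored here.

```c
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Illustrative only: authenticate a car-to-car message with
 * HMAC-SHA256 so a receiver can reject data forged by a third party.
 * How the shared key is distributed is the unsolved part. */
int message_is_authentic(const unsigned char *msg, size_t msg_len,
                         const unsigned char *tag,   /* 32-byte received tag */
                         const unsigned char *key, size_t key_len)
{
    unsigned char expected[32];
    unsigned int  expected_len = 0;
    HMAC(EVP_sha256(), key, (int)key_len, msg, msg_len,
         expected, &expected_len);
    return expected_len == 32 && memcmp(expected, tag, 32) == 0;
}
```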
Other interesting ideas hovering on the radar include heads-up displays (HUDs) and verbal interaction. The cost and effectiveness of these technologies aren’t right for mass markets yet, but the same once was true for vision systems, automatic stability control, and a host of other features, including airbags. A HUD allows overlays from the vision systems, enabling direct driver feedback—and that’s only one possible use. Overlaying building and terrain information is another.
Voice-activated command systems are already common for multimedia and climate control. Advances in voice recognition and the ability to bring more computing power to bear will allow this interface to improve. In turn, it will reduce the need for drivers to interact with the car via manual controls, thereby improving overall automotive safety.
ELECTRIC SAFETY
Hybrid and electric vehicles are becoming more prevalent, but they add their own safety issues to the equation. The primary concerns involve high voltage and the batteries the system requires.

Most systems employ a multicell battery pack. For example, Tesla Motors’ Roadster has a battery pack that incorporates more than 6000 lithium-ion (Li-ion) cells in the 18650 form factor and weighs almost 900 pounds (Fig. 7).
The system uses multiple microprocessors and sensors to monitor each cell as well as the battery cooling system’s temperature. Additional sensors track the environment and initiate a shutdown when an accident occurs or when maintenance is required.
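A monitoring loop of the kind described scans every cell against voltage and temperature limits and requests a shutdown on the first violation. The limits below are typical Li-ion ballpark figures for illustration, not Tesla’s actual thresholds; the cell count uses the widely cited Roadster figure.

```c
#include <stdbool.h>

#define NUM_CELLS  6831   /* widely cited Roadster cell count; the
                             article says "more than 6000"          */

/* Illustrative per-cell limits: typical Li-ion ballpark, not actual
 * production thresholds. */
#define MIN_CELL_V  2.5
#define MAX_CELL_V  4.2
#define MAX_TEMP_C  60.0

typedef struct { double volts; double temp_c; } cell_t;

/* Scan the whole pack; return true if a shutdown should be initiated. */
bool pack_fault(const cell_t cells[NUM_CELLS])
{
    for (int i = 0; i < NUM_CELLS; i++) {
        if (cells[i].volts < MIN_CELL_V || cells[i].volts > MAX_CELL_V)
            return true;    /* over- or under-voltage on any cell */
        if (cells[i].temp_c > MAX_TEMP_C)
            return true;    /* thermal limit exceeded             */
    }
    return false;
}
```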
Popular hybrids such as the Honda Civic and the Toyota Prius have less ambitious power systems, but their battery sensor and control systems are no less important (Fig. 8). Improvements in sensor technology and price reductions in microcontrollers will allow even safer systems to be constructed, right down to the cabling and connection points.
It should be interesting to see what kinds of safety systems next year’s car models have in store.