Peripheral vision

Jan. 1, 2007
Vision-sensing applications are surfacing in luxury vehicles but the market is fragmented, with design engineers divided on what technology works best. Chipmakers, meanwhile, are readying new devices with powerful image-processing capabilities.

URGED ON BY STRONG interest in safety applications, semiconductor developers, tier one suppliers and OEM customers are working to lower the cost and increase the effectiveness of camera-based systems for night vision, lane departure warning and various other applications.

STMicroelectronics, for example, is leveraging its development agreement with Mobileye N.V. while simultaneously offering CMOS camera chips for automotive vision-sensing applications including adaptive cruise control (Figure 1).

ST is manufacturing Mobileye's EyeQ1 system-on-chip, and the companies are working together on Mobileye's second-generation EyeQ2. Roger Forchammer, technical marketing manager in ST's North America Automotive Business Unit, said EyeQ1 has been adopted by several OEMs and tier one suppliers, and that EyeQ2 will be “six times more powerful” than the current version. The new chip, scheduled for production in 2009, will provide vision scalar and vector processing on a single die.

The EyeQ2 architecture consists of two 64-bit MIPS34K floating-point, multithreaded RISC CPUs; five vision-computing engines (VCEs); three vector microcode processors (VMPs); a 64-bit Denali mobile double data rate (DDR) controller; a 128-bit Sonics Interconnect; dual 16-bit video input and 18-bit video output controllers; 16 direct memory access (DMA) channels; and peripherals including dual CAN controllers, dual UART interfaces and an I2C interface.

“Mobileye's processors can do ‘everything,’ which can be overkill for OEMs developing dedicated applications,” said ST's Forchammer, who describes the automotive vision-sensing market as “fragmented.” His company also offers CMOS camera devices with dynamic range greater than 130 dB. “These applications demand high sensitivity for performance at low light, small pixel size for a minimum footprint, and high dynamic range to avoid information loss due to saturation,” Forchammer said.
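To put that 130 dB figure in perspective, the decibel-to-ratio conversion is straightforward; a quick sketch using the standard 20·log10 amplitude convention (illustrative only, not ST datasheet math):

```python
# Sensor dynamic range in dB, on the common 20*log10 amplitude
# convention: 130 dB corresponds to roughly a three-million-to-one
# ratio between the brightest non-saturating signal and the noise floor.
def db_to_ratio(db):
    return 10 ** (db / 20)

print(f"130 dB -> {db_to_ratio(130):.2e} : 1")  # ~3.16e+06 : 1
```

That ratio is what lets a single frame hold detail in both a dark underpass and oncoming headlights without clipping either.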

ST currently offers the VS6624, a single-chip device built around a CMOS sensor with a 1280 × 1024 active pixel array using ST's 3.0 µm pixel design. The device also includes a digital image signal processor (ISP) and analog system functions in a SmOP2 module that measures 8 mm × 8 mm × 6 mm, including passive components. The ISP provides pixel defect correction, sharpness enhancement, gamma correction, color space conversion, anti-vignetting algorithms and automatic white-balance control, all to optimize picture quality in varying light conditions. It produces an industry-standard digital video stream at up to 15 frames per second (fps) at full SXGA resolution and up to 30 fps at VGA resolution in a low-power video mode. An image-scaling feature optimizes data for display in non-native resolution settings.

OmniVision Technologies in mid-2005 launched its first CMOS image sensor designed especially for automotive applications, the OV7940. Last fall, the company introduced the OV7710, a quarter-inch 640 × 480 (VGA) digital video camera that is said to perform well in low-light environments such as parking garages.

The single-chip device features a dual dynamic overlay function intended to allow both a dynamic and a static text or graphics layer to serve as visual aids within the image. The OV7710 (and the black-and-white OV7211) also includes a windowing feature that lets users fine-tune fixed-position cameras when views are obstructed, by shifting the sensitive area of the camera a few pixels. Algorithms in the chip cancel fixed pattern noise (FPN), eliminate smearing, and reduce blooming, according to the company.
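The windowing idea is easy to picture in code. The sketch below is hypothetical — the array dimensions, offsets and read_window() helper are assumptions for illustration, not the OV7710's register interface:

```python
import numpy as np

# Hypothetical illustration of sensor windowing: the full pixel array is
# slightly larger than the VGA output window, so the window's origin can
# be nudged a few pixels to steer a fixed-position camera around an
# obstruction. Dimensions and offsets are assumptions, not OV7710 specs.
FULL_H, FULL_W = 492, 656   # assumed full array with spare border pixels
WIN_H, WIN_W = 480, 640     # VGA output window

def read_window(raw, y_off=6, x_off=8):
    """Return the active VGA window at the given origin offset."""
    return raw[y_off:y_off + WIN_H, x_off:x_off + WIN_W]

frame = np.zeros((FULL_H, FULL_W), dtype=np.uint8)
vga = read_window(frame, y_off=6, x_off=12)  # shifted 4 px right of center
```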

Melexis offers automotive-grade CMOS camera circuits for visible and near-infrared light, including the 352 × 288 (CIF) MLX75006 and the 750 × 400 (PVGA) MLX75007. Both feature an overmolded plastic package with an optional integrated glass lens stack designed to simplify assembly and to protect both the chip and the bond wires against scratches and light. Pixel response is programmable to achieve various dynamic range levels, and Melexis' ICs include the ability to monitor communications lines as well as on-chip analog and digital circuitry.

Micron offers two quarter-inch single-chip CMOS cameras, the MT9V111 (Figure 2) and MT9V125, both offering VGA resolution. The MT9V111 consists of a sensor core and an image flow processor (IFP), and it requires only a power supply, lens and clock source for basic operation. The sensor core captures raw Bayer-encoded images that feed into the IFP, which processes the incoming stream to create interpolated, color-corrected output and controls the sensor core to maintain desirable exposure and color balance.
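The interpolation step — turning the one-color-per-pixel Bayer mosaic into full RGB — can be sketched with textbook bilinear demosaicking. Micron's actual IFP algorithms are not disclosed here; this assumes an RGGB pattern:

```python
import numpy as np
from scipy.ndimage import convolve

# Bilinear demosaicking sketch for an assumed RGGB Bayer pattern: each
# missing color sample is the average of its nearest same-color neighbors.
def demosaic_rggb(raw):
    h, w = raw.shape
    r_m = np.zeros((h, w)); g_m = np.zeros((h, w)); b_m = np.zeros((h, w))
    r_m[0::2, 0::2] = 1
    g_m[0::2, 1::2] = 1; g_m[1::2, 0::2] = 1
    b_m[1::2, 1::2] = 1
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue kernel
    raw = raw.astype(np.float64)
    return np.dstack([convolve(raw * r_m, k_rb),
                      convolve(raw * g_m, k_g),
                      convolve(raw * b_m, k_rb)])
```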

Video output from the MT9V125 is formatted directly on the sensor, so no encoder chip is required. Both chips perform processing functions including color recovery, color correction, sharpening, programmable gamma correction, auto black reference clamping, auto exposure, automatic 50/60 Hz flicker avoidance, lens shading correction, auto white balance (AWB), and on-the-fly defect identification and correction.
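Two of those stages lend themselves to a quick sketch — a gray-world auto white balance and a display gamma curve. Both are generic textbook versions with assumed parameters, not Micron's implementations:

```python
import numpy as np

# Gray-world auto white balance: assume the scene averages to neutral
# gray, and scale each channel so its mean matches the overall mean.
def gray_world_awb(rgb):
    means = rgb.reshape(-1, 3).mean(axis=0)          # per-channel means
    return np.clip(rgb * (means.mean() / means), 0, 255)

# Gamma correction for display (2.2 is an assumed, typical target).
def gamma_correct(rgb, gamma=2.2):
    return 255.0 * (np.clip(rgb, 0, 255) / 255.0) ** (1.0 / gamma)
```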

While much industry attention is focused on CMOS devices for automotive vision sensing, development work is continuing on charge-coupled device (CCD) cameras. Fujitsu Microelectronics America, for example, offers the CJ-421N, a compact (21 cubic centimeters) device with sensitivity down to 2 lux and an anti-mist feature designed to reduce or eliminate lens condensation. It has a 120° viewing angle, horizontal resolution of 320 TV lines, and a special optical filter to remove infrared light effects. Other CCD cameras include the CG311N, a color camera with an auto iris level control, and the CB-341, with a wide varifocal lens and 480-line resolution.

As with other leading-edge automotive technologies, vision-sensing applications are surfacing in luxury vehicles. The Lexus LS460, for example, offers an optional near-infrared night-view system (Figure 3) to supplement what a driver can normally see at night. The system consists of a camera mounted behind the upper portion of the windshield, two projectors mounted on the front bumper, and a 5.8-inch liquid crystal display located above the instrument panel on the driver's side.

The projectors emit near-infrared rays as far ahead of the vehicle as high-beam headlights, within a horizontal field angle of 17° and a vertical field angle of 12.75°. A CCD camera receives the reflected rays and converts them to image data that is transmitted to the night-view electronic control unit (ECU). The ECU is mounted below the display. A battery ECU adjacent to the night-view ECU controls power to the system.
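Those field angles translate into a fairly narrow illuminated strip. A quick geometry check, with 100 m as an illustrative distance:

```python
import math

# Width of the area covered at distance d for a given field angle:
# w = 2 * d * tan(angle / 2). A 17-degree horizontal field covers a
# strip roughly 30 m wide at 100 m out.
def coverage_width(distance_m, fov_deg):
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

print(f"{coverage_width(100, 17.0):.1f} m")  # ~29.9 m
```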

The night-vision system on 2007 model year Mercedes S-Class vehicles was developed by Bosch. The system uses near-infrared technology and has a range of nearly 500 feet, about three times that of traditional low-beam headlamps. The system consists of an infrared video camera between the inside mirror and front windshield, a display in the cockpit, a control unit, and two infrared headlamps. Bosch is integrating night vision with other driver-assistance systems and can use the same camera for lane-departure warning and road sign recognition.

While Toyota (Lexus) and Mercedes use near-infrared technology for night vision, BMW and Honda prefer far-infrared technology based on thermal imaging. BMW offers night vision on its 5 series, 6 series and 7 series cars, while Honda reportedly offers a far-infrared system on the Legend, which it sells in Japan.

BMW's system is designed to detect people, animals and objects in front of a car before they become visible in the car's headlights. Images transmitted to a control display in the center of the car's instrument panel are projected with increasing brightness based on the amount of heat detected by the camera; thus people and animals are most prominent.
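In code terms, that heat-to-brightness mapping could be as simple as the normalization below — a plain min-max rescale standing in for whatever proprietary rendering BMW actually uses:

```python
import numpy as np

# Map a thermal frame to display brightness: hotter pixels (people,
# animals) render brightest. A simple linear normalization, offered
# only as an illustration of the principle.
def thermal_to_display(frame):
    lo, hi = float(frame.min()), float(frame.max())
    return ((frame - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)
```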

BMW engineers believe that displaying a detailed image of the current traffic situation would delay a driver's recognition of people or animals within it. “Insignificant” image details are cancelled so they do not distract the driver's attention. Far-infrared technology, according to BMW, also allows drivers to “look” farther ahead, reaching close to 1,000 feet, or about twice the range of near-infrared systems. At 100 km/h (62 mph), that amounts to as much as five extra seconds, according to BMW.
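The five-second figure is straightforward unit arithmetic:

```python
# The extra ~500 ft of range over a near-infrared system, covered
# at 100 km/h, buys roughly five and a half seconds of warning.
extra_range_m = 500 * 0.3048            # ~152.4 m
speed_m_per_s = 100 * 1000 / 3600       # 100 km/h ~ 27.8 m/s
print(f"{extra_range_m / speed_m_per_s:.1f} s")  # ~5.5 s
```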

Near-infrared devotees counter that their technology provides more natural looking images, compared with thermal systems, and makes objects that don't generate heat visible to drivers. Beth Schwarting, vice president of Delphi's Safety Systems product business unit, said her company's active night-vision system, which uses near-infrared technology, provides high-beam visibility without blinding oncoming traffic.

Schwarting said Delphi uses a scalable architecture and a single CMOS camera to meet multiple vision-sensing requirements because that approach costs less than developing independent systems for applications such as smart cruise control, lane-departure warning, headlight control, active night vision, rain sensing, road sign recognition, and pedestrian recognition. She added that a single camera reduces contention for limited windshield space.

Other companies use different technologies for different requirements. Gary Collins, North America business development manager for safety electronics at Siemens VDO Automotive, said his company uses CMOS cameras for lane-departure warning systems but prefers 24 GHz frequency-modulated continuous-wave (FMCW) radar technology for blind-spot detection, and lidar for adaptive cruise control.

“Each technology has advantages and challenges,” Collins said. “Radar is best for blind spot detection, because we're looking at a large area. A 3-D camera would cost too much, and lidar is better used in more limited areas.”

For its lane departure warning system, Siemens mounts a CMOS camera near a vehicle's rear-view mirror. A 32-bit, 1500 MIPS MCU controls the recording and processing of image data, finding lane markers on the road and alerting the driver if the vehicle begins to move outside the markers. To conserve windshield space, Siemens combined its lidar system for adaptive cruise control with its CMOS camera for lane departure warning.

In its LS 460 L, Lexus offers an optional advanced parking guidance system that combines ultrasonic sensors developed by Denso Corporation with camera-based image-recognition technology from Aisin Seiki Co. Ltd.

Key to the effectiveness of automotive vision systems is image-processing capability. The Video and Image Processing Blockset from The MathWorks, for example, provides design engineers with a library of functions for embedded video/imaging systems such as target tracking, region-of-interest processing, blob analysis, and the Hough transform.
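As a rough illustration of the kind of function such a library supplies — not MathWorks code — here is a bare-bones Hough line accumulator in Python. Peaks in the accumulator correspond to prominent straight lines such as lane markers:

```python
import numpy as np

# Minimal Hough transform for line detection: every edge pixel votes
# for all (rho, theta) lines that pass through it.
def hough_accumulate(edges, n_theta=180):
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for y, x in zip(*np.nonzero(edges)):
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rho + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# The strongest line falls out of the accumulator peak:
# r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
```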

On the hardware side, NEC Electronics, Toyota and Denso last summer introduced the IMAPCAR image processor for real-time detection of nearby objects such as vehicles, pedestrians and lane markers. Toyota includes the device in the Lexus LS460.

The IMAPCAR, fabricated on a 0.13 µm process and drawing less than 2 W, delivers up to 100 billion operations per second (BOPS) from 128 parallel processing elements, each with its own random access memory, while relying on software for image-recognition functions, which makes solutions easier to modify (Figure 4).
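The essence of that SIMD style is that every element executes the same instruction stream on its own slice of the image. The toy emulation below assumes a stripe-per-element work split for illustration — the IMAPCAR's actual mapping is not described in detail here:

```python
import numpy as np

N_PE = 128  # parallel processing elements, as in the IMAPCAR

# Toy SIMD emulation: a VGA frame is split into one vertical stripe per
# element, and every "element" runs the identical per-pixel kernel
# (here a horizontal gradient) on its own stripe and local memory.
def simd_h_gradient(frame):
    h, w = frame.shape                      # 480 x 640; 640 / 128 = 5
    stripes = frame.reshape(h, N_PE, w // N_PE).astype(np.int16)
    gx = np.abs(np.diff(stripes, axis=2))   # same op in every stripe
    return gx.reshape(h, -1)                # stripe borders ignored for brevity

edges = simd_h_gradient(np.zeros((480, 640), dtype=np.uint8))
```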

David Stone, director of marketing for NEC Electronics America's Automotive Strategic Business Unit, said the IMAPCAR chip can be interfaced with multiple sources of image information including radar, lidar or vision cameras. “Vision-based driver warning systems are in their infancy, and the market is fragmented,” he said. “Design engineers must be concerned with ‘sensor fusion,’ where multiple sources of information complement each other. The challenge is to be able to process this data in a timely manner.” Stone said NEC has been working on parallel processing technology for image recognition since 1990.
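A skeletal sketch of the sensor-fusion idea Stone describes — the types, fields and threshold below are invented for illustration, nothing NEC-specific:

```python
from dataclasses import dataclass

# Hypothetical fusion sketch: radar measures range precisely but cannot
# classify; vision classifies but ranges poorly. Associating detections
# by bearing lets each sensor fill the other's gap.
@dataclass
class RadarDet:
    range_m: float
    bearing_deg: float

@dataclass
class VisionDet:
    bearing_deg: float
    label: str  # e.g., "pedestrian" or "vehicle"

def fuse(radar_dets, vision_dets, max_gap_deg=2.0):
    """Pair detections whose bearings agree within max_gap_deg."""
    return [(v.label, r.range_m, r.bearing_deg)
            for r in radar_dets
            for v in vision_dets
            if abs(r.bearing_deg - v.bearing_deg) <= max_gap_deg]
```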

ABOUT THE AUTHOR

John Day writes about automotive electronics and other technology. He holds a BA degree in liberal arts from Northeastern University and an MA in Journalism from Penn State. He can be reached by e-mail at [email protected].
