Intelligent video’s expanding presence in all sorts of applications is driven by several factors: the shift from analog to digital sensing; improved wired and wireless networking; and more sophisticated software. The latest systems push beyond basic image sensing and capture capabilities to image analysis, thanks to powerful video-processing hardware and intelligent video software.
Today’s intelligent cameras match the functionality of yesterday’s PC-based systems. These ever-shrinking cameras feature faster, higher-bandwidth connections between their outputs and other circuits, with some functions on the sensing chip and many more within the box or housing itself. Previously, such functions required an external PC platform or host computer for processing.
From the business side, intelligent video has become a matter of practicality. In manufacturing, it’s virtually impossible to examine products coming off a high-speed production line for quality and defects without using intelligent video systems. In surveillance, security personnel viewing a monitor at an airport or other public facility face the dual challenge of spotting the bad guys while fighting off boredom.
Intelligent video-surveillance systems aren’t just more vigilant, they’re also more accurate. With their greater intelligence, they can discern nuances in observed individuals and make the necessary critical analyses and judgments that are difficult if not impossible for a human operator.
In addition to getting more intelligent by using on-chip or in-the-camera-box hardware and software, video cameras can support downloadable modules produced by third-party vendors. Some companies offer totally integrated camera solutions, while others allow for more flexibility. Flexibility in this case means that the user can fine-tune the camera system to observe and track a specific type of object or objects for specific features. The trend is to gather a small amount of data at high speed from the camera sensor for further analysis down the line, leading to a more intelligent analysis of what is being viewed.
VIDEO INTELLIGENCE IS BIG BUSINESS
Major cities in the U.S. and other countries now use sophisticated video-surveillance technology to improve their security infrastructure. Chicago, Ill., for instance, has partnered with IBM to implement a smart video-surveillance network that’s been dubbed the most advanced of its kind in the U.S. It can alert officials whenever it detects a specific vehicle’s license-plate number, observes a vehicle circling a specific location, or spies an unattended object.
“Video surveillance is the fastest-growing market for digital video chip providers,” says Chris Day, president and CEO of Mobilygen. According to the China Security Market Report issued by the Security Industry Association, China’s video security and protection market (including fire and safety monitoring, security surveillance, and access control) is projected to jump from $6.3 billion in 2005 to $18 billion by 2010.
Broadly speaking, there are two types of intelligent camera systems: smart (or IP) cameras and embedded systems. Smart cameras integrate the camera sensor and the processing circuitry in a single unit. In embedded systems, the video sensor sits in one location and connects, by wire or wirelessly, to an embedded system elsewhere that consists of a frame-grabber board and a processor external to the camera sensor.
These differences are narrowing, though. Some smart cameras have so much intelligence, either on the sensing chip or within the same case holding the chip, that many of their manufacturers consider them embedded systems. However, the application usually dictates the type of camera system needed.
Smart cameras are more useful in space-challenged applications, but typically they don’t have as much processing power as the embedded approach. Although they’re generally less expensive than embedded camera systems, that cost advantage can quickly evaporate when using multiple cameras.
An embedded approach offers greater programming flexibility and becomes a very desirable option for applications in which multiple cameras view many scenes. Moreover, it can support a wide range of cameras from different manufacturers, and because one embedded unit can service multiple cameras, it saves space as well. On top of that, it provides more freedom when choosing a camera signal’s interface.
Leutron Vision’s LVmPC micro PC uses the embedded approach to save space in embedded vision applications. It combines innovative notebook and frame-grabber technologies in a small 91- by 92- by 182-mm footprint. Designed for popular operating systems like Windows, Linux, and VxWorks, it uses Intel’s ultra-low-power Pentium III and Celeron processors.
Another space-saver is the CCD-based (charge-coupled device) In-Sight Micro smart camera from Cognex, which measures just 30 by 30 by 60 mm (Fig. 1). It’s designed for mounting in tight spaces on robots, production lines, and machinery. A flexible mounting capability with a non-linear calibration tool allows for mounting at angles up to 45° for hard-to-reach applications.
Most machine-vision cameras use CMOS sensors in applications where low cost is important and performance isn’t demanding. CCD imagers, though, dominate applications that require high performance (see “Choosing An Image Sensor: It’s All About The Application” at www.electronicdesign.com, Drill Deeper 18801). Roughly half of all image sensors are analog, with the other half being digital. Both types have driven the trend toward smaller machine-vision cameras.
But CMOS-based cameras aren’t far behind in performance. The CMOS A400 area scan series from Basler Vision Technologies AG targets industrial users who require high resolution (up to 4 Mpixels) coupled with high speed (96 frames/s). Its three shading correction choices contribute to image quality.
A multiple sequencer built into each camera lets users change the automated-optical inspection setting from frame to frame with no time delay. A standard Camera Link interface simplifies system integration and provides increased flexibility when changing cameras or frame grabbers. Like many other manufacturers, Basler also offers CCD-based cameras.
The latest camera-sensor advances often involve the optics rather than the type of sensor used. For example, Tessera Technologies Inc. is making its OptiML Zoom available for licensing. This capability combines a unique lens design with specialized algorithms to replace traditional mechanical zoom, enabling 3X zoom in a compact camera module.
OptiML Zoom permits camera-phone module makers to integrate zoom functionality at a lower cost than traditional mechanical approaches, without the need for any moving parts. In addition, it reduces the overall size of the camera module.
Designers looking for high-quality camera images with high resolution may want to consider the hardware-acceleration Fast-Track IP from FotoNation. It improves face-tracking quality and performance by up to 400% in digital cameras and camera phones. This is achieved by integrating the location and exposure information of human faces with the camera’s exposure feedback system.
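The general idea of coupling face location with exposure control can be sketched in a few lines. This is purely an illustrative assumption, not FotoNation’s actual algorithm or API; the function name, the mid-tone target value, and the raw-frame representation are all hypothetical:

```python
# Illustrative sketch (NOT FotoNation's API): metering exposure on a
# detected face's region instead of the whole frame. The frame is a
# row-major list of 8-bit luma values; face_box comes from a tracker.
def face_weighted_exposure_error(frame, width, face_box, target_luma=118):
    """Return the luma error the auto-exposure loop should correct.

    face_box is (x, y, w, h); target_luma is an assumed mid-tone
    tuning value, not a figure from the article.
    """
    x, y, w, h = face_box
    total = count = 0
    for row in range(y, y + h):
        for col in range(x, x + w):
            total += frame[row * width + col]
            count += 1
    mean_face_luma = total / count
    # Positive error -> face is too dark -> increase exposure.
    return target_luma - mean_face_luma

# Usage: a 4x4 frame whose 2x2 face region averages 100 -> error +18.
frame = [100] * 16
print(face_weighted_exposure_error(frame, 4, (0, 0, 2, 2)))  # 18.0
```

The point of the sketch is simply that weighting the meter toward the face region keeps skin tones near a mid-tone target even when the background is very bright or very dark.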
Today’s smart cameras feature built-in “video analytics,” a term that emerged during the 1990s to describe computer-vision smarts for security and surveillance systems. Powerful DSPs now can execute more demanding software applications for video systems. They constitute the processing cores of intelligent video codecs like Siemens’ Sistore CX codecs. Working with a host computer or PC, these codecs handle general administration, storage, and networking of video-system information.
One example of a smart camera, the CCD-based Li045 from Lumenera Corp., includes a Texas Instruments DaVinci-based processor. It uses the Pixim Orca chip set, an ultrawide-dynamic-range (120-dB) sensor that overcomes washed-out images in challenging lighting environments. It delivers high-caliber color rendition and image quality under various lighting conditions, as well as selectable MJPEG and H.264 compression.
The company also recently released the Lm085 mini. This small form-factor (44 by 44 by 56 mm) CMOS camera offers a 100-dB dynamic range designed for challenging industrial environments with uncontrolled lighting conditions and tight space constraints (Fig. 2).
“Texas Instruments pioneered many algorithms for video analytics during the last two decades,” says Bruce Flinchbaugh, a Texas Instruments Fellow and director of its video and image processing laboratory. He points out that many of today’s smart cameras are built on TI’s DaVinci platform with a multitude of processors and that “the TI TMS320DM642 DSP has been instrumental in reducing the costs of video analytics applications, especially for digital video recorders.”
The TMS320DM642 is based on TI’s second-generation, high-performance VelociTI very-long-instruction-word (VLIW) architecture. Performance measures up to 5760 MIPS at a 720-MHz clock rate.
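The quoted figure follows directly from the VLIW issue width: the DM642’s C64x core can dispatch up to eight instructions per cycle across its parallel functional units, so peak throughput is simply the clock rate times eight:

```python
# Peak MIPS for a VLIW DSP: clock rate times instructions issued per cycle.
# The C64x core in the TMS320DM642 issues up to 8 instructions per cycle.
clock_mhz = 720
issue_width = 8  # parallel functional units in the VLIW core
peak_mips = clock_mhz * issue_width
print(peak_mips)  # 5760, matching the quoted figure
```

As with any peak-MIPS number, sustained throughput depends on how well the compiler keeps all eight units busy.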
TI’s DM355IPNC-MT-5 high-definition, IP network camera reference design is based on the DaVinci TMS320DM355 digital media processor and Aptina’s 5-Mpixel high-definition security image sensor. Aimed at IP surveillance networks, it provides flexibility for an easy upgrade path to IP video at analog video-camera prices.
Texas Instruments’ TMS320DM6446 DaVinci platform and Pixim’s sensor are behind Nuvation’s ultra-compact IP Power over Ethernet (PoE) camera (Fig. 3). More recently, Nuvation introduced four video reference designs for the DaVinci platform to accelerate time-to-market.
Apollo Imaging Technologies also uses TI’s digital media processors in cameras aimed at OEMs for applications in fire, smoke, and intrusion detection; true-color night vision; high-speed digital camera/event analysis; and unmanned-aerial-vehicle (UAV) video links. The company offers a low-cost imaging video-analytics development platform.
The Edge products from Cernium Corp. combine the company’s patented P-Core analytics technology with the portability of a DSP platform such as the TI DM642 and DaVinci processors. This enables higher performance than DSP-based products alone can offer, because analytics can run on a complete suite of behaviors across multiple inputs.
Many other processors on the market target video systems, including the PowerPC; Intel’s Pentium III, Celeron, and other x86 processors; and AMD’s Geode SC2200. Ann Arbor Systems uses Analog Devices’ Blackfin ADSP-BF533 DSP to power its AXT100 thermal infrared-imaging camera (Fig. 4).
Startup Stretch Inc. wraps designs and software around its S6000 configurable processor for building low-cost networked surveillance cameras and digital video recorders. The company says its approach can deliver 30 frames/s for an H.264 video stream at D1 standard resolution, at a cost of as little as $6.25 for the processor.
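Some rough arithmetic shows why H.264 compression is essential at D1 resolution. The frame dimensions, chroma subsampling, and 2-Mbit/s target below are our own assumptions for illustration, not Stretch’s figures:

```python
# Back-of-the-envelope bandwidth for a 30-frame/s D1 H.264 stream.
# Assumptions (not from the article): NTSC D1 = 720x480, YUV 4:2:0
# sampling (12 bits/pixel), and a typical 2-Mbit/s surveillance bit rate.
width, height, fps = 720, 480, 30
bits_per_pixel = 12          # 4:2:0 chroma subsampling
raw_mbps = width * height * bits_per_pixel * fps / 1e6
target_mbps = 2.0            # assumed H.264 target bit rate
print(round(raw_mbps, 1))              # ~124.4 Mbits/s uncompressed
print(round(raw_mbps / target_mbps))   # roughly 62:1 compression
```

An uncompressed D1 stream would saturate most network links on its own, which is why the codec, not the sensor, dominates the processing budget in these cameras.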
Some companies, like Apollo Imaging Technologies, try to cram as much video circuitry as possible into cameras designed for OEMs that specialize in developing video analytics. These OEMs’ intellectual property lies primarily in the image-processing arena, as opposed to high-performance image-processing hardware development.
Functioning as development platforms, such products typically feature enough capability to replace a conventional camera, PC, frame grabber, and associated cable, power supplies, and other components, all within the space of an industrial camera.
THE RIGHT DEVELOPMENT TOOLS
Development tools as well as the software and its algorithms are key to cost-effectively developing intelligent video systems while meeting time-to-market. To that end, National Instruments’ NI Vision represents one of the more powerful and comprehensive development and software platforms.
NI Vision’s hardware ranges from plug-in devices for PCI and PXI systems to image processing on the sensor itself with NI’s Smart Camera (Fig. 5). Options include image-acquisition software to acquire images from thousands of cameras, a top-notch image-processing library, and a configurable interface for industrial machine-vision applications.
“It is important that a smart camera’s software platform be extremely open and flexible to handle a variety of different requirements. That’s the philosophy behind NI’s approach,” says Matt Slaughter, product marketing manager for NI Vision. “A lot of people are trying to make it easier to use out-of-the-box vision systems without having to invest a lot of money.”
When Sylvania Lighting needed to integrate machine-vision and motion hardware and software to produce improved metal-halide lamps, it turned to NI’s products. It chose a Windows-based PC along with NI’s PCI-7831R reconfigurable I/O board with an on-board FPGA, an NI PCI-7356 motion board, and an NI PCI-8252 IEEE 1394 camera interface board. The development software included NI LabVIEW, the NI Vision Assistant, and the NI LabVIEW FPGA Module.
Many popular operating systems are being used to develop intelligent video systems, including Windows CE, XP, and XP Embedded (XPe), along with .NET and VxWorks. Linux, with its open-source platform, is another popular software choice.
Several standard analog and digital interfaces are available, including FireWire (IEEE 1394), GbE (Gigabit Ethernet), USB, and Camera Link. Each accommodates different data-transmission rates, cable lengths and types, interface boards, the number of cameras supported, and plug-and-play capability (see “Different Interfaces For Camera Signals,” Drill Deeper 18804).
GbE is a popular interface standard for high-performance, machine-vision industrial cameras like the Dalsa Corp. Genie Color series (Fig. 6). The Automated Imaging Association is overseeing the standard’s ongoing development and administration. It features a data-transfer rate of up to 1000 Mbits/s over distances up to 150 m, a longer reach than FireWire, USB, or Camera Link can offer.
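A quick calculation shows what that 1000-Mbit/s rate means for uncompressed machine-vision video. The example resolution is our assumption rather than a figure tied to any camera above, and real GigE Vision links lose some of this ceiling to protocol overhead:

```python
# Theoretical frame-rate ceiling of a GbE camera link (ignores Ethernet
# and GigE Vision protocol overhead, so real throughput is lower).
# Example resolution is an assumption for illustration only.
link_mbps = 1000
width, height, bits_per_pixel = 1024, 768, 8   # 8-bit monochrome
bits_per_frame = width * height * bits_per_pixel
max_fps = link_mbps * 1e6 // bits_per_frame
print(int(max_fps))  # 158 frames/s upper bound
```

The same arithmetic explains why high-resolution color cameras on GbE either run at modest frame rates or compress on-camera before hitting the wire.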
Increased camera intelligence and greater functionality have highlighted the need for a comprehensive application programming interface (API). As a result, the European Machine Vision Association (EMVA) has developed the GenICam standard, which encompasses cameras, the types of transport layer interfaces, and software libraries, regardless of type or brand name (Fig. 7).
The standard consists of GenApi for configuring a camera; the SFNC, a standard naming convention for common camera features; and GenTL, a transport-layer interface for frame grabbers. GenApi is currently part of the official standard, release 1.1.0. The GenTL specification is expected to be completed soon.
Wireless video connectivity has also improved. Developed to increase both the range and transfer rate of wireless video signals, the 802.11n protocol allows the use of advanced encryption techniques. It features operating frequencies of 2.4 and 5 GHz and a maximum data-transmission rate of 248 Mbits/s.