The wait may finally be over for engineers in the industrial sector looking for truly efficient, less costly machine-vision systems. A series of technological advances—high-performance smart imaging cameras, better-suited illumination wavelengths, higher processing power, smarter software algorithms, and advanced communications interfaces—will ultimately create systems that raise manufacturing productivity and lead to higher-quality, more reliable end products.
Advanced image-processing algorithms, combined with new technologies like multicore processing and falling memory costs, have allowed vision systems to be deployed across a wider range of industrial-automation applications (see “Successes Mount In The Machine-Vision Market”). Among the common machine-vision tasks enhanced by these advances are go/no-go product-quality assessment and accurate 3D measurements.
CMOS Pushes Into CCD Realm
CMOS image cameras are assuming more tasks from charge-coupled device (CCD) imagers thanks to plummeting CMOS imager costs and rising performance levels. On top of that, these ever-shrinking CMOS sensor cameras feature greater operating flexibility. CCD sensor cameras aren’t left out in the cold, though, since certain industrial applications still require their unmatched performance.
Typical of cameras built around advanced CCD color image sensors are the Genie and Boa developed by Teledyne Dalsa (Fig. 1). The Genie uses a high-resolution, 1034- by 779-pixel, 120-frame/s Sony sensor that allows data transmission over gigabit Ethernet (GigE) for distances up to 100 m. Software included with the Boa enables scalable solutions for applications ranging from positioning robotic handlers to complete assembly verification.
CMOS image sensors are often formed on 300-mm wafers with 40-nm line geometries, in processes that employ exotic materials such as hafnium oxide and tantalum oxide. Some estimates say that 1.7 billion image sensors were sold last year. Although a large number of these wound up in consumer electronics items, they’re beginning to take off in applications like machine vision and medical imaging.
According to Ray Fontaine, a process analyst at Canadian firm Chipworks Inc., there’s a tremendous amount of innovation occurring in the CMOS sensor arena. He notes that while CMOS image sensors traditionally required four transistors per pixel, newer designs have made substantial reductions. In fact, Sony now employs a transistor sharing scheme to bring the total down to 1.375 transistors per pixel.
One example of the strides made in CMOS image-sensor performance comes from Germany’s Fraunhofer Institute for Microelectronic Circuits and Systems (IMS). Using 0.5-µm pixels, researchers developed a CMOS image sensor that operates over a temperature range of –40°C to 115°C, suiting it for harsh industrial environments (Fig. 2). The upper limit of CCD imager operation is roughly 60°C.
“The main issue with making cameras that can operate at high temperatures is the increase in dark current,” says Werner Brockherde, head of the optical sensor systems department and project leader at IMS. A rise of just 8°C doubles the dark current, which manifests itself in electrical noise, reducing the camera’s dynamic range and affecting its ability to record images at low light levels. In addition, ghosting in the form of artifacts or fuzziness degrades image quality.
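The 8°C doubling rule quoted above compounds quickly. The sketch below puts illustrative numbers on it; the 10 e-/s reference value is an assumption for demonstration, not a figure from any sensor datasheet.

```python
# Sketch: how dark current scales with temperature, assuming the
# rule of thumb quoted in the article (doubling every 8 degrees C).
# The reference value below is illustrative, not from a datasheet.

def dark_current(i_ref_e_per_s: float, t_ref_c: float, t_c: float,
                 doubling_deg_c: float = 8.0) -> float:
    """Dark current at temperature t_c, given a reference measurement."""
    return i_ref_e_per_s * 2.0 ** ((t_c - t_ref_c) / doubling_deg_c)

# Hypothetical sensor: 10 e-/s per pixel at 25 C.
i25 = 10.0
i115 = dark_current(i25, 25.0, 115.0)  # at the IMS sensor's 115 C limit
print(round(i115))  # about 2400x the room-temperature value
```

Ninety degrees of headroom means more than eleven doublings, which is why suppressing the dark-current mechanism itself (rather than merely cooling the sensor) was the decisive step.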
Key to overcoming this challenge was burying each pixel’s photodiode and stacking two p-n junctions on top of each other. Because the p-n junctions were not at the surface, the researchers were able to avoid surface recombination, a major contributor to dark current.
Today, scores of tiny CMOS image sensor cameras are used for machine-vision applications, some outside the factory floor. For instance, Basler Vision Technologies’ 5-Mpixel camera measures 29 by 29 by 42 mm and features GigE and Power-over-Ethernet (PoE) interfaces (Fig. 3).
According to the company, it’s the smallest GigE camera in its class with PoE communications. Resolutions range from VGA up to 5 Mpixels, at frame rates up to 100 frames/s (up to 14 frames/s at full resolution). It has recently been approved for use in several medical and traffic systems, at a list price of 349 euros.
Three-dimensional camera technology measures the shape of an object during production, which helps improve product quality and cut manufacturing costs. Adding color capability further enhances those advantages.
Often, 3D machine vision is used to grade fruits and vegetables, lumber, cosmetics, baked goods, electronic assemblies, and pharmaceutical products. It boosts the throughput of quality parts, allows for the scrapping of bad parts earlier in the process, and reduces waste. It’s ideal for imaging such product attributes as height, shape, volume, and even color.
Much like the human eye, a machine-vision camera perceives the color of a product under inspection differently depending on the illumination source and the type of image sensor (and its lenses). Most machine-vision systems analyze product images in grayscale. In certain cases, however, color machine-vision software is needed to accurately detect the shape and contour of a product’s image.
Most color cameras consist of a single sensor that uses a color filter array, or mosaic. This mosaic typically consists of optical filters for red, green, and blue (RGB) overlaid in a specific pattern over the sensor’s pixels. The mosaic is then decoded by converting the raw sensor data into an RGB value for each pixel.
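As a concrete illustration of that decoding step, here is a minimal sketch of the crudest possible mosaic decode, which treats each 2-by-2 cell of the common RGGB pattern as one output pixel. Real cameras interpolate per pixel (bilinear or edge-aware); the function name and sample values here are purely illustrative.

```python
# Minimal "binning" demosaic for an RGGB Bayer mosaic: each 2x2 cell
# (one red, two green, one blue filtered pixel) becomes one RGB pixel
# at half resolution. Production demosaicing interpolates per pixel.

def demosaic_rggb(raw):
    """raw: 2D list of sensor values under an RGGB filter mosaic.
    Returns a half-resolution image of (R, G, B) tuples."""
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            r = raw[y][x]                             # red-filtered pixel
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2   # average of two greens
            b = raw[y + 1][x + 1]                     # blue-filtered pixel
            row.append((r, g, b))
        out.append(row)
    return out

# Illustrative 4x4 raw frame:
raw = [[100, 50, 100, 50],
       [ 50, 20,  50, 20],
       [100, 50, 100, 50],
       [ 50, 20,  50, 20]]
print(demosaic_rggb(raw)[0][0])  # (100, 50.0, 20)
```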
The advent of higher-speed and higher-power microprocessors has initiated new machine-vision applications. As a result, machine-vision designers are focusing on developing hardware-independent algorithms for colorimetry, better chrominance and luminance decompositions, and decoding color mosaics.
Along the way, they’ve discovered that there’s a greater selection of silicon processing engines than the traditional CPUs from Advanced Micro Devices and Intel. ASICs, DSPs, FPGAs, and graphics processing units (GPUs) offer designers a wider array of tools for software algorithm development.
One of the 3D CCD color imagers currently on the market is the GatorEye GigE vision camera from Matrox Imaging (Fig. 4). This IP67-rated industrial camera is designed to operate in very harsh environments thanks to a sturdy, dust-proof, and washable casing.
The GatorEye features laser line extraction: only the laser line of an image is extracted, to sub-pixel accuracy, producing the corresponding positional depth/height array. Only that extracted profile is then sent over the GigE link, lightening the interface’s load and letting the controlling PC concentrate on measuring and analyzing the inspected product.
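The extraction step can be sketched with the center-of-gravity method that is the textbook approach in laser triangulation. Matrox’s actual algorithm is proprietary, so this version is only an assumption about the general technique, with made-up image values.

```python
# Sketch of sub-pixel laser-line extraction via the center-of-gravity
# method: for each image column, compute the intensity-weighted mean
# row index of the laser stripe. Assumes the stripe is present (nonzero
# intensity) in every column.

def extract_line(image):
    """image: 2D list (rows x cols) of intensities.
    Returns one sub-pixel row coordinate per column."""
    rows, cols = len(image), len(image[0])
    profile = []
    for x in range(cols):
        col = [image[y][x] for y in range(rows)]
        total = sum(col)
        # intensity-weighted mean row index = stripe center
        profile.append(sum(y * v for y, v in enumerate(col)) / total)
    return profile

# A stripe centered between rows 1 and 2 in every column:
img = [[0, 0, 0],
       [5, 5, 5],
       [5, 5, 5],
       [0, 0, 0]]
print(extract_line(img))  # [1.5, 1.5, 1.5]
```

Note how only this one float per column, rather than the whole frame, would need to cross the camera interface, which is the bandwidth saving the GatorEye exploits.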
The camera comes in six sensor configurations designed for specific resolution, frame-rate, and format requirements, in either monochrome or color versions. These include 640 by 480 pixels at 110 frames/s in a 1/3-in. format, 1280 by 960 pixels at 22.5 frames/s in a 1/3-in. format, and 1600 by 1200 pixels at 15 frames/s in a 1/1.8-in. format.
For connectivity to external devices, the GatorEye includes an optocoupled trigger input, a strobe output, eight general-purpose inputs/outputs (GPIOs), and a controlled current source for direct driving of LED illuminating sources. The camera is powered by either a 12- to 24-V dc source, common in industrial environments, or a PoE source in which both power and Ethernet streams share a single cable.
Matrox also offers color image-analysis software. For instance, the Matrox Design Assistant is an integrated development environment that’s bundled with smart cameras like the company’s Iris GT. Users can create machine-vision applications by constructing a flowchart instead of coding programs or scripts using languages like Visual Basic, C, C++, or C#.
Once development is finished, the project (or flowchart) is uploaded and stored locally on the Matrox Iris GT. The project then can be executed on the smart camera without the need for any companion PC. In fact, in this case, it’s monitored and controlled from the programmable logic controller (PLC) over an Ethernet link.
Gas-discharge sources have dominated the machine-vision illumination market due to their relatively flat light spectrum. Change is afoot, though, thanks to enhanced LED light sources featuring brighter outputs, longer lifetimes, and more modest price tags. Output wavelength consistency and uniformity are also improving for LEDs, issues critical to successful machine-vision implementation.
Manufacturers of machine-vision lighting sources are also concentrating on focusing LED sources. They’re developing layered optics that include not only the package, but also the light source’s diffusers and polarizers, to best match a customer’s needs.
Another driving force in the move toward LEDs for machine-vision lighting is smarter LED drivers, some of which sit on the same chip as the image sensor. The added intelligence provides more precise control over light intensity and allows an LED to be safely overdriven for strobing applications, resulting in longer LED lifetimes.
Choose The Right Interface
Improved camera technology and demands for wider bandwidths to support high-speed and high-accuracy industrial inspections place greater pressure on the interface required for linking cameras to the control computer. Over the last decade, two interface standards—CameraLink and GigE—have effectively met those demands. The same can be said for the IEEE 1394B standard, although it requires complex cabling that only serves distances of a few meters. Now, several emerging interface standards like 10 GigE, CameraLink HS, and USB 3.0 are poised to take over (see the table).
The GigE Vision standard, championed by Pleora Technologies, has demonstrated that it meets today’s real-time switched-video networking demands and supports throughputs of up to 10 Gbits/s (Fig. 5). GigE Vision allows Ethernet-based vision products from different vendors to interoperate seamlessly, minimizing integration issues.
The standard simplifies leveraging of the Ethernet platform’s native performance attributes, such as networking flexibility, scalability, high throughput, long-distance reach, and full-duplex, dedicated connections. It also makes it easier to implement applications on affordable and widely available Ethernet network elements, such as switches, network interface chips/cards, and Cat-5/6 or fiber cabling. Moreover, its standardized environment will deliver new-generation, networked video applications based on switched Ethernet architectures.
In addition to the interface circuitry, most modern machine-vision digital cameras also build in the functions of the venerable frame grabber, a board that’s usually plugged into a slot on the host computer. Frame grabbers accept and process image signals from an analog or digital camera. They often contain an analog-to-digital converter (for analog cameras), buffer memory, and interface circuitry through which the host processor can control data acquisition and access.
The notion of a frame-grabber board has become blurred as more cameras go digital, incorporating acquisition and processing circuitry within the same housing that holds the image sensor. Adoption of higher-speed interface links like CameraLink and 10 GigE is eating into the frame-grabber market, according to many machine-vision experts, who question whether frame grabbers can survive in their present form and function.
Other machine-vision experts disagree, though, pointing out that there’s still demand for analog cameras. As long as that demand persists, frame grabbers will continue to serve a useful function. Moreover, frame-grabber products are adapting to the times by offering more specialized functions as cameras evolve into the digital domain, including buffering, control, processing, bandwidth management, and determinism. The general feeling among these experts is that many OEMs continue to use older frame grabbers because they work efficiently and are well understood.
It should be noted that an inherent limitation exists in some interface standards that don’t require frame grabbers, such as GigE, IEEE 1394, and USB 2.0. Although they’re network- or bus-compatible, allowing multiple machine-vision cameras to connect to the same interface, some deterministic mechanism must be in place so that the cameras can share the same bus and avoid data collisions.
This means artificially induced latency or indeterminism, which may not be acceptable for some applications. For that reason alone, frame grabbers will continue to coexist with advances in machine-vision cameras and interfaces. That said, newer frame grabbers are keeping pace with machine-vision demands.
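A quick way to see why that bus sharing needs a deterministic mechanism is to total up what the cameras demand of the link. This back-of-the-envelope sketch assumes roughly 90% of raw GigE bandwidth is usable after protocol overhead; all figures are illustrative, not taken from any standard.

```python
# Back-of-the-envelope check for sharing one GigE link among several
# cameras. The ~90% usable-payload figure is an assumption to account
# for protocol overhead, not a value from the GigE Vision standard.

GIGE_PAYLOAD_BPS = 1e9 * 0.9  # assumed usable bits per second

def link_utilization(cameras):
    """cameras: list of (width, height, bytes_per_pixel, fps) tuples.
    Returns the fraction of usable link bandwidth consumed."""
    bps = sum(w * h * bpp * 8 * fps for w, h, bpp, fps in cameras)
    return bps / GIGE_PAYLOAD_BPS

# Two hypothetical VGA monochrome cameras at 100 frames/s:
cams = [(640, 480, 1, 100)] * 2
u = link_utilization(cams)
print(f"{u:.0%} of usable GigE bandwidth")
```

Even at around half utilization, two cameras triggering simultaneously would burst well above the line rate for the duration of a frame, which is why triggers must be staggered or buffered, the induced latency the text describes.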
One example is the GrabLink line from Belgium’s Euresys (Fig. 6). Versions include GrabLink Base, GrabLink Dual Base, and GrabLink Full configurations to suit Base, Medium, and Full CameraLink interfaces. They offer onboard processing such as three lookup-table operators and a Bayer CFA decoder. Moreover, their extensive set of I/O lines is compatible with a wide range of sensors and encoders.
The GrabLink Full board is a superset of the other two versions. It supports 10-tap CCD cameras and features a four-lane PCI Express bus with 64-bit addressing. The board is aimed at high-end, high-speed, high-resolution area-scan and line-scan sensors for applications such as printing, web and flat-panel-display inspection, and 3D and manufacturing inspection.
Successes Mount In The Machine-Vision Market
A new report titled “Global Machine Vision and Vision-Guided Robotics Market 2010-2015,” published by MarketsandMarkets, forecasts a total global machine-vision system and component market of $15.3 billion by 2015, with a compound annual growth rate (CAGR) of 9.3% (see the figure). Camera as well as smart-camera components will account for more than a quarter of this upswing. According to the report, the Asia-Pacific region dominates the global machine-vision market with double-digit growth, followed by the North American region.
A study performed by the Automated Imaging Association (AIA), the world’s only global machine-vision trade group, concurs with this assessment. The study notes that machine-vision sales should increase by 2.6% to 4.6%, depending on the pace of economic recovery and the rate of change in industrial production. This follows a 29.2% decline in 2009, which was more a reflection of the recent “great recession” than of the industry itself.
Demands for more machine-vision applications will grow even louder thanks to the recent Food Safety Modernization Act, enacted into law this past January. It requires new levels of product traceability in the food-processing industry—for both domestic and overseas suppliers of food products—that will ultimately help improve public health.
Provisions of this act include giving the Food and Drug Administration (FDA) expanded access to records for food-production facilities, upon request. The goal is to improve tracking of food sources, should a public health issue arise from suspected foods. The FDA, in coordination with the produce industry, will create a new method to effectively track and trace fruits and vegetables to ensure that any contaminated produce is located in a safe and timely manner. Machine-vision software will play a key role in this process.