Machine vision certainly is not new, but for first-time users, the wide range of unfamiliar products and technologies can be daunting. One way that vision system vendors have addressed the problems of component selection and integration has been to develop complete off-the-shelf vision solutions.
These take the form of compact vision systems as well as smart cameras with integrated image processing. Some manufacturers differentiate between vision systems and image sensors, the latter being a form of limited-performance smart camera with integral lighting and an emphasis on ease of use.
For many straightforward applications, pre-engineered vision solutions reduce cost, save time, and empower in-house system developers. Of course, a ready-to-deploy solution does not suit all situations. If your application requires multiple cameras, an especially high frame rate, or special lighting, you need a custom-designed system.
In the past, machine vision projects may have been avoided because of the risk involved in their development. You needed the services of an experienced system integrator who also understood your application in depth. That combination can be hard to find. Add to it budget and time-scale constraints, and machine vision easily became a nice-to-have rather than a key part of a manufacturing or inspection process.
Today's products are easier to use partly because several industry-wide standards have been adopted. The GigE Vision, IEEE 1394 FireWire®, and Camera Link® camera standards clearly specify signal connections, levels, and protocols as well as the physical connectors and cabling. Also contributing to ease of use are consistent software interfaces and better customer support.
Nevertheless, once you have determined a need for a custom vision system, several questions must be addressed. Given a budget and time scale, are you going to develop the system in-house or engage a specialist? If you undertake it yourself, is there a local supplier with experience you can draw upon?
Gaining the Knowledge
Reluctance to adopt new technologies is largely caused by uncertainty, and user education is a common method companies employ to address it. For example, semiconductor datasheets contain circuit diagrams to show the advantages of a new device. Similarly, catalogs may include a technical appendix relevant to the use or specification of a company's products.
The recently released Vision Elements: Machine Vision Handbook takes the technical appendix idea several steps further. In this third issue of the handbook, detailed technical material precedes product offerings in each of seven sections: illumination, optics, cameras, cabling, interface, software, and systems. Extensive use is made of full-color illustrations, many created or adapted specifically for the handbook. Although the product offerings are impressive, it's the depth and clarity of the technical material that distinguish the handbook from just another well-presented, 200-page glossy catalog.
Illumination
No matter how fine a camera's resolution or how fast the frame grabber, an image's quality depends on the light that was presented to the camera. Because illumination is fundamental to all machine vision applications, it is explored in detail in the handbook. Many apparently simple inspection tasks depend entirely on correct illumination to achieve the desired results. As shown in Figure 1, there are several types of lighting.
Figure 1. Angle of Incident Illumination
Courtesy of Firstsight Vision
Dark-Field vs. Bright-Field
The characteristics of the object and features being imaged determine the most appropriate illumination angle, color, intensity, structure, and duration. For example, dark-field illumination often is used to highlight surface defects, scratches, or engraving, especially on reflective objects. Lighting the object from the side causes most of the light to reflect at an oblique angle away from the camera. The image is formed by light reflecting into the camera from feature edges.
Dark-field illumination results in most of the light falling outside of a camera's field of view (FOV). In contrast, bright-field illumination directs much more of the light into the camera's FOV. As Figure 1 indicates, the distinction between the two techniques is a matter of degree. In fact, the same ring-lights, spotlights, and line lights that may be appropriate for bright-field illumination also can achieve dark-field illumination if they are suitably repositioned to provide a low angle of incidence.
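To make the dark-field idea concrete, here is a minimal Python/OpenCV sketch that isolates bright defect pixels against the dark background. The file name and threshold value are illustrative assumptions, not details from the handbook.

```python
# Minimal sketch: isolating surface defects in a dark-field image.
# Assumes a hypothetical file "darkfield.png"; the threshold is illustrative.
import cv2

img = cv2.imread("darkfield.png", cv2.IMREAD_GRAYSCALE)

# In dark-field illumination, most light misses the camera, so the
# background is dark; scratches and feature edges reflect light into
# the lens and appear bright.
_, defects = cv2.threshold(img, 60, 255, cv2.THRESH_BINARY)

# Count defect pixels as a crude severity score.
print("defect pixels:", cv2.countNonZero(defects))
```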
Diffuse vs. Collimated
The difference between the two images of a clear plastic pushpin at the bottom of Figure 1 is striking. Diffuse illumination provides a very uniform light intensity across the entire object, with light striking it from many angles. Light at a sufficiently wide range of angles strikes the back of the pushpin so that, despite the reflection and refraction caused by the cylindrical shape, the camera sees a transparent object with well-defined edges. High-intensity diffuse back lighting improves the contrast between the object's edges and clear portions.
Creating truly diffuse illumination is something of a technological quest, attracting many very experienced companies. For example, CCS, a Kyoto-based lighting specialist, has developed the conical LKR and flat LFR series of ring lighting.
As the company described the technology, "The LEDs are arranged on a flexible board in a straight line and then wrapped around the perimeter of a light diffusion plate. This introduces the light directly from the LEDs into the plate. In addition, a reflective film is applied to the surfaces of the plate to refract and scatter the light. The light then spreads evenly through the entire light diffusion plate and produces a very even light distribution."
In another CCS design targeted at applications requiring a very thin illumination source, an array of on-chip LEDs is used. Distributing the LEDs evenly on a large square base means that only a thin diffusion plate is needed to complete the product. Products in this LDL-TP/LDL Series based on LED arrays are intended for silhouette inspection.
If light from only one angle illuminates the backside of the pushpin, almost all of it will be refracted or reflected away from the camera, resulting in the completely dark image shown. Such collimated on-axis light typically is provided by a light source and collimating lens or parabolic mirror. A 90° beam splitter is needed if the source must be in line with the camera lens for front lighting.
Collimated illumination also is termed telecentric lighting. Collimated light is the common factor both in this type of lighting and in a telecentric lens. In contrast to an ordinary lens that accepts light from an angular field of view, a telecentric lens only accepts parallel light rays. This restriction requires that the first lens element be at least as large as the object being imaged and accounts for the bulk and high cost of a typical telecentric lens.
The image magnification or reduction provided by such a lens is independent of the object's distance from the lens. The handbook gives an application example of measuring the diameter of a vibrating wire: regardless of the plane of the vibration, the image size doesn't vary. Similarly, multiple objects within the FOV but at different distances from the lens can all be measured from the same image.
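That distance independence makes the pixel-to-size conversion a single fixed scale factor. A minimal sketch, assuming an illustrative sensor pixel pitch and telecentric magnification rather than values from the handbook:

```python
# Minimal sketch: converting a pixel measurement to a physical diameter
# under a telecentric lens, whose magnification is independent of distance.
PIXEL_PITCH_UM = 5.5      # illustrative sensor pixel size in micrometres
MAGNIFICATION = 0.5       # illustrative fixed magnification (image/object)

def wire_diameter_um(pixels_across: float) -> float:
    # Object size = image size / magnification; no distance term appears,
    # which is why a vibrating wire can be measured reliably.
    return pixels_across * PIXEL_PITCH_UM / MAGNIFICATION

print(wire_diameter_um(120.0))  # e.g., 120 px across -> 1320 um
```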
Circular vs. Linear
Camera lenses are circular, but many applications such as web inspection involve linear geometry. In fact, in this case, the camera sensor is linear as well. This type of application requires a thin, highly focused beam of light across the width of the web. One way to achieve this is to use an array of high-intensity LEDs. A long glass bar, rounded on the front surface and flat on the back, serves as a lens that focuses the LED illumination into a thin line.
Color, Filtering
A camera's CCD sensor wavelength sensitivity typically includes longer wavelengths that we cannot see (Figure 2). The human eye is most sensitive to green light at 555 nm, with the overall visible range extending from about 380 nm (violet) to 760 nm (deep red). Ultraviolet (UV) and especially infrared (IR) light beyond these wavelengths still may be well within a camera's sensitivity.
Figure 2. Comparison of Camera Sensor and Human Eye Spectral Responses
Courtesy of Firstsight Vision
IR illumination reduces the contrast among similar objects with different colors. For example, colored pencils may have identical shapes but an individual item can be almost any color. Because the color of an object is the color of the light it reflects, these objects only appear to be red, green, or blue because those colors are present in ordinary lighting. When the objects are illuminated by an IR light source, they all tend to reflect similar amounts of light, making each item appear a uniform grey.
Because common forms of indoor lighting contain little energy at IR wavelengths, there is little interference between ambient light and IR illumination. You still must choose a camera with good IR sensitivity as well as a suitable lens and IR filter. But with those elements, variations in ambient lighting can largely be eliminated.
The objects shown in Figure 1 as well as their features are all large in comparison to a semiconductor device's dimensions. Visible light can be used to view features down to about 0.5 µm; below that, the wavelength of visible light is too long. UV illumination extends the resolution of an imaging system to smaller dimensions. Not all cameras have sufficient UV sensitivity, and lenses must be specially chosen.
Illumination by a particular color of light can be achieved by applying an appropriate filter. In this case, the filter will pass the desired color light and absorb all other colors. The choice of color depends on the color of the object or feature of interest. Otherwise, limiting the imaging system to a single color has similar advantages to using IR illumination: The importance of ambient light variation is minimized.
There are two color systems, an additive one and a subtractive one, and this can lead to confusion. The subtractive system comprises cyan, magenta, and yellow pigments that are combined to create printed images. Each pigment absorbs, or subtracts, its complementary color of light, hence the term subtractive.
Machine vision applications commonly use the additive red, green, blue (RGB) designation. As shown in Figure 3, combinations of the primary colors of light taken two at a time produce the secondary colors of cyan, magenta, and yellow. You can think of white as the combination of red, green, and blue or of cyan, magenta, and yellow. Because yellow light already is the combination of red and green, mixing yellow with blue produces white light. This is true of any primary color mixed with the secondary color opposite it in the figure.
Figure 3. Additive Light Color System
Courtesy of Firstsight Vision
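The additive mixing rules are easy to verify numerically. A minimal Python sketch using numpy, assuming nothing beyond 8-bit RGB values:

```python
# Minimal sketch: additive (RGB) color mixing.
import numpy as np

red   = np.array([255, 0, 0])
green = np.array([0, 255, 0])
blue  = np.array([0, 0, 255])

# Two primaries at a time give a secondary color...
yellow = np.clip(red + green, 0, 255)   # [255, 255, 0]

# ...and a secondary plus the opposite primary gives white.
white = np.clip(yellow + blue, 0, 255)  # [255, 255, 255]
print(yellow, white)
```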
Both gel and interference filters are available, gel being by far the most common. Typically, a gel filter consists of a sheet of colored plastic in a holder; higher-quality absorption filters are made from optical glass colored throughout. In either case, the filter transmits its own color and absorbs the rest: a red filter passes red light and blocks all other colors.
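An ideal red filter can be simulated in software by discarding the other two channels. A minimal Python/OpenCV sketch; the file name is a hypothetical placeholder, and a real gel filter also attenuates light within its passband, which is ignored here:

```python
# Minimal sketch: simulating an ideal red filter on a color image.
import cv2

img = cv2.imread("scene.png")   # OpenCV stores channels as B, G, R
filtered = img.copy()
filtered[:, :, 0] = 0           # block blue
filtered[:, :, 1] = 0           # block green; only red transmits
cv2.imwrite("scene_red_filtered.png", filtered)
```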
An interference filter contains a tuned gap between two plates that acts as a bandpass filter. Only a certain range of wavelengths can pass through. Interference filters provide very accurate color control and are used in laser imaging and fluorescent inspection applications.
Minimizing glare from shiny surfaces also can be accomplished by filtering. In this case, a polarizing filter is fitted to the camera. When glare occurs, some of the incident light has been reflected from the object's surface and some absorbed. Light reflected from a shiny surface becomes partially polarized. A polarizing filter rotated at right angles to the plane of that polarization blocks the reflected light, significantly reducing glare.
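The blocking behavior follows Malus's law, under which the transmitted intensity falls with the square of the cosine of the angle between the polarized glare and the filter axis; this is standard optics rather than a formula from the handbook. A minimal sketch:

```python
# Minimal sketch: Malus's law for a polarizing filter.
# At 90 degrees to the polarized glare, transmission drops to zero.
import math

def transmitted(i0: float, angle_deg: float) -> float:
    return i0 * math.cos(math.radians(angle_deg)) ** 2

for angle in (0, 45, 90):
    print(angle, "deg:", round(transmitted(1.0, angle), 3))  # 1.0, 0.5, 0.0
```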
The illumination source itself can be polarized by passing the light through a polarizing filter. Polarized lighting often is used in microscopy in the study of materials such as crystals, and a corresponding polarizing filter for the camera also may be required.
On/Off, Bright, Structured
Many of the lighting products used today in machine vision applications are based on LEDs. Even, distributed illumination is relatively easy to provide from an LED array. Some ring lights that mount directly around a camera lens contain large and equal numbers of intermixed red, blue, and green LEDs to ensure a true white illumination.
By themselves, LEDs are a direct light source: the rays strike the object with whatever degree of focus, parallelism, or angular spread the LEDs themselves produce. Diffusing material positioned between the source and object makes the illumination more uniform and widens the range of light directions. A translucent diffuser can eliminate the bright spot or spots common in direct lighting based on LEDs with narrow viewing angles.
Strobing, driving an LED with brief high-current pulses synchronized to the camera exposure, produces intense flashes that freeze the motion of fast-moving objects. It also benefits the LED itself: operating temperature plays a significant role in LED lifetime and obviously will be lower if the LED is off part of the time. According to the handbook, "Increased temperature has an exponential effect in reducing LED lifetimes. Even running an LED at an ambient temperature of 40°C, the MTBF is halved compared to running at 25°C."
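Taking the handbook's 25°C-to-40°C halving at face value and assuming the exponential trend continues, the derating can be sketched as follows; the 15°C halving interval and base lifetime are assumptions for illustration, and real derating curves vary by device:

```python
# Minimal sketch: exponential LED lifetime derating with temperature.
# Assumes lifetime halves every 15 C above 25 C, extrapolated from the
# handbook's single 25 C -> 40 C data point.
def led_lifetime_hours(base_hours: float, temp_c: float) -> float:
    return base_hours * 2 ** (-(temp_c - 25.0) / 15.0)

for t in (25, 40, 55):
    print(t, "C:", round(led_lifetime_hours(50_000, t)), "h")
    # 50000 h at 25 C -> 25000 h at 40 C -> 12500 h at 55 C
```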
In addition to freezing motion, light sources are strobed for other reasons. In a related application, the motion of particles in a dynamic fluid system can be studied by capturing two successive images corresponding to two pulses of light. The change in the position of the particles relative to the time between pulses corresponds to particle velocity. In a very different application, a pulse of light excites a biofluorescence response that peaks a short time later. The camera exposure must be delayed precisely relative to the light pulse to capture the peak emission.
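The particle-velocity calculation reduces to displacement divided by the pulse interval. A minimal sketch, with the spatial calibration, pulse interval, and particle coordinates all illustrative assumptions:

```python
# Minimal sketch: particle velocity from a double-pulse exposure.
import math

UM_PER_PIXEL = 10.0        # assumed spatial calibration
PULSE_INTERVAL_S = 1e-3    # assumed time between the two light pulses

def velocity_um_per_s(p1, p2):
    # Displacement between the particle's positions in the two images,
    # converted to micrometres, divided by the pulse interval.
    displacement_um = math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * UM_PER_PIXEL
    return displacement_um / PULSE_INTERVAL_S

print(velocity_um_per_s((120, 80), (126, 88)))  # 10 px shift -> 1e5 um/s
```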
A lighting controller of some form is needed to achieve the specialized types of illumination described. A controller also can automatically accommodate changes in ambient light levels. Further, you can program a controller to drive the illumination source in the optimum way for each of several different types of objects being imaged. Using a controller in this way extends the range of applications for which a single camera/lighting setup may be suitable.
Sometimes, multiple cameras are used to image different views of the same object. A controller can sequence the illumination setup to correspond to each camera exposure. At a very basic level, feedback within a controller can ensure constant intensity.
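A controller used this way is essentially a table of per-camera lighting presets stepped through in sync with the exposures. The sketch below is purely hypothetical, standing in for whatever vendor-specific commands a real controller accepts:

```python
# Minimal sketch: sequencing lighting presets per camera exposure.
# Preset names, values, and the fire() body are hypothetical; a real
# controller would drive hardware channels from camera trigger signals.
PRESETS = {
    "cam_top":  {"channel": 1, "intensity": 80, "strobe_us": 200},
    "cam_side": {"channel": 2, "intensity": 55, "strobe_us": 500},
}

def fire(camera: str) -> None:
    p = PRESETS[camera]
    # Placeholder for the vendor-specific controller command.
    print(f"channel {p['channel']}: {p['intensity']}% for {p['strobe_us']} us")

for cam in ("cam_top", "cam_side"):   # one illumination setup per view
    fire(cam)
```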
Structured lighting is a recent addition to the machine vision practitioner's toolbox. A laser source produces an intense and sharply focused line or grid of lines. The illuminated object is viewed obliquely by the camera. As the handbook explains, "Structured lighting is used in many applications to obtain depth information and for 3-D inspection…. The distortions in the [projected] line can be translated into height variations" (Figure 4).
Figure 4. Laser-Generated Grid Used in Structured Lighting Application
Courtesy of Firstsight Vision
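The line-distortion-to-height translation is a triangulation calculation. A minimal sketch, assuming an illustrative calibration and projection angle rather than any values from the handbook:

```python
# Minimal sketch: height from laser-line displacement (triangulation).
# With the laser sheet at angle theta to the camera axis, a surface step
# of height h shifts the imaged line sideways by h * tan(theta).
import math

UM_PER_PIXEL = 20.0   # assumed lateral calibration at the object plane
THETA_DEG = 30.0      # assumed angle between laser sheet and camera axis

def height_um(line_shift_px: float) -> float:
    shift_um = line_shift_px * UM_PER_PIXEL
    return shift_um / math.tan(math.radians(THETA_DEG))

print(round(height_um(15.0)), "um")   # 15 px shift -> ~520 um step
```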
Lasers bring with them a classification system based on the potential to cause eye damage. For example, Class I describes "lasers with a low output power (approximately 0.39 mW or less) that do not harm the human body." In contrast, Class IV relates to much higher power industrial lasers with more than 500 mW of power that not only can cause permanent eye damage, but also may burn skin and clothing.
Technically, LEDs are not lasers, but LEDs also can be sufficiently bright to damage your eyes. Compact LED light sources now are available with a power output equivalent to a 100-W halogen source. The handbook states, "LED illumination falls under the category of laser product that is defined by IEC and Japanese Industrial Standards (JIS)." In practice, this means that a warning label is permanently attached to some LED lighting, stating the maximum power, wavelength, and class. LED safety classifications correspond broadly to laser classifications.
Summary
Rather than dealing with each handbook section superficially, this article has explored the illumination section in depth. By combining the detailed product descriptions with the technical information in each of the seven sections, you can gain a sound understanding of machine vision principles.
After reading the handbook, will you be a competent machine vision system designer? Probably not. However, you will understand the subject sufficiently that you can intelligently discuss your application requirements with an expert.
The handbook explains how the many elements making up a vision system work together. For example, you should be able to choose the type of lighting most appropriate for the object features you need to image. Very difficult applications still may require help, but knowing what questions to ask ensures that your project stays on track.
Acknowledgement
This article is based on technical material from Vision Elements: The Machine Vision Handbook, published by Firstsight Vision, The Old Barn, Grange Court, Tongham, Surrey, GU10 1DW, U.K., +44 1252 780000, www.firstsightvision.co.uk
October 2007