Applying Automated Optical Inspection

The goal is to develop an accurate, fast AOI system as flexible and easy to train as a human.

Automated optical inspection (AOI) uses lighting, cameras, and vision computers to make precise, repeatable, high-speed evaluations of a wide range of products. Human vision has limited accuracy and is slow but very flexible and easy to train. Mechanical gauging is accurate and precise but slow and cannot be used to evaluate changes in visual appearance.

A machine vision or AOI system can take millions of data points (pixels) in a fraction of a second. These data points are used for visual inspection and precision measurement. With modest effort and cost, an AOI system can resolve about 25 microns. With increasing effort and cost, measurement resolution can approach a micron.
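
As a rough illustration of where those numbers come from, the pixel resolution follows directly from the field of view and the sensor's pixel count. The values below are hypothetical, not from any particular system:

```python
# Back-of-the-envelope resolution estimate for an AOI setup.
# All numbers are illustrative examples, not specifications.

field_of_view_mm = 32.0   # width of the scene imaged by one camera
sensor_pixels = 1280      # horizontal pixel count of the sensor

mm_per_pixel = field_of_view_mm / sensor_pixels
print(f"One pixel covers {mm_per_pixel * 1000:.0f} microns on the part")
# -> 25 microns per pixel; subpixel algorithms and better optics can push
#    the effective measurement resolution toward a micron, at higher cost.
```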

Typical applications for AOI include the following:

• Gauging the diameters and concentricity of holes in automotive parts.
• Ensuring that lids and labels are properly applied to food and pharmaceutical products.
• Evaluating molded parts against three-dimensional (3-D) CAD data.
• Ensuring that all parts are present in a product assembly.
• Checking for cracks, flaws, contamination, scratches, and other defects.
• Optical character recognition (OCR).
• Grading agricultural products such as seed corn or fruit.

From these applications, you see that AOI systems are used for inspecting parts that have limited and known variations. For defect or flaw detection, the AOI system looks for differences from a perfect part. Agricultural inspections might check for variations in part color, perhaps to find ripe fruit. To successfully apply AOI, you need to set up the AOI system for specific types of parts and limit the visual appearance of those parts.

An AOI system also must be set up or trained to inspect visual features of the parts. For example, you must tell the AOI system what features to measure on an automotive part or teach it the color of ripe fruit for sorting agricultural products. We are working on making setup and training easier, but current technology is nowhere near a human's ability to understand and quickly learn what parts and features to inspect.

What Is in an AOI System?
Figure 1 is a schematic diagram of a typical AOI system. This particular system inspects automotive bearings for cracks and flaws, but the components and methods are similar for other AOI applications.

Figure 1. Schematic Diagram of an AOI System

In this example, bearings are released by a feed cogwheel and slide down an inclined track. The track has rails that limit the part's side-to-side movement. This kind of mechanical restraint is known as staging or fixturing. Staging positions the part in a known location and decreases variability in where the parts are and how they look. This reduces the computation required by the vision computer so that parts are quickly inspected.

As the bearing slides down the track, it interrupts a laser beam. A part-in-place (PiP) sensor detects this interruption and signals the vision computer that the part is in a known location. The vision computer then triggers the five cameras to simultaneously acquire images of the bearing. When a PiP sensor cannot be used to trigger image acquisitions, the vision computer must detect when a part is present by analyzing the images, and this slows down the system.
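
The trigger logic itself is simple. The sketch below shows the idea in Python; the sensor, trigger, and camera objects are hypothetical stand-ins for vendor-specific SDK calls, not a real API:

```python
# Hypothetical sketch of hardware-triggered acquisition. The pip_sensor,
# trigger, and camera objects stand in for vendor-specific SDK interfaces.

def inspect_on_trigger(pip_sensor, trigger, cameras, inspect):
    """Wait for the part-in-place signal, fire the cameras, evaluate the images."""
    while True:
        pip_sensor.wait_for_beam_break()   # laser interrupted: part at a known location
        trigger.fire()                     # hardware trigger exposes all cameras at once
        images = [cam.read_image() for cam in cameras]  # read out each camera's frame
        yield inspect(images)              # vision software decides pass or fail
```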

Lighting the part is critical for AOI. Obviously, the AOI system must be able to see the parts and features to do the inspection. Beyond this, lighting amplifies features of interest and suppresses visual features that are noise.

For example, many products reflect the light sources, causing bright highlights in the image. Highlights can obscure features in the image that we want to inspect. In this example, we use a large, diffused red LED light directly above the part. The cameras are set at an angle so they can see both the top and sides of the part without picking up a highlight. This allows fine cracks in the part, as well as chip-outs along the top edge, to be seen and detected.

Staging and lighting are critical for an AOI system because they reduce variability in part images and act as preprocessors to select image data for the vision computer. Without this preprocessing, the vision computer would be too slow or unable to do the inspection.

You may be able to use an AOI system that has built-in staging and lighting, but often these have to be designed for your AOI task. A variety of standard lights and mechanical components helps with this task.

The camera's lens forms an image of the bearing on the camera's sensor, typically a CCD or CMOS image array sensor. Inexpensive machine-vision-quality lenses are used in this inspection, but inspecting small parts or making high-precision, high-accuracy measurements requires more expensive lenses. Again, the optics may be included in the AOI system or chosen for your specific task.

The camera translates the pattern of light from the part into an electronic image. Cameras designed for AOI systems have square (1:1 aspect ratio) pixels to simplify measurements, progressive scanning rather than interlaced scanning, a fast shutter, and an asynchronous trigger for acquiring images.

The progressive scanning and fast shutter reduce blurring of the part's image due to movement of the part. The trigger is necessary to synchronize the image acquisition with the presence of the part.
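
A quick, illustrative calculation shows why the fast shutter matters (the speed and exposure values are made up):

```python
# Illustrative motion-blur estimate; part speed and exposure are example values.

part_speed_mm_per_s = 500.0   # part sliding down the track at 0.5 m/s
exposure_s = 0.0001           # 100-microsecond electronic shutter

blur_mm = part_speed_mm_per_s * exposure_s
print(f"Image smear during exposure: {blur_mm * 1000:.0f} microns")
# -> 50 microns of smear; with a 10-ms exposure the smear would be 5 mm,
#    enough to wash out the fine cracks we are trying to find.
```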

The brains of an AOI system are a vision computer and its software. This computer analyzes the images to extract measurements, counts, colors, and other visual features needed to do the inspection. The results of the inspection are used to reject defective parts.

In this example, a compressed air kicker is activated to remove defective bearings from the line. The vision computer also sends statistics and process data to a database.
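
The inspection logic itself depends on the product, but a common approach to crack and flaw detection is to compare each image against a reference image of a known-good part. The sketch below uses OpenCV and assumes the staging keeps the part aligned with the reference; the file names and thresholds are placeholders, not values from this system:

```python
# Simplified flaw detection by comparison with a known-good ("golden") image.
# Assumes staging keeps the part in a repeatable position; thresholds are placeholders.
import cv2

golden = cv2.imread("good_bearing.png", cv2.IMREAD_GRAYSCALE)
part = cv2.imread("bearing_under_test.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(part, golden)                              # pixelwise difference from the good part
_, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # keep only large deviations
defect_area = cv2.countNonZero(defects)                       # count deviating pixels

if defect_area > 200:                  # tolerance set while training the system
    print("Reject: activate kicker")   # e.g., signal the compressed-air kicker
else:
    print("Pass")
```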

Another Example: Grading Corn
The AOI task in Figure 2 is to find the ratio of bad (dark) corn kernels to the total of good (yellow and orange) and bad kernels. This ratio is used to grade seed corn lots; lots with fewer bad kernels sell for more.

Figure 2. Image of Corn Kernel Inspection

A typical AOI measurement task assumes rigid, well-defined parts and treats any variation beyond some tolerance as a defect. Here, the parts are not well defined in size and shape, so trying to use a caliper tool to measure kernel size would be useless and frustrating. Instead, we use the known colors of good and bad kernels to approximate the desired ratio.

The staging and lighting consist of the operator taking a scoop of corn from a lot and spreading the kernels on a light table so the kernels are not overlapping. Since this is done on a sample evaluation basis, automating the staging is not worth the cost and effort.

The ratio measurement must be consistent over time and across operators. The evaluation task must apply objective standards to classify bad and good corn kernels.

We teach the AOI system the color and color variation of known good and bad corn kernels and the color of the background. The ratio we want is approximated well enough by taking the ratio of pixels with bad colors to the pixels with bad or good colors, ignoring the background color. While not as exact as if we had counted each kernel, it is a lot faster and removes the operator's subjective judgment from the evaluation.
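
A minimal sketch of that pixel-ratio calculation is shown below, using OpenCV. The HSV color ranges are hypothetical; in practice they come from teaching the system with known good kernels, known bad kernels, and the light-table background:

```python
# Approximate the bad-kernel ratio by classifying pixel colors.
# The HSV ranges are illustrative; real values come from training on samples.
import cv2

img = cv2.imread("corn_scoop.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

good = cv2.inRange(hsv, (15, 80, 80), (35, 255, 255))   # yellow and orange kernels
bad = cv2.inRange(hsv, (0, 0, 0), (180, 255, 60))       # dark kernels
# The bright light-table background matches neither mask and is ignored.

good_px = cv2.countNonZero(good)
bad_px = cv2.countNonZero(bad)
ratio = bad_px / (bad_px + good_px)
print(f"Estimated bad-kernel ratio: {ratio:.1%}")
```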

Advances in Applying AOI
Many of the problems in applying AOI arise from the limited intelligence and flexibility of an AOI system. We can pick up a part, examine it with various views and lighting, do a lot of neural processing, and draw conclusions based on our knowledge about objects and the materials they are made from. An AOI system has to rely on staging to present the part and has a limited time to examine the part. It doesn't understand objects and materials and has very limited processing capabilities.

Improvements in lighting, computing capability, and vision software have made AOI systems smarter and more flexible, though still far from human visual intelligence.

Lighting preprocesses the image to amplify features you want to inspect and suppress noise. Advances in lighting have improved the capabilities of vision systems, in part by reducing the computation required by the vision computer.

The adoption of standard LED-based lighting has improved AOI systems because it is very stable and easily controlled when compared to the older incandescent and fluorescent lighting solutions. For example, we can strobe an LED light source to give a brief and intense flash of light that stops part motion. This is difficult to do with older lighting technology.

Another lighting method projects a pattern of light on an object, often by using a laser with a holographic lens. The distortions of this structured light pattern can be measured and processed to recover the object's 3-D structure, at least what we can see of it. AOI systems using structured light can, for example, compare complex objects such as engine blocks to the designed shape in CAD files.
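
The underlying geometry is triangulation: a surface that is higher or lower than the reference plane shifts the projected line sideways in the image, and the shift is proportional to the height. A toy version of the height recovery, with made-up calibration values, looks like this:

```python
# Toy laser-triangulation height recovery. The angle and image scale are
# illustrative calibration values, not from a real system.
import math

laser_angle_deg = 30.0   # laser sheet tilted 30 degrees from the camera's view axis
mm_per_pixel = 0.05      # image scale from calibration

def height_from_shift(shift_pixels):
    """Convert the sideways shift of the laser line into surface height."""
    shift_mm = shift_pixels * mm_per_pixel
    return shift_mm / math.tan(math.radians(laser_angle_deg))

print(f"A 40-pixel shift corresponds to {height_from_shift(40):.2f} mm of height")
```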

Another major boost to the intelligence of AOI systems comes from the rapid improvement in PCs. AOI tasks that previously required special computing hardware now are done with generic PCs along with hardware for image acquisition, communications, and synchronization. Demanding inspection tasks, such as inspecting LCD flat-panel screens, still require the horsepower of a dedicated vision processor.

The biggest advance in applying AOI is the improvement in the vision computer's software. In the not-so-good old days, you could expect to spend many months laboriously programming the vision computer for your task. The thrust of recent software development makes this task much easier by providing interfaces to hide the hardware details and incorporating the specialized knowledge needed to do AOI tasks.

The mantras of AOI vision computer vendors currently are ease of setup and ease of use. With a specialized AOI system, perhaps for 3-D measurement using structured light, the setup and operator interfaces can be very easy to use because the task domain is very limited and well specified.

If you need a custom AOI system, then you, an integrator, or a vision component vendor have to write the AOI software. Rapid application development (RAD) packages, such as ipd's Sherlock, make this relatively easy. These packages typically have an easy-to-use interface with features such as drag-and-drop selection of tools and operations and online help.

If you need extra computing power or find the RAD package limiting, there are many mature software libraries. Just be prepared for a long learning curve.

Many AOI tasks can be solved with a good set of general vision tools. These tools include visual search, measurement, defect detection, and bar-code and OCR reading.

Vision computer vendors have developed packages that bundle these tools inside a graphical user interface. No programming is required, and most of the specialized knowledge needed to solve an AOI task is incorporated in the software.

Summary
AOI has many applications but is limited to well-specified parts in well-controlled settings. It would be nice to have an AOI system as flexible and as easy to train as a human but with the speed, accuracy, and resolution of a computer vision system. Such systems are many years off, but that doesn't discourage us from continually improving existing AOI systems.

The three major efforts in putting together an AOI system are building the part staging, getting the right lighting, and programming the vision computer. Improvements and standardization of lighting and mechanical fixtures have made the first two tasks much easier. The improvement in computing power and vision software, particularly the focus on easy-to-setup and easy-to-use vision software, continues to make it simpler to program the vision computer.

Developing a fully custom vision system using traditional software libraries can take many months of work. Using a RAD package reduces the time to weeks.

For many common AOI applications, new programming-free software packages can cut development time to a few days. In all cases, get help with staging, lighting, optics, and the camera choice.

About the Author
Ben Dawson, Ph.D., is director of strategic development at DALSA Coreco, ipd Group. He earned M.S.E.E. and Ph.D. degrees from Stanford University and also was on the staff of M.I.T. Dr. Dawson has written more than 50 scientific and technical papers on human and machine vision. DALSA Coreco, ipd Group, 900 Middlesex Turnpike, Building 8, Second Floor, Billerica, MA 01821-3929, 978-670-2050, e-mail: [email protected]

FOR MORE INFORMATION
on automated optical inspection
www.rsleads.com/507ee-222
