Technology Insights: Machine vision thrives outside the box

Oct. 26, 2018

Machine vision, the technology that provides imaging-based automatic inspection and analysis, is easing human beings out of inspection, process control, and much more. The technology is commonly deployed for identification, detection, measurement, and inspection, as well as for process control and guidance of autonomous robots. For those who think “outside the box,” the possibilities for machine vision are endless.

Global market for machine vision technologies expected to reach $24.8 billion by 2023

Due to heightened interest from both industrial and non-industrial segments, the global market for machine vision technologies is predicted to grow from $16 billion in 2018 to $24.8 billion by 2023, according to a new report from BCC Research. That would indicate a compound annual growth rate of 9.2%, according to the report Global Markets for Machine Vision Technologies.
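The report's growth figure can be sanity-checked: a compound annual growth rate is the constant yearly rate that carries the 2018 base to the 2023 forecast. A minimal sketch, using only the figures quoted above (the function name is my own):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: the constant yearly rate that
    grows start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Figures from the BCC Research report: $16 billion in 2018 to $24.8 billion in 2023
rate = cagr(16.0, 24.8, 5)
print(f"{rate:.1%}")  # prints 9.2%, matching the report
```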

According to the report, the current MV market is driven by factors such as the growing application of the Internet of Things in the industrial sector, the evolution of computing power in embedded single-board computer systems, gains in productivity and efficiency, better quality from machine vision systems, and a growing manufacturing sector. Demand for MV systems has increased across industrial applications, including semiconductors, electronics, pharmaceuticals, medical devices, packaging, automotive, printing/publishing, and consumer goods.

“Machine vision systems can perform complex repetitive tasks with higher accuracy and consistency than human workers,” the report notes. “Machine vision systems include components such as image sensors, processors, programmable logic controllers (PLC), frame grabbers, cameras and more, which are driven by a software package to execute user defined applications.”1

Farmers test machine vision to predict porcine pugnacity

Apparently, it’s not uncommon for pigs to bite each other’s tails. Scientists are using machine vision to interrupt that pattern.

Scientists in Scotland are using 3D cameras and machine-vision algorithms to automatically detect when pigs will become aggressive with other pigs. With the ability to anticipate porcine aggression, farmers could head off damage to their livestock by deploying distractions into the pigpen such as straw, knotted ropes, or shredded cardboard, which tap into the pigs’ instincts to root and chew.

Porkers have a tendency to bite one another's tails, which can cause up to 30 percent of a farmer's swine to develop infections severe enough to make their meat unfit for human consumption. The causes of the biting are varied (genetics, diet, overcrowding, temperature variations, lighting, disease, and more) and pretty much unpredictable to the average farmer.

In an effort to break the code, researchers monitored 667 pigs on a farm, using both time-of-flight and regular video cameras that recorded continuously for 52 days. Each time-of-flight camera emitted pulses of infrared light from LEDs 25 times a second, and recorded the amount of time needed to detect reflected pulses. This data allowed scientists to track each pig’s position and posture. Machine-vision algorithms from farm-technology company Innovent Technology, in Aberdeenshire, Scotland, then determined which activities might serve as possible early warning signs of tail biting.
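The time-of-flight principle described above turns the round-trip travel time of each reflected infrared pulse into a distance: light moves at a known speed, so the distance to the reflecting surface is half the round trip multiplied by c. A minimal sketch of that conversion (the helper name and example timing are illustrative, not from the study):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance to a reflecting surface, given the time for an
    emitted pulse to travel out and back: d = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A surface roughly 2 m from the camera returns a pulse in about 13.3 ns
print(tof_distance(13.3e-9))  # ~1.99 m
```

Repeating this measurement across the sensor 25 times a second yields the stream of depth frames from which the pigs' positions and postures were tracked.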

The scientists found that before outbreaks of biting, pigs increasingly held their tails down against their bodies. Moreover, the software could detect when these changes in tail posture occurred with 73.9 percent accuracy.2

Robot teaches itself to dress people

More than 1 million Americans are unable to dress themselves and require daily assistance with that task. Robots may be able to handle it routinely someday, but first they will have to master the challenge of manipulating cloth around the human body.

In a step in the right direction, a robot at the Georgia Institute of Technology is successfully sliding hospital gowns on people’s arms. The machine doesn’t use its eyes as it pulls the cloth. Instead, it relies on the forces it feels as it guides the garment onto a person’s hand, around the elbow and onto the shoulder.

The machine, a PR2, taught itself in one day, by analyzing nearly 11,000 simulated examples of a robot putting a gown onto a human arm. Some of those attempts were flawless. Others were spectacular failures—the simulated robot applied dangerous forces to the arm when the cloth would catch on the person’s hand or elbow.

From these examples, the PR2’s neural network learned to estimate the forces applied to the human. In a sense, the simulations allowed the robot to learn what it feels like to be the human receiving assistance.

“People learn new skills using trial and error. We gave the PR2 the same opportunity,” said Zackory Erickson, the lead Georgia Tech Ph.D. student on the research team. “Doing thousands of trials on a human would have been dangerous, let alone impossibly tedious. But in just one day, using simulations, the robot learned what a person may physically feel while getting dressed.”

After success in simulation, the PR2 attempted to dress people. Participants sat in front of the robot and watched as it held a gown and slid it onto their arms. Rather than vision, the robot used its sense of touch to perform the task based on what it learned about forces during the simulations.3 EE


1. “Global Market for Machine Vision Technologies to Reach $24.8 Billion by 2023,” Globe Newswire, September 4, 2018.
2. “Scottish Farmers Test Machine Vision to Manage Pig Pugnacity,” IEEE Spectrum, September 21, 2018.
3. “Robot teaches itself to dress people,” Science Daily, May 14, 2018.
