
How 3D Tech Advances Will Impact Robotics Vision Systems in 2022

March 14, 2022
Demand for advanced imaging technology is affecting robotics in scores of industries. What innovations in 2022 will further sculpt this fast-moving space?

This article is part of the 2022 Electronic Design Forecast issue

What you’ll learn:

  • Vision technology is shifting from image capture to object recognition and tracking.
  • Robots with advanced 3D capability can differentiate objects and perceive the human form.
  • How autonomous robots are becoming practical for consumer and industrial uses.

“You see, but you do not observe,” remarked Sherlock Holmes to his loyal friend Dr. Watson in their adventure, “A Scandal in Bohemia.” Identifying and understanding what once was only seen has always been a prized goal—and something that’s quickly reaching reality in robotics vision.

Three-dimensional imaging is key to these advances. 3D has always had advantages over 2D for robotics due to its ability to capture and comprehend a much richer set of data. It not only easily recognizes more types of objects, but also enables robots to orient themselves in three-dimensional space.

The increased sophistication of onboard vision systems means robots are now accomplishing more tasks than ever before, without reprogramming. Today, robots are highly adept at pick-and-place tasks, retrieving specific items, distinguishing objects from their surroundings, and overcoming variations in task objectives.

According to market research firm Mordor Intelligence, the robotic vision market is expected to grow at a CAGR of 9.86% from 2021 to 2026. Innovation will be a key enabler of this growth, taking robotics vision systems to exceptional levels of utility. Here are just a few of the major advances arriving on the scene for 2022:

Deep-Learning 3D Reconstruction

The metaverse and various AR/VR/MR applications are at the cutting edge of robotics vision, and deep-learning 3D reconstruction is the prime enabler. One exciting non-robotic example of deep-learning 3D is Google’s Project Starline. The remarkable, experimental 3D chat booth allows a caller to see, and interact with, a real-time 3D construct of the person they’re calling.

Conventional forms of 3D reconstruction can’t recognize humans; everything is treated as a generic object. In deep-learning 3D reconstruction, an algorithm is “taught” to recognize the human form. It also fills in the holes and other gaps of missing data in the camera field, including the rough and/or missing object edges encountered in less-adept systems.
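To make the “missing data” problem concrete, here is a minimal sketch of the classical baseline that learned reconstruction improves upon: filling holes (zero-valued pixels) in a depth map with the nearest valid measurement. The function name and the toy depth values are illustrative, not from any specific product; a deep-learning system would instead infer plausible geometry from training data rather than copy a neighbor.

```python
import numpy as np
from scipy import ndimage


def fill_depth_holes(depth):
    """Fill zero-valued 'holes' in a depth map with the nearest valid depth.

    A classical, purely geometric baseline. Deep-learning reconstruction
    replaces this heuristic with a network trained to infer what the
    missing surface (e.g., a human form) likely looks like.
    """
    holes = depth == 0  # missing measurements
    if not holes.any():
        return depth.copy()
    # For every hole pixel, find the indices of the nearest valid pixel.
    _, (iy, ix) = ndimage.distance_transform_edt(holes, return_indices=True)
    return depth[iy, ix]


# Tiny synthetic depth map (meters) with one missing pixel in the middle.
d = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 2.0, 2.0]])
filled = fill_depth_holes(d)
```

Nearest-neighbor filling produces hard, often incorrect edges; that is exactly the “rough and/or missing object edges” artifact that learned methods smooth away.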

Project Starline may be a communications system, but the same approach to deep-learning 3D reconstruction can be applied to robotics, combining detailed, reliable vision with the ability to gather full-field information at depth.

While many robotic systems use 3D cameras to identify and avoid obstacles, deep learning of the human form will enable robots to both interact with and, where necessary, avoid people. Service robots, security systems, warehouse and factory bots, autonomous delivery units, and robotic hospital/healthcare aides are just some of the hundreds of applications for this highly advanced, yet practical and affordable vision breakthrough.

Time-of-Flight (ToF) Technology

ToF systems can capture the exact shape and position of moving objects, identifying their size, distance, and rate of movement even in complete darkness. 3D ToF vision systems are highly accurate and extremely valuable for industrial or environmental use.
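The distance measurement behind ToF reduces to simple arithmetic, which is part of why the sensors work without a GPU. A sketch of the two common schemes, with illustrative numbers: pulsed (direct) ToF halves the round-trip travel time of a light pulse, while continuous-wave (indirect) ToF converts a measured phase shift of modulated light into depth.

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s


def depth_from_pulse(round_trip_s):
    """Direct (pulsed) ToF: light travels to the target and back,
    so depth is half the round-trip distance."""
    return C * round_trip_s / 2.0


def depth_from_phase(phase_rad, mod_freq_hz):
    """Indirect (continuous-wave) ToF: depth from the phase shift between
    emitted and received modulated light. Unambiguous only up to a range
    of c / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)


# A 20-ns round trip corresponds to about 3 m of depth.
d_pulse = depth_from_pulse(20e-9)

# A pi/2 phase shift at 20-MHz modulation corresponds to about 1.87 m.
d_phase = depth_from_phase(math.pi / 2, 20e6)
```

Because the measurement depends on emitted light rather than ambient illumination, the same arithmetic holds in complete darkness.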

Already being deployed in simultaneous localization and mapping (SLAM), navigation, inspection, tracking, object identification, and obstacle avoidance applications, ToF is finding greater utility than ever before. As with deep-learning 3D reconstruction, ToF is pixel-by-pixel effective, giving it excellent edge perception, yet it doesn’t require a GPU or neural network.

The onboard computing capability of ToF systems allows robots to convert raw data into precise depth images in real time. ToF vision can be used for advanced human-machine interface (HMI), 3D scanning, surveillance, and gaming, as well as a wide range of robotics applications.

Embedded Imaging

Embedded vision solutions, while not entirely new to robotics, are being leveraged by designers in multiple new ways, thanks to their simplicity and compact form factors. Imaging systems with onboard processing capability eliminate complicated and error-prone external computer hookups. Their high-quality depth perception, combined with the ability to carry out associated computing tasks, is essential to autonomous robots in industrial and consumer applications.

The design and application flexibility of embedded vision systems make them ideal for robotic devices in warehouses, grocery stores, healthcare, security, factories, hospitality, and many other areas. New advances in 2022 will further extend the use of embedded imaging for more complex tasks, and in a wider array of environments.

Eliminating Privacy Concerns

Privacy has always been a concern with image recognition. However, 3D technology can alleviate privacy issues and ensure anonymity. Unlike 2D cameras that capture and record facial images, 3D systems “see” only three-dimensional point clouds that are recognized in the abstract. This point data is used exclusively for authentication and is therefore far less likely to raise privacy concerns.
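A point cloud in this sense is just a list of XYZ coordinates back-projected from a depth image. The sketch below, using the standard pinhole camera model, shows why such data is geometry-only: focal lengths (fx, fy) and principal point (cx, cy) are the usual intrinsic parameters, and the toy values here are illustrative. No color or texture channel exists in the output, so no facial image is ever stored.

```python
import numpy as np


def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into an (N, 3) XYZ point cloud using
    pinhole camera intrinsics. The output carries geometry only --
    no color, no texture -- which is what makes point-cloud-based
    authentication easier to anonymize than 2D imagery.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)


# 2x2 depth image at a uniform 1 m, with toy intrinsics.
d = np.full((2, 2), 1.0)
pts = depth_to_point_cloud(d, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
```

An authentication pipeline would match these coordinates against an enrolled shape template; the raw points alone reveal shape, not identity-bearing imagery.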

In many parts of the world, robots provide personal services, ensure security for financial transactions, and protect individuals and property without objection. In fact, employees, consumers, and guests more often express their appreciation for the speed, accuracy, and convenience provided by modern vision solutions.

More Uses Anticipated

Advanced robotics vision systems will be deployed in more places, and in more challenging ways, than ever in 2022 and beyond. It’s expected that within five years, 30% of 2D cameras will be enhanced with 3D capability, turning traditional RGB cameras into RGB (+Depth) cameras. Within 10 years, 3D vision systems are expected to cover 80% to 90% of 2D applications.

Robots are being developed for mobile security, as companions for the elderly and disabled, and as assistants in stockrooms, store aisles, operating rooms, and much more. Given such diverse and increasingly demanding uses, basic vision systems will no longer suffice. It doesn’t take a Sherlock Holmes to see that more intelligent powers of observation are needed. But, fortunately, vision technology is on the rise, helping to solve the mysteries of next-generation robotic design.


