11 Myths About Face Recognition

May 4, 2020
Advances in AI and, specifically, face recognition have led to a number of embedded-system benefits—and some debates on its use. What’s a myth and what’s real?

The strides made in artificial intelligence over the past decade have made it possible to bring advanced features such as face recognition to embedded systems. Despite the advantages it can bring, the use of face recognition is sometimes seen as problematic and even controversial. What is the reality? NXP’s Rick Bye puts the myths of automated face recognition to rest.

1. Face recognition is expensive.

It’s easy to believe that to give computers the ability to recognize faces, the solution will need to employ high-end hardware. After all, the deep-learning pipelines that have been demonstrating breakthrough results in image classification since the early 2010s have harnessed the compute power of graphics processing units (GPUs), often using them in closely coupled clusters.

For developers building face recognition into embedded systems such as home-security and access-control products, there’s no need to go to these lengths. By designing pipelines for efficiency and focusing on the features most important for detecting faces and matching them to registered images, developers can get by with far less processing power than research-grade implementations require.

2. Face recognition is difficult.

One of the key difficulties in machine learning is matching the structure of the pipeline to the application so that it will converge to a useful result when trained. But there’s no need to build these structures from scratch for an application such as face recognition. Platforms built on proven machine-learning pipelines are available that deliver high performance very quickly, while still providing the degree of customizability needed for different target markets.

3. Face recognition requires high-performance processing.

Many people will see the high-performance hardware used in the cloud-computing environment for machine learning and naturally assume that it’s always a heavyweight process. However, these systems need to be able to adapt to many different applications and take advantage of open-source tools that offer support for the full gamut of deep-learning architectures.

The result is that, even for inferencing applications—when the network is used to analyze real-world data—the models have high degrees of data and computational redundancy. An embedded solution can reduce these overheads significantly, making it possible to run sophisticated face-recognition pipelines on a 32-bit MCU.
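One common way to squeeze out the data redundancy described above is post-training quantization, where 32-bit floating-point weights are stored as 8-bit integers. The sketch below is illustrative only (the figures and function names are assumptions, not from any NXP toolchain); real deployments would use a vendor SDK or framework tooling to do this automatically.

```python
# Sketch of post-training 8-bit quantization, one common way to shrink a
# model so it fits an MCU. All values here are illustrative.

def quantize(weights, num_bits=8):
    """Map float weights to signed integers sharing one scale factor."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the integer form."""
    return [q * scale for q in q_weights]

weights = [0.82, -0.41, 0.05, -1.27, 0.63]
q, scale = quantize(weights)
recovered = dequantize(q, scale)

# Storage drops from 32 to 8 bits per weight (4x smaller), and the
# round-trip error stays within half a quantization step.
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, recovered))
```

The same idea extends to activations and to pruning away near-zero weights, which is how a pipeline that needed a GPU for research can run as integer arithmetic on a 32-bit MCU.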

4. Face recognition isn’t secure.

A key application for face recognition in embedded systems is access control, so you want to be sure that the door can’t be unlocked, or the alarm system overridden, by holding up a selfie to the camera. This is why an integrated vision platform that incorporates machine-learning techniques is important. These can perform checks on the image that ensure viable data is fed to the machine-learning pipeline.

Flexibility also ensures that the pipeline can take account of more than just visible-light data. In this instance, pairing infrared sensors with conventional image sensors can ensure the system is able to tell fakes from reality.
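The anti-spoofing idea can be summarized as a gate: a face match only counts if secondary evidence says a live subject is present. The sketch below is a hypothetical illustration of that logic; the signal names and thresholds are invented for the example and don’t correspond to any real product’s API.

```python
# Hypothetical liveness gate: accept a face match only when infrared and
# depth evidence agree that a live subject, not a photo, is present.
# Thresholds and signal names are illustrative assumptions.

def is_live(ir_reflectance, depth_variation_mm):
    """A flat printout reflects IR differently from skin and has no depth."""
    skin_like = 0.35 <= ir_reflectance <= 0.75   # printed photos fall outside
    has_relief = depth_variation_mm > 5.0        # a real face isn't a flat plane
    return skin_like and has_relief

def grant_access(match_score, ir_reflectance, depth_variation_mm,
                 match_threshold=0.8):
    # A high match score alone is not enough; liveness must also pass.
    return (match_score >= match_threshold
            and is_live(ir_reflectance, depth_variation_mm))

# A selfie held up to the camera can match well yet fail the liveness gate.
assert grant_access(0.95, ir_reflectance=0.55, depth_variation_mm=22.0)
assert not grant_access(0.95, ir_reflectance=0.92, depth_variation_mm=0.4)
```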

5. Face recognition invades privacy.

Many of the applications familiar to the general public involve uploading raw data to cloud servers and processing it there. This is a concern for consumers who don’t want their activities in and around their home to be passed over the internet and possibly even exposed after a malicious attack on the servers. A platform such as NXP’s MCU-based EdgeReady solution for face recognition performs all of the image-processing and face-recognition functions locally. The data never has to leave the enclosure, ensuring end products can be designed to maximize user privacy.

6. Face recognition doesn’t work in the dark.

A security system or an electronic door with integrated face recognition will often have to work in lighting conditions that are far from ideal. And because the technology seems at first glance to rely heavily on visible light, night-time or blackout conditions could appear to be a problem.

However, it’s a factor that can easily be addressed by augmenting a visible-light image sensor with secondary devices that work in the infrared spectrum, or by using time-of-flight data, which builds up a 3D map of an object within range. In this way, darkness needn’t be a problem, and avoiding the need for artificial illumination improves both usability and power consumption.

7. Face recognition requires expertise in AI.

In general, AI is a very broad and complex area. In deep learning alone, new academic papers exploring different facets of the technology and novel pipeline structures appear on arXiv practically every day. But if you use a platform designed for face recognition, such as NXP’s MCU-based solution, which incorporates machine learning and a complete image-processing toolkit designed for the task, it’s easy to achieve high-quality results.

8. Face recognition requires lots of power.

Optimized AI and image processing make it possible to run face recognition on MCUs rather than the high-performance GPUs used on server platforms. This comes with additional benefits, including access to the rich set of power-saving modes supported by today’s MCUs.

An MCU-based solution doesn’t need to boot a heavyweight operating system such as Linux, which means it’s possible to shut down the main processor when it’s not needed. But it’s still possible to wake the processor and achieve full face-recognition capability in a tenth of a second if a motion sensor determines there’s enough activity in the field of view to need attention.
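A back-of-envelope estimate shows why this duty-cycled approach matters. The sketch below assumes illustrative current draws and event rates (they are not NXP specifications; only the tenth-of-a-second wake time comes from the text) and compares an always-on system with one that sleeps until a motion sensor fires.

```python
# Rough sketch of the power saving from motion-triggered wake-up.
# All current and timing figures are illustrative assumptions.

SLEEP_mA  = 0.05     # MCU in deep sleep, motion sensor still armed
ACTIVE_mA = 150.0    # MCU running the face-recognition pipeline
WAKE_S    = 0.1      # wake to recognition-ready (figure from the article)
SESSION_S = 2.0      # assumed active time per recognition event

def average_current_ma(events_per_hour):
    """Mean current over one hour for a duty-cycled system."""
    active_s = events_per_hour * (WAKE_S + SESSION_S)
    sleep_s = 3600 - active_s
    return (ACTIVE_mA * active_s + SLEEP_mA * sleep_s) / 3600

duty_cycled = average_current_ma(10)   # ten recognition events per hour
# Under these assumptions the duty-cycled system draws well under 1% of
# what an always-on system would.
assert duty_cycled < ACTIVE_mA / 100
```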

9. Training is cumbersome and a hassle for the end user.

Early implementations of face recognition in embedded systems, such as tablets and smartphones, required a series of different poses to train the neural network effectively on a new user’s face. Advances in techniques such as transfer learning make it possible for someone to simply present their face to the camera once; the system learns their features from that single capture and adds them to the approved-user database.
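The mechanics behind one-shot enrollment can be sketched simply: a single capture is reduced by the network to an embedding vector, stored, and later compared against new captures by a similarity measure. In the illustration below, the embeddings are made-up stand-ins for a real face encoder’s output, and the threshold is an assumed value.

```python
# Sketch of one-shot enrollment and matching via embedding similarity.
# The 3-element vectors stand in for a real face encoder's output.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

approved = {}  # name -> enrolled embedding

def enroll(name, embedding):
    """One capture is enough: just store its embedding."""
    approved[name] = embedding

def identify(embedding, threshold=0.9):
    """Return the closest enrolled user, or None if nobody is close enough."""
    best = max(approved,
               key=lambda n: cosine_similarity(approved[n], embedding),
               default=None)
    if best and cosine_similarity(approved[best], embedding) >= threshold:
        return best
    return None

enroll("alice", [0.9, 0.1, 0.2])
enroll("bob",   [0.1, 0.9, 0.3])
assert identify([0.88, 0.12, 0.21]) == "alice"  # same person, slight variation
assert identify([0.5, 0.5, 0.5]) is None        # unknown face rejected
```

Because enrollment is just storing a vector, adding a user is instant, and the comparison is cheap enough to run on an MCU.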

10. Face recognition applications are limited.

As with any technology, it can be hard to think of how face recognition can be used until innovative companies put it into action. Face recognition may seem limited to security and access-control applications because that’s how it’s so often used today.

However, smart appliances and power tools can make use of it for safety, such as deactivating features so kids can’t get hurt. And, increasingly, devices will be designed not just to recognize faces, but expressions as well. Devices that can read emotional signals like frustration, confusion, or delight can respond appropriately and improve the overall user experience. 

11. Face recognition requires a heavyweight OS.

Because so many of the research-level tools for deep learning are provided as open-source software toolkits that were written for Linux, it’s easy to believe that applications such as face recognition will need Linux. But embedded systems that support the core technology needn’t suffer the memory cost and long boot times of a Linux installation. MCU-based solutions can run with far more lightweight real-time operating systems that consume far less memory, start up quickly, and support advanced power optimizations.

Rick Bye is Senior Product Marketing Manager, IoT solutions, at NXP Semiconductors.
