Sensor Fusion Brings System Inputs Together

May 11, 2010
Sensor fusion typically implies the processing of multiple sensor outputs to produce better data for decision-making. However, different levels and interpretations of sensor fusion can make the topic rather confusing.

Figures: Fig. 1, the Dasarathy model; Fig. 2, vehicle active safety; Fig. 3, sensor fusion evolution; Fig. 4, body-motion sensing; Fig. 5, the MVN motion capture suit; Fig. 6, MEMS sensor fusion.

Sensor fusion, or multi-sensor fusion, has its roots in military intelligence. Today, the technology is being employed in several non-military applications including robotics, automotive, medical, and even entertainment. The recent surge of interest stems from the need for more robust performance, increasing system complexity, and the ability of suppliers to deliver ever-greater sensor and processing capabilities. However, the many variations on the fusion theme may distract engineers whose next design could benefit from sensor fusion. When you go beyond the misconceptions, there are real differences that can improve systems.

CON-FUSION FACTORS

It seems that the term fusion is at least part of the problem. According to one of Merriam-Webster’s online dictionary definitions, fusion is the merging of diverse, distinct, or separate elements into a unified whole. How the merging occurs and what gets merged is still evolving.

In “Sensor Data Fusion for Context-Aware Computing Using Dempster-Shafer Theory,” Huadong Wu of the Robotics Institute at Carnegie Mellon University recognized that “since it [sensor fusion] is an interdisciplinary technology independently growing out of various applications research, its terminology has not reached a universal agreement yet. Generally speaking, the terms sensor fusion, sensor data fusion, multi-sensor data fusion, data fusion, and information fusion have been used in various publications without much discrimination.” The misuse of terminology goes beyond these terms.

One of the pioneers in the area of multi-sensor information fusion is Belur Dasarathy, the owner of Information Fusion Technologies Consultants, a consultancy with internationally recognized expertise in information fusion and related technologies. He also is the originator of the Information Fusion (including Multi-Sensor and Multi-Source Sensor Fusion & Data Fusion) group on LinkedIn. More importantly, he is the founding and current editor-in-chief of Information Fusion, published by Elsevier. Dasarathy was among the first to bring the phrase “information fusion” to the fore, and he titled the journal he founded in 2000 accordingly.

As one of the founding directors of the International Society of Information Fusion, he developed a formal definition that has since been quoted at several Web sites: “Information Fusion, in the context of its use by the (Information Fusion) Society, encompasses the theory, techniques and tools conceived and employed for exploiting the synergy in the information acquired from multiple sources (sensor, databases, information gathered by humans, etc.) such that the resulting decision or action is in some sense better (qualitatively or quantitatively, in terms of accuracy, robustness, etc.) than would be possible if any of these sources were used individually without such synergy exploitation.”

“It used to be called data fusion, sensor fusion, and more, and everybody was using their own definitions,” says Dasarathy. The term information fusion encompasses all the different fusion terms, including sensor fusion.

“Fusion can occur at different levels,” explains Dasarathy. “The inputs can be at one level and the outputs can be at another level.” To avoid ambiguity, he developed what has since come to be known as the Dasarathy model (Fig. 1).

“Fusion involves the input and the output,” says Dasarathy. “You have to characterize a process based on both the input and the output, rather than just one or the other.” In general, the fusion algorithms can range from very simple to very complex.
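
In the model, both the inputs and the outputs of a fusion process are classified as raw data, features, or decisions, yielding five input/output categories ranging from data in-data out to decision in-decision out. A minimal sketch of those categories in C (the enum and the examples in the comments are illustrative, not drawn from the model's literature):

```c
/* The five input/output categories of the Dasarathy model, using the
   customary data/feature/decision in-out abbreviations. */
typedef enum {
    DAI_DAO,  /* data in, data out: e.g., smoothing raw signals across sensors */
    DAI_FEO,  /* data in, feature out: e.g., extracting edges from raw pixels */
    FEI_FEO,  /* feature in, feature out: e.g., merging edges into a contour */
    FEI_DEO,  /* feature in, decision out: e.g., classifying a contour as "vehicle" */
    DEI_DEO   /* decision in, decision out: e.g., voting among classifiers */
} fusion_level_t;
```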

Information fusion is useful whenever the integration of the information is more than just the sum of the components. “You can oversell information fusion just like any other thing,” cautions Dasarathy. Garbage in still equals garbage out. As part of the information system development, developers have to make sure that they are indeed getting a benefit from the fusion process. “That involves a certain amount of experimentation in the context of the application,” he says.

“You have to show in a qualitative and quantitative way that the decision you are making is going to be better than what you would have done if you had used any one of the information sources by itself,” says Dasarathy. An approach that Dasarathy developed for determining the value of implementing the fusion process is called an elucidative fusion system.

“By doing fusion I improve the results in some fashion. But you need to know what was the contribution being made by each of the contributing sensors because there is a cost associated with doing this function,” says Dasarathy. For example, in a system with four or five sensors, the contribution of each sensor needs to be determined. If the contribution of one sensor is very small or even negative, it would be counterproductive for it to be part of the fusion processing.
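
Dasarathy's elucidative fusion system is more involved than this, but the underlying bookkeeping can be sketched as a leave-one-out loop: re-run the fusion with each sensor removed and score what its absence costs. Everything below is hypothetical scaffolding; evaluate_fusion() stands in for running the real pipeline on validation data.

```c
#define NUM_SENSORS 5

/* Stand-in for running the actual fusion pipeline on validation data
   with a subset of sensors enabled; returns an accuracy score. The toy
   body simply rewards each enabled sensor so the sketch compiles. */
static double evaluate_fusion(const int enabled[NUM_SENSORS])
{
    double score = 0.5;
    for (int i = 0; i < NUM_SENSORS; i++)
        if (enabled[i])
            score += 0.08;
    return score;
}

/* Estimate each sensor's contribution as the accuracy lost when it is
   removed. A near-zero or negative contribution suggests the sensor is
   not paying for its place in the fusion processing. */
void score_contributions(double contribution[NUM_SENSORS])
{
    int all_on[NUM_SENSORS] = {1, 1, 1, 1, 1};
    double baseline = evaluate_fusion(all_on);

    for (int s = 0; s < NUM_SENSORS; s++) {
        int enabled[NUM_SENSORS];
        for (int i = 0; i < NUM_SENSORS; i++)
            enabled[i] = (i != s);              /* drop sensor s */
        contribution[s] = baseline - evaluate_fusion(enabled);
    }
}
```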

NEW CAPABILITIES, NEW APPLICATIONS

Priyabrata Sinha, principal applications engineer in the High-Performance Microcontroller Division at Microchip Technology, acknowledges that sensor fusion is somewhat of a broad term that encompasses different sensors. “To me, it’s basically a scenario where you combine the data from multiple sensors and combine them in a concurrent sense,” he says.

Simultaneous processing, rather than sequentially polling the output of each sensor, is one of the key differences, according to Sinha. “The other key to what can be called sensor fusion is adding some value to it,” he says. “Not just taking the raw data that you are getting from sensors but to combine them to perform some additional calculations—basically analyzed and interpreted in different ways.”

Rather than customers coming to Sinha saying they want to implement sensor fusion, more commonly they bring a problem and the solution turns out to be sensor fusion. “Each sensor signal has its own different signal characteristics and also different kinds of noise affecting them, so the types of maybe digital filters you apply to each sensor signal would be different,” says Sinha. “Since you have to do all this, in general, concurrently, that requires quite a bit of processing power.” When customers describe what they are trying to accomplish, the answer falls under a broad umbrella of sensor fusion applications.

Digital signal controllers (DSCs), which combine microcontroller and digital-signal-processing capabilities, are a natural fit for sensor fusion. A 16-bit DSC such as Microchip’s dsPIC33F is well-suited for low-cost, space-constrained sensor applications. The DSP portion can implement finite impulse response (FIR) and infinite impulse response (IIR) filtering, fast Fourier transforms (FFTs), linearization, and other calculations commonly found in sensor fusion applications. Kalman filters used in sensor fusion are typically implemented in software. The number of additional modules integrated into these processors also simplifies the application of advanced sensor fusion. For example, a DSC’s quadrature encoder interface can handle linear encoder pulses in a multi-sensor system.
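
As a concrete illustration of the per-channel filtering Sinha describes, here is a plain-C FIR filter with one instance per sensor channel, so each channel can carry taps tuned to its own noise characteristics. This is a generic sketch; a production dsPIC design would typically lean on vendor-optimized DSP routines instead.

```c
#include <stddef.h>

/* One FIR filter instance per sensor channel; each channel gets its own
   coefficient set designed for that sensor's noise profile. */
typedef struct {
    const float *coeffs;  /* filter taps */
    float *history;       /* circular buffer of the last 'taps' samples */
    size_t taps;
    size_t index;         /* next write position in the history buffer */
} fir_t;

float fir_step(fir_t *f, float input)
{
    f->history[f->index] = input;
    f->index = (f->index + 1) % f->taps;

    /* Convolve the taps with the samples, newest sample first. */
    float acc = 0.0f;
    size_t pos = f->index;
    for (size_t k = 0; k < f->taps; k++) {
        pos = (pos == 0) ? f->taps - 1 : pos - 1;   /* walk backward in time */
        acc += f->coeffs[k] * f->history[pos];
    }
    return acc;
}
```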

Paul Zoratti, automotive systems architect for Driver Assistance Platforms at Xilinx, has been working on sensor fusion for many years—even before he worked for Xilinx. “We have been applying FPGAs to your traditional ranging sensors, like radar and lidar, but we also have been doing a lot on image processing for driver assistance,” he says.

Zoratti identifies three different levels of sensor fusion. He calls the first one “sensor stitching.” At this level, typically the sensors are all the same type. “Think about a camera or a set of cameras around a car that all have different fields of view,” he says. All of the cameras are looking at a different area around the car. “You could take those sensors and stitch all of those seams together and make one continuous view around the vehicle,” he explains. Nissan takes this approach in its Around View Monitor system. Other carmakers are developing similar systems.
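
The geometric core of such stitching can be reduced to deciding which camera covers each cell of the composite top-down view. The sketch below assumes each camera's image has already been warped onto a common ground plane upstream; real systems also blend across the overlapping seams rather than switching cameras hard.

```c
#include <math.h>

/* Split the top-down composite along its diagonals into front, rear,
   left, and right wedges, one per camera. */
enum { FRONT, REAR, LEFT, RIGHT };

int covering_camera(int x, int y, int img_w, int img_h)
{
    double u = (double)x / img_w - 0.5;   /* -0.5..0.5, positive = right of car */
    double v = (double)y / img_h - 0.5;   /* -0.5..0.5, positive = behind car   */

    if (v < -fabs(u)) return FRONT;       /* above both diagonals */
    if (v >  fabs(u)) return REAR;        /* below both diagonals */
    return (u < 0.0) ? LEFT : RIGHT;
}
```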

Zoratti calls the next level information fusion. “You can take multiple sensors, typically different types, and you can fuse, in other words bring together the information from both of them, not really leveraging one sensor’s set of information to influence the processing on another, but you can combine that information. An example of that is ultrasonic park assist systems,” says Zoratti.

In a vehicle with ultrasonic sensors and a rear-view camera, the sensor information from the ultrasonic sensors, the range information, can be overlaid on the output of the camera sensor. “What you have now done is take two sensing domains. Each supplies different information: the camera supplying visual scene information and the ultrasonics supplying range information,” says Zoratti. “You’ve fused that information together and given it to the driver for them to make sense of it.”

Multi-sensor fusion is the top level. “Now we have different sensing modalities,” says Zoratti. “We have a camera and a radar, and they have overlapping fields of view.” Both sensors observe the same scene. The radar determines range and range rate. The camera develops a scene, detects edges in that scene, and identifies and classifies objects. The radar information can be used intelligently to extract better information out of the camera sensor, allowing image processing to be focused on specific tasks.
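
One common form of that radar-to-camera cueing is projecting a radar target into a camera region of interest (ROI) so the expensive classification step runs only where the radar already sees something. The sketch below uses a bare pinhole camera model; the shared sensor origin, the focal length, and the expected object width are all simplifying assumptions.

```c
#include <math.h>

typedef struct { double range_m, azimuth_rad; } radar_target_t;
typedef struct { int x, y, w, h; } roi_t;        /* pixel rectangle */

/* Project a radar target's range/azimuth into an image window for the
   vision classifier to examine. */
roi_t radar_to_roi(radar_target_t t, double focal_px,
                   int img_w, int img_h, double expected_obj_width_m)
{
    /* Target position in camera coordinates (sensors assumed co-located). */
    double lateral = t.range_m * sin(t.azimuth_rad);
    double forward = t.range_m * cos(t.azimuth_rad);

    /* Pinhole projection: apparent size shrinks with distance. */
    double px_per_m = focal_px / forward;

    roi_t roi;
    roi.w = (int)(expected_obj_width_m * px_per_m);
    roi.h = roi.w;                                  /* square search window */
    roi.x = img_w / 2 + (int)(lateral * px_per_m) - roi.w / 2;
    roi.y = img_h / 2 - roi.h / 2;                  /* centered on the horizon */
    return roi;
}
```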

With three different fusion levels, the more sophisticated approaches are getting a lot of attention. “As more and more driver assistance systems are being fielded, they are looking at higher and higher levels of performance,” says Zoratti. This higher performance and the robust design requirements of safety systems are driving the increased use of sensor fusion. “Everybody goes to fusion now because certain sensors are good at doing some things and not others, and then you’ve got other sensors that sort of fill in those holes,” says Zoratti.

To perform the higher level of processing for sensor fusion, Xilinx has both its high-end Virtex and its cost-effective Spartan series of FPGAs. “As we have gone down the process node curve, we have also taken a lot of the features that used to be only in the Virtex devices and brought those down into the Spartan devices,” says Zoratti. “It’s the Spartan-level devices that are cost-effective for automotive.”

Spartan devices qualified for automotive applications provide an enabling technology for sensor fusion. “When you talk about system enablers, it’s the amount of processing power, or you can think of it as MIPS per penny that you can get out of the silicon now that is also driving this interest in fusion because it is now becoming more and more feasible,” says Zoratti.

Sensor fusion of radar and camera sensing is one of the rapidly developing automotive applications. In his presentation at the PReVENT/ProFusion2 Fusion Forum Workshop in 2006, David Schwartz of Delphi provided a generic depiction of a fusion system for vehicle active safety (Fig. 2). In this architecture, enhancing radar with vision adds precision and confidence, reducing false alarms and sharpening position resolution, and it enables object classification such as vehicle versus non-vehicle.

Jim Grothe, marketing manager for Automotive Sensors at Freescale Semiconductor, has a couple of theories on sensor fusion. “The confusion in the marketplace is probably, to some extent, on purpose. You know how marketers like to confuse things,” he quips. “At the end of the day, I think there is a spectrum of definitions.”

Grothe explains how the transition from a single-axis sensor to multiple axes, and then to multiple types of sensors co-packaged, provides a type of sensor fusion. This fusing of the sensors themselves at the packaging level provides a parallel path for sensor fusion (Fig. 3). This direct sensor fusion is most commonly associated with microelectromechanical-systems (MEMS) sensors made using semiconductor processing techniques.

The next layer is just beginning to appear in automotive systems today. Merging or fusing multiple systems into one means reusing the same sensed information from one application for another. While this seems similar to multiplexing, Grothe says that it really isn’t. The merging of electronic stability control (ESC) and airbags explains why.

“You could have two low-g accelerometer axes and two angular sensors for the ESC system, and then you have another two or three axes of acceleration for the airbag. That’s seven degrees of freedom that you would require,” says Grothe. “That’s if they were separate systems. By fusing them, you could go back down to three.”

Some sensor parameters change as the result of fusion. For merging the ESC with an airbag system, the transducer has to be sensitive to a wider range of operation—the low-g range for the ESC system as well as the medium-g range required for the airbag system. “The design of our mass and the spring constants and the processing circuitry of the signal from the transducer itself have to be tuned appropriately across a wider bandwidth as well as wider dynamic range from the sensor,” explains Grothe.
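
A minimal sketch of that sharing, with one wide-range accelerometer axis feeding both systems: the same raw sample is conditioned differently for the ESC path (low-g, heavily smoothed) and the airbag path (medium-g, nearly raw). The ranges and filter constants below are illustrative, not taken from any specific part.

```c
typedef struct {
    float esc_state;   /* IIR low-pass state for the ESC path */
} shared_accel_t;

/* One raw sample from a shared wide-range accelerometer axis fans out
   to two differently conditioned outputs. */
void process_sample(shared_accel_t *s, float raw_g,
                    float *esc_out_g, float *airbag_out_g)
{
    /* ESC wants a clean low-g signal: heavy smoothing, +/-2 g clamp. */
    s->esc_state += 0.05f * (raw_g - s->esc_state);   /* first-order IIR */
    float esc = s->esc_state;
    if (esc >  2.0f) esc =  2.0f;
    if (esc < -2.0f) esc = -2.0f;
    *esc_out_g = esc;

    /* Airbag deployment logic needs the fast, wide-range signal almost
       untouched; only clamp at the transducer's +/-50 g limit. */
    float bag = raw_g;
    if (bag >  50.0f) bag =  50.0f;
    if (bag < -50.0f) bag = -50.0f;
    *airbag_out_g = bag;
}
```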

Claire Jackoski, marketing manager of consumer and industrial sensors at Freescale, notes that sensor fusion in consumer applications has both similarities to and differences from automotive. Today, cell phones are one of the largest applications for inertial sensors. “We are seeing that the demand for multiple sensors doing multiple activities is kind of the path that they are taking,” says Jackoski. The sensors could be magnetic sensors plus accelerometers for six degrees of freedom, or gyroscopes plus accelerometers, or pressure sensors plus an accelerometer. “The two sensors together can give you a new solution,” she says.

Communication among sensors helps to make the decisions. “The fusion comes from not only the potential of a packaging exercise but a communications exercise that the algorithms start talking to one another to give heading and speed rather than simply providing the usual X, Y, Z output or a north or magnetic field direction,” Jackoski says. The progression from simple sensing to this multi-sensor environment has taken sensors to the direct level of sensor fusion shown in Figure 3.
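
The heading computation Jackoski mentions is a classic example of two sensors' algorithms talking to each other: the accelerometer supplies roll and pitch, which are used to rotate the magnetometer reading back into the horizontal plane before the compass heading is computed. The sketch below uses the common aerospace axis convention (x forward, z down); real parts differ, and calibration is omitted.

```c
#include <math.h>

/* Tilt-compensated compass heading from accelerometer (ax, ay, az)
   and magnetometer (mx, my, mz) readings, in consistent units. */
double tilt_compensated_heading(double ax, double ay, double az,
                                double mx, double my, double mz)
{
    /* Tilt from the gravity vector. */
    double roll  = atan2(ay, az);
    double pitch = atan2(-ax, ay * sin(roll) + az * cos(roll));

    /* De-rotate the magnetic field into the horizontal plane. */
    double bx = mx * cos(pitch)
              + my * sin(pitch) * sin(roll)
              + mz * sin(pitch) * cos(roll);
    double by = my * cos(roll) - mz * sin(roll);

    /* Heading in degrees from magnetic north, -180..180. */
    return atan2(-by, bx) * 180.0 / M_PI;
}
```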

LET THE MORE COMPLEX GAMES BEGIN

With sensor fusion, games like the Nintendo Wii are providing more sophisticated interaction. In the initial systems, the control unit used accelerometer and optical inputs to decide how the game would respond to the player’s movements. Now games are becoming more complex.

“What they want to do now is capture a different motion. They capture angular acceleration motion with the gyros and linear motion with accelerometers,” says Jackoski. “Together that becomes six degrees of freedom of motion.” Games aren’t the only area where motion control is advancing thanks to sensor fusion. The technique in Figure 4 can be applied to gaming, medical, and other advanced motion applications.
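
The usual entry point for fusing those two motion sources is a complementary filter: integrate the gyro for fast response, and bleed in the accelerometer's gravity-derived angle to cancel the gyro's drift. A single-axis sketch (the blending factor alpha is a tuning choice, typically close to 1):

```c
/* Complementary filter for one tilt axis: trust the gyro over short
   timescales and the accelerometer over long ones. */
double fuse_tilt(double prev_angle_deg,
                 double gyro_rate_dps,    /* angular rate, degrees/s  */
                 double accel_angle_deg,  /* tilt from gravity vector */
                 double dt_s,             /* sample period, seconds   */
                 double alpha)            /* e.g. 0.98 */
{
    double gyro_angle = prev_angle_deg + gyro_rate_dps * dt_s;
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle_deg;
}
```

Full six-degree-of-freedom tracking extends the same idea to three axes, often with a Kalman filter in place of the fixed blend.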

Xsens is one of the companies taking advantage of advanced motion sensing. Its MVN inertial motion capture suit (Fig. 5) is a system for full-body human motion capture based on inertial sensors, biomechanical models, and sensor fusion algorithms. The technique was used in the recent movie Alice in Wonderland, avoiding the use of cameras to capture motion information and saving time and money.

Another company, 24eight, has developed a motion control suit for medical applications that employs MEMS sensor fusion. MEMS sensor fusion (Fig. 6) is also part of security and energy applications. In addition, the local processing of biometric data, including the ability to perform edge processing, is applicable to gaming and personal computing as well as medical systems.
