The modern-day car has evolved into a multifaceted computing platform. It delivers vital information on vehicle status, diagnostics, safety, environment, and other systems and integrates multiple consumer and convenience technologies such as GPS navigation, digital radio, DVD players, parking-assist and camera-based rear-vision systems, and Bluetooth connections to phones and MP3 devices.
The standard instrument panel or dashboard is itself a complex component. It presents a wide array of information to the driver in different modes and commonly functions as a communications gateway or bridge to other devices in the vehicle.
The functionality of today’s in-demand infotainment systems, and the need to integrate them with the rest of the vehicle electronics, makes it necessary to test these applications across the vehicle’s entire electrical and electronic system.
To address this need, dSPACE, a producer of tools for embedded electronic and mechatronic controls development, has integrated vision technology into its automated testing product line to aid OEMs and suppliers in testing the visual elements associated with instrument panels, infotainment devices, and similar applications. The new technology combines hardware-in-the-loop (HIL) simulation with a high-resolution, camera-based system to provide seamless, real-time image processing capability (Figure 1).
The Need for Integration Testing
Today, instrument panels encompass a host of functionality, displaying information from various parts of the vehicle. Even functions perceived as simple, such as the digital odometer reading, travel a surprisingly complex path before being displayed correctly.
Consider a safety-oriented feature such as electronic stability control. If a failure occurs in any portion of this system, the system typically enters a diagnostic mode, and a warning light is illuminated to alert the driver. The driver then reacts by altering his or her driving technique, such as reducing speed on a wet surface, to compensate. This chain of events involves multiple electronic control units sharing data, taking actions, and informing the driver.
Most consumer electronic devices contain embedded microprocessors and complex software that perform a multitude of tasks. Many of these devices are real-time in nature and must conform to strict timing requirements. Moreover, these devices typically are self-reliant, meaning they function on their own resources and power.
When such consumer electronic devices are integrated into a vehicle, their functionality depends on communications and interaction with other technologies in the vehicle. For example, a consumer-oriented technology such as a GPS navigation system may be physically and functionally integrated with another device, such as the audio controls. The trend is toward converged devices in which the driver has a single interface to control functions like GPS, entertainment, climate control, and phone communications. This integration poses challenges in both development and testing.
To ensure that vehicle electronics perform correctly, both at the component level and in overall integration, OEMs and suppliers must perform in-depth testing that takes into account the intended operation, diagnostic modes, interfaces to sensors and actuators, communications networks, and power consumption and distribution. Considering the total amount of software in a vehicle, this testing faces the daunting task of covering up to 100 million lines of code in a high-end car.
HIL simulation and camera-based image processing have the resources to capture, recognize, and analyze visual information from instrument clusters and infotainment screens. This capability is needed to check and confirm the visual output on the displays and to do so in an automated, efficient, and timely way.
HIL Simulation Testing
HIL simulation testing is widely used in the automotive and aerospace industries for software development, testing, and validation of engine, transmission, chassis, and body-control applications. With a HIL simulator, a virtual, real-time test environment is established. This environment is scalable and can test everything from a single consumer electronic device or infotainment system to an electrical subsystem to a complete vehicle embedded electronic system.
The simulator consists mainly of real-time processing hardware interfaced to various I/O boards. Within the networked environment, it emulates the sensor and controller inputs of the various electronic devices, embedded software, and vehicle systems being tested. This allows test engineers to study the interaction and timing behavior of messages against the overall vehicle communications bus traffic.
The HIL simulation process begins with the creation of a system model that can execute in real time and produce simulation results similar to those of an actual system. The system model is dynamic in nature, providing the capability to change the simulation parameters at any time during the simulation process, and can be programmed to run automated tests 24/7.
Simulation calculations are intended to react to changes imposed by the electronic system under test. For example, the vehicle speed simulation should slow down or increase speed based on driver input to the gas pedal. The model also must execute in real time; that is, be able to react to inputs with the same time response as the actual system.
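To make this concrete, a vehicle-speed calculation can be reduced to integrating pedal-driven acceleration against drag at a fixed step while keeping pace with the wall clock. The following Python sketch is purely illustrative; the constants, model, and loop structure are assumptions, not dSPACE code:

```python
# Minimal sketch of a real-time plant model: vehicle speed reacting to
# pedal input. All names and constants are illustrative assumptions.
import time

STEP = 0.001          # 1-ms fixed simulation step, typical for HIL
MAX_ACCEL = 3.0       # m/s^2 at full throttle (assumed)
DRAG_COEFF = 0.05     # lumped drag/rolling-resistance term (assumed)

def step_vehicle_speed(speed, pedal):
    """Advance the speed state by one step; pedal is 0.0 to 1.0."""
    accel = pedal * MAX_ACCEL - DRAG_COEFF * speed
    return speed + accel * STEP

speed = 0.0
next_deadline = time.monotonic()
for _ in range(5000):                          # five simulated seconds
    speed = step_vehicle_speed(speed, pedal=0.8)
    next_deadline += STEP                      # fixed-rate loop: the model
    delay = next_deadline - time.monotonic()   # must keep pace with
    if delay > 0:                              # real time
        time.sleep(delay)
print(f"speed after 5 s at 80% pedal: {speed:.1f} m/s")
```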
As a means of validating test cases, the HIL simulation test system systematically examines potential failure conditions. It introduces faults into the system to verify system functionality as well as the diagnostic procedures implemented by the embedded electronics. Testing variables such as message timing, bus loads, and power loads can be played out in the simulation process to expose glitches and bugs, and to evaluate solutions, early in the development process.
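As a rough illustration of such a fault-injection step, the sketch below open-circuits a simulated wheel-speed sensor and checks that a diagnostic trouble code appears. The helper methods, channel name, and DTC value are hypothetical placeholders, not a real dSPACE API:

```python
# Hypothetical fault-injection step: open-circuit a simulated wheel-speed
# sensor and confirm the ECU sets the expected diagnostic trouble code.
# inject_fault(), wait(), read_dtc_list(), and clear_faults() stand in
# for whatever the simulator's test API actually provides.

def test_wheel_speed_open_circuit(sim):
    sim.clear_faults()
    sim.inject_fault(channel="wheel_speed_FL", mode="open_circuit")
    sim.wait(2.0)                       # give the ECU time to react
    dtcs = sim.read_dtc_list()
    assert "C0035" in dtcs, f"expected wheel-speed DTC, got {dtcs}"
    sim.inject_fault(channel="wheel_speed_FL", mode="none")  # remove fault
```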
Adding Camera-Based Image Processing
With the addition of camera-based image processing, even greater simulation testing capability is possible. By integrating a high-resolution camera in a simulator, an image-detection system is added to the test environment mix. This combination enables visual components such as heads-up displays (HUDs) and instrument-cluster panels to be tested together with overall vehicle electronics.
However, as the software and electronics associated with these devices increase in complexity, so does the amount of time required for software development, validation, and testing. Conventional methods of manual verification are inadequate for identifying potential problems in these systems.
In the absence of HIL simulation and camera-based image processing, the only way to test the visual elements of the instrument-panel cluster is through human visual verification. Test engineers have to physically look at visual feedback devices, such as the instrument panel, for extended periods of time to determine that gauge needles are moving correctly, telltale lights are coming on, and messages are displaying correctly. But the human eye is prone to mistakes during this tedious task and is not fast enough to catch brief glitches.
Behind the Technology
Testing visual feedback devices through HIL simulation and camera-based image processing is based on two key functions: capturing image information and processing this data in a timely manner. Efficiency and timing are at the heart of this performance.
To meet image-processing requirements, the testing system must incorporate relevant image-tracking techniques. Visual feedback devices such as HUDs and instrument cluster panels typically use gauge readings, LEDs, LCDs, color-coding, and other similar means to communicate information to the driver. In this case, high-resolution needle position detection, LED detection, color detection, character recognition, and pattern recognition are most commonly used for image tracking.
To satisfy the need for speed, the HIL simulator works on a real-time basis. The camera-based image-processing system also must operate in a time-deterministic manner for meaningful closed-loop performance. Accordingly, a typical scan-rate requirement is in the range of 20 to 50 frames per second.
The tools required to perform HIL simulation and camera-based image processing include the following hardware and software components:
• Real-time HIL simulator
• Image capturing and processing tools
• Test platform
• Visualization, experiment management, and modeling software
• Image processing software
• Test automation software
The camera and test subject are mounted on a rigid test platform. Rigidity is significant for image processing because any relative movement between camera and test subject can skew results. The camera is interfaced to the HIL simulator by a wiring harness that provides real-time digital data based on image-processing results.
The simulator software converts the digital data from the camera to engineering units and displays the results in a GUI. It compares the data sent to the test subject with the data received from the camera and presents the results in a meaningful format.
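In principle, this compare step reduces to scaling raw camera counts to engineering units and checking them against the commanded values. A rough Python sketch, with made-up scaling factors and tolerance:

```python
# Rough sketch of the compare step: scale raw camera data to engineering
# units and check it against what the simulator sent to the cluster.
# The scale, offset, and tolerance values are illustrative assumptions.

SPEED_SCALE = 0.5      # km/h per raw count (assumed calibration)
SPEED_OFFSET = 0.0

def to_engineering_units(raw_counts):
    return raw_counts * SPEED_SCALE + SPEED_OFFSET

def check_gauge(commanded_kmh, raw_counts, tolerance_kmh=2.0):
    measured = to_engineering_units(raw_counts)
    error = abs(measured - commanded_kmh)
    return {"commanded": commanded_kmh, "measured": measured,
            "pass": error <= tolerance_kmh}

print(check_gauge(commanded_kmh=100.0, raw_counts=204))
# {'commanded': 100.0, 'measured': 102.0, 'pass': True}
```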
Case Study
A case study focusing on a 2006 Cadillac STS instrument cluster provides more details on how HIL simulation and camera-based image processing are achieved. The following hardware and software tools were used:
• A dSPACE HIL simulator including I/O and processor and communications boards
• A Cognex In-Sight® 5603 high-resolution camera system
• Cognex In-Sight Explorer image-processing programming software
• A customized sheet metal test platform with a camera mount
• A dSPACE Real-Time Interface
• dSPACE ControlDesk® visualization/GUI software
• dSPACE AutomationDesk® test automation software
By integrating a high-resolution camera system with the HIL simulator, we were able to monitor the vehicle’s speed gauge, engine rpm, engine temperature, fuel level, and other telltales of the instrument cluster. Messages displayed to the driver on an LCD screen, such as odometer readings, tire pressure, faults, service information, and outside temperature, also were monitored.
Camera Requirements
To facilitate real-time testing, the camera was required to have a built-in image processor as opposed to PC-based image processing. Other key features of the camera included the following:
• 1,600 x 1,200 maximum resolution
• Ethernet programming capability
• Image processing programming capability
• One RS-232 communications port
• 10 discrete I/Os
• 64 MB of memory for storing images/job files
Camera Modes of Operation
To test the instrument cluster, the camera was programmed to operate in five image-tracking modes:
• Needle position detection
• Telltale light (on/off) detection
• Light intensity detection
• Pattern recognition
• Optical character recognition/verification
An optical fixture point had to be preprogrammed to provide accurate measurements for each of the camera modes. The fixture point acts as a reference coordinate system against which all other points of interest are measured. With a fixture point, small physical movements of the test subject or camera do not affect the measurements or calibration.
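Conceptually, the fixture point defines a local frame: every measurement is expressed relative to it, so a small shift or rotation of the whole scene cancels out. A minimal sketch of that idea in Python (not Cognex code):

```python
# Sketch of the fixture-point idea: express a point of interest in the
# fixture's local frame so small shifts/rotations of the scene cancel.
import math

def to_fixture_frame(point, fixture_origin, fixture_angle_rad):
    """Map an image-space point into the fixture's coordinate frame."""
    dx = point[0] - fixture_origin[0]
    dy = point[1] - fixture_origin[1]
    c, s = math.cos(-fixture_angle_rad), math.sin(-fixture_angle_rad)
    return (dx * c - dy * s, dx * s + dy * c)

# A needle hub measured relative to the fixture stays the same even if
# the whole cluster image shifts by (5, 3) pixels.
print(to_fixture_frame((105.0, 203.0), (5.0, 3.0), 0.0))  # (100.0, 200.0)
```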
To detect needle positions, an edge detection tool was used. It enables the camera to detect the edge of a needle by identifying the difference in light intensity between the needle and the background.
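Once the edge tool locates a point on the needle, the gauge reading follows from the angle of that point about the needle hub. An illustrative sketch; the hub position, sweep angles, and full-scale value are assumptions:

```python
# Illustrative conversion from a detected needle-edge point to a gauge
# reading. Hub position, angle range, and full scale are assumed values.
import math

HUB = (320.0, 240.0)                   # needle pivot in image coordinates
ANGLE_MIN, ANGLE_MAX = -135.0, 135.0   # sweep of the gauge (assumed)
READING_MAX = 260.0                    # full-scale km/h (assumed)

def needle_reading(edge_point):
    angle = math.degrees(math.atan2(edge_point[1] - HUB[1],
                                    edge_point[0] - HUB[0]))
    fraction = (angle - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
    return max(0.0, min(1.0, fraction)) * READING_MAX

print(f"{needle_reading((320.0, 100.0)):.0f} km/h")  # needle pointing up
```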
Histogram and blob tools were used to detect telltale lights. A blob is a connected region of pixels of similar light intensity. Depending on whether a specific telltale is on or off, the blob tool returns a score of 100 or 0.
The histogram tool computes the average pixel intensity in a particular region; if that average exceeds a specified threshold, the light is considered on.
Light-intensity measurements detected the dimming level of the backlighting in the test subject. The camera outputs a value between 0 and 255 for each pixel based on its intensity; zero corresponds to black and 255 to white. This value is sent to the HIL simulator, which converts the measured value to a scaled value expressing the test subject’s dimming level as a percentage.
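Both intensity-based checks, the histogram threshold and the dimming-level scaling, amount to simple arithmetic on pixel values. A minimal sketch, with the threshold and sample data assumed:

```python
# Sketch of the two intensity-based checks described above. The 8-bit
# pixel values and the threshold of 128 are illustrative assumptions.

def telltale_is_on(region_pixels, threshold=128):
    """Histogram-style check: average intensity above threshold => on."""
    avg = sum(region_pixels) / len(region_pixels)
    return avg > threshold

def dimming_percent(region_pixels):
    """Scale an average 0..255 intensity to a 0..100% backlight level."""
    avg = sum(region_pixels) / len(region_pixels)
    return avg / 255.0 * 100.0

lit = [210, 225, 198, 240]              # bright telltale region (example)
print(telltale_is_on(lit))              # True
print(f"{dimming_percent(lit):.0f}%")   # ~86% backlight level
```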
To detect patterns used in graphical indicators, such as a fill bar for miles per gallon or the PRNDL display, the PatMax® pattern recognition tool was used. An image of the pattern was preprogrammed into the camera. Whenever the pattern appears within the camera’s search area, the camera returns a score of 100. The pattern recognition tool also was used to recognize fixed text messages such as Service Air Bag or Check Brake Fluid.
Optical character recognition (OCR) and optical character verification (OCV) tools were used to recognize random text messages such as odometer readings and temperature appearing on the LCD screen. The camera had to be preprogrammed to detect the various fonts and sizes used on the LCD screen.
OCR is more computationally intensive than OCV. The camera first counts the number of words and the number of letters in each word and then runs the OCR algorithm itself.
OCV is similar to pattern recognition: it verifies whether a particular expected message appears in the search area. OCV is more computationally intensive than pattern recognition but less so than OCR; pattern recognition is the fastest of the three.
Once the camera was preprogrammed for image tracking and mounted to the test platform and simulator, the actual testing process began. Using the real-time clock signals of the simulator and a DSP integrated in the camera, the image-tracking algorithms were executed. The data they gathered was then processed by the DSP to interpret needle positions, the status of telltales, and messages on the LCD.
Signals representing the data required for testing were produced and sent to the simulator via RS-232 serial communications and digital outputs. The signals from the camera system were decoded by the simulator using Simulink® S-functions that convert and scale the information to engineering units.
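The decoding done by the S-functions can be pictured as parsing a fixed-layout serial frame and scaling each field. A Python approximation of that logic; the frame layout, bit assignments, and scale factors are invented for illustration:

```python
# Python approximation of the S-function decode step: unpack a serial
# frame from the camera and scale its fields to engineering units. The
# frame layout and scale factors are invented for illustration.
import struct

# Assumed 8-byte frame: uint16 needle counts, uint8 telltale bits,
# uint8 backlight intensity, uint32 frame counter (big-endian).
FRAME_FORMAT = ">HBBI"

def decode_camera_frame(frame_bytes):
    needle_raw, telltales, backlight, counter = struct.unpack(
        FRAME_FORMAT, frame_bytes)
    return {
        "speed_kmh": needle_raw * 0.1,          # 0.1 km/h per count
        "abs_lamp_on": bool(telltales & 0x01),  # bit 0 = ABS telltale
        "backlight_pct": backlight / 255.0 * 100.0,
        "frame": counter,
    }

print(decode_camera_frame(struct.pack(">HBBI", 1000, 0x01, 128, 42)))
```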
The I/O functions, standard signal processing, and proper signal conditioning for sensor and actuator interfaces were provided by the HIL simulator. It also facilitated the use of simulated real loads to prevent any diagnostic errors from occurring during testing.
Using ControlDesk visualization software, a realistic-looking dashboard of the instrument panel was recreated to display data gathered from the test sequences. The virtual display included animated needles, switches, static text, and event handling.
With AutomationDesk, a test automation tool, we programmed a series of ongoing test sequences to validate various functions of the Cadillac STS instrument cluster. AutomationDesk also generated the test results.
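A typical automated sequence pairs each commanded value with the camera’s measurement and records a pass/fail verdict. The following Python sketch mirrors the structure of such a test; the helper calls are placeholders, not the actual AutomationDesk API:

```python
# Structure of an automated gauge-sweep test. set_simulated_speed() and
# read_needle_kmh() are placeholders, not the actual AutomationDesk API.

def speedometer_sweep_test(sim, camera, tolerance_kmh=2.0):
    results = []
    for target in range(0, 261, 20):       # sweep 0..260 km/h in steps
        sim.set_simulated_speed(target)
        sim.wait(1.0)                      # let the needle settle
        measured = camera.read_needle_kmh()
        results.append({
            "target": target,
            "measured": measured,
            "pass": abs(measured - target) <= tolerance_kmh,
        })
    return results                         # feeds the generated report
```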
Technical Challenges
There are some technical challenges associated with using camera-based image processing. Camera-captured images can suffer from low resolution, blur, and perspective distortion as well as complex layout and interaction of the content and background. Also, the computational load placed on the camera can limit the frame rate at which the image is processed.
One solution is to use distributed computing by deploying multiple cameras for testing. For example, one camera can be used to perform OCR/OCV, and another camera can be used for needle position and telltale detection.
Overall, the integration of HIL simulation and camera-based image processing has proven successful when performing real-time testing of consumer electronic devices. Outside of the automotive sector, the combination of real-time simulation and image processing can be applied to any electromechanical system involving human-machine interfaces.
About the Authors
Mahendra Muli is manager of HIL engineering at dSPACE. He joined the company in 2000 as an applications engineer and has since worked in the area of embedded controls development and automated testing. Mr. Muli is a graduate of Wright State University where he earned an electrical engineering degree and conducted extensive research in the area of robust control theory. 248-295-4660, e-mail: [email protected]
Shreyas C. Nagaraj is a technical support engineer at dSPACE, specializing in the areas of hardware-in-the-loop simulation, image processing, and control systems. Prior to joining dSPACE, he was a control systems engineer at Stamford Polymer Research Labs. Mr. Nagaraj received an M.S. in mechanical engineering from Michigan State University and a B.S. in mechanical engineering from M.S. Ramaiah Institute of Technology, Bangalore, India. 248-295-4663, e-mail: [email protected]
Alicia Alvin is the marketing manager at dSPACE. Ms. Alvin graduated from Central Michigan University with a bachelor’s degree in journalism and graphic design. She has worked in the automotive, engineering, and quality/environmental management industries for more than 20 years. 248-295-4704, e-mail: [email protected]
dSPACE, 50131 Pontiac Trail, Wixom, MI 48393
March 2008