
Teaming Up to Develop Smart Cameras for ADAS Apps

Renesas and StradVision crafted a solution that employs deep learning for object recognition—and minimizes power consumption.

To avoid hazards in urban areas, advanced driver-assistance system (ADAS) implementations require high-precision object recognition capable of detecting so-called vulnerable road users (VRUs), such as pedestrians and cyclists. At the same time, for mass-market mid-tier to entry-level vehicles, these systems must consume low power. A new deep-learning-based object-recognition solution for smart cameras from Renesas and StradVision is said to achieve both, and it’s designed to accelerate the widespread adoption of ADAS applications at Level 2 and above.

StradVision’s deep-learning-based object-recognition software, developed to recognize vehicles, pedestrians, and lane markings, has been optimized for two Renesas R-Car automotive system-on-chip (SoC) products: the R-Car V3H and the R-Car V3M. The R-Car V3H simultaneously recognizes vehicles, people, and driving lanes, processing image data at a rate of 25 frames/s. The R-Car V3M, an SoC featuring two 800-MHz Arm Cortex-A53 MPCore cores, primarily targets front-camera applications as well as surround-view systems and LiDAR.

StradVision’s object-recognition software, developed to recognize vehicles, pedestrians, and lane markings, has been optimized for Renesas R-Car products R-Car V3H and R-Car V3M.

Because front cameras are mounted near the windshield, designers must account for temperature rise from both direct sunlight and the heat generated by the components themselves. The requirements for low power consumption are therefore especially stringent.

These R-Car devices incorporate a dedicated engine for deep-learning processing called CNN-IP (Convolution Neural Network Intellectual Property). This enables them to run StradVision’s SVNet automotive deep-learning network at high speed with low power consumption.

CNNs are used in a variety of areas, including image and pattern recognition, speech recognition, natural-language processing, and video analysis. Application areas that rely on high-resolution imaging (1080p, 4K, and beyond) continue to grow, from smartphones and smartwatches to ADAS, virtual-reality gaming consoles, and drone control.
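At the heart of any CNN is the 2-D convolution, which is exactly the operation a dedicated engine like CNN-IP accelerates in hardware. The sketch below is purely illustrative (it is not SVNet code or a Renesas API): a naive Python convolution sliding a small vertical-edge kernel over an image, showing how a learned filter responds to a feature such as an object boundary.

```python
def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution -- the core operation that a
    dedicated CNN engine accelerates in hardware."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Multiply the kernel against the image patch and sum.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A vertical-edge kernel responds strongly at the 0 -> 1 step in the image.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
```

A real network stacks many such filters with nonlinearities and pooling; the point of a hardware block like CNN-IP is that these multiply-accumulate loops dominate the compute, so offloading them cuts both latency and power.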

In addition to the CNN-IP dedicated deep-learning module, the Renesas R-Car V3H and R-Car V3M feature the IMP-X5 image-recognition engine. The on-chip image signal processor (ISP) converts raw sensor signals for image rendering and recognition processing, making it possible to configure a system with inexpensive cameras that lack built-in ISPs and thereby reduce the overall bill-of-materials (BOM) cost.

StradVision’s SVNet deep-learning software is an AI perception solution designed for high recognition precision in low-light environments and for handling occlusion, where objects are partially hidden by other objects. It’s also built to function reliably in poor weather conditions.

The basic software package for the R-Car V3H platform performs simultaneous vehicle, person, and lane recognition, processing image data at a rate of 25 frames/s, enabling swift evaluation and proof-of-concept development.
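That 25-frames/s figure implies a hard real-time budget of 40 ms per frame for the entire recognition pipeline. The back-of-the-envelope sketch below illustrates the arithmetic; the per-stage timings are hypothetical placeholders, not Renesas or StradVision measurements.

```python
# Frame-time budget for a 25-frame/s recognition pipeline.
FPS = 25
frame_budget_ms = 1000.0 / FPS  # 40 ms to recognize vehicles, people, lanes

# Hypothetical stage timings, for illustration only:
stages_ms = {"ISP": 5.0, "CNN inference": 25.0, "post-processing": 6.0}
total_ms = sum(stages_ms.values())

print(f"budget {frame_budget_ms:.0f} ms, used {total_ms:.0f} ms, "
      f"headroom {frame_budget_ms - total_ms:.0f} ms")
```

Any stage that overruns steals headroom from the rest, which is one reason offloading inference to a fixed-function engine like CNN-IP matters for sustaining the frame rate at low power.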

Building on these capabilities, developers who want to add signs, road markings, and other objects as recognition targets can get customization support from StradVision for deep-learning-based object recognition. Such support covers all steps from training through embedding the software in mass-produced vehicles.

Beyond having been ported onto Renesas’ R-Car V3H and V3M, StradVision’s software was the first deep-learning-based algorithm to be ported onto TI’s TDA2x.

“StradVision is excited to combine forces with Renesas to help developers efficiently advance their efforts to make the next big leap in ADAS,” said Junhwan Kim, CEO of StradVision. “This joint effort will not only translate into quick and effective evaluations, but also deliver greatly improved ADAS performance. With the massive growth expected in the front-camera market in the coming years, this collaboration puts both StradVision and Renesas in excellent position to provide the best possible technology.”

By 2021, StradVision expects to have nearly 7 million vehicles on the world’s roadways using SVNet software, which is compliant with standards such as Euro NCAP and China’s Guobiao (GB). StradVision is already deploying ADAS vehicles on Chinese roads.

Separately, StradVision announced a partnership with a leading (but unnamed) global Tier 1 supplier to develop custom camera technology for autonomous buses. The project will pair StradVision’s SVNet software with the Nvidia Xavier chipset platform and will focus on three key areas: object detection (OD), traffic-sign recognition (TSR), and traffic-light recognition (TLR).

These three elements will play a key role in allowing the vehicles that use the technology developed from this partnership to accurately navigate the roadways when in self-driving mode.

Renesas R-Car SoCs featuring the new joint deep-learning solution, including software and development support from StradVision, are scheduled to be available to developers by early 2020.
