As the automotive industry transitions to autonomous vehicles (AVs) that fuse sensor information from cameras, radar, and LiDAR, an onboard computing platform is needed to process the flood of data in real time and to run the perception algorithms that drive these vehicles efficiently on a limited energy budget. Even with state-of-the-art accelerator architectures, current AV computing systems still consume considerable energy, which, given that many AVs will be electrically powered, translates into reduced vehicle range.
Addressing this issue, Recogni, a San Jose, Calif.-based startup, recently came out of stealth mode to discuss its approach to perception processing (identifying where the bikes and pedestrians are in real time) using less power than vision alternatives such as Intel's Mobileye and NVIDIA's Drive Xavier.
“These vehicles need datacenter-class performance while consuming minuscule amounts of power,” said Recogni CEO RK Anand. “Leveraging our background in machine learning, computer vision, silicon, and system design, we are engineering a fundamentally new system that benefits the auto industry with very high efficiency at the lowest power consumption.” Recogni claims its technology will get the computational and inference work done for the entire car using less than 10 W, while processing the huge volume of data in real time.
Recogni’s integrated module comprises three passively cooled image sensors and a custom AI chip. An Ethernet cable connects the module to an external depth sensor (LiDAR or radar), whose data the onboard chip fuses with the camera footage to identify nearby vehicles, pedestrians, and other objects of interest.
Here’s an illustration of what Recogni says its Vision Cognition System “sees.” (Source: Recogni)
The custom chip inside the Recogni module is said to perform more than 1,000 teraoperations per second (TOPS), i.e. over a quadrillion calculations per second, while capturing and analyzing up to three uncompressed 8- to 12-Mpixel streams at 60 frames/s. It can recognize (detect, segment, and classify) objects, fuse depth-sensor information into those objects, and deliver the results to the central system within 16 ms for urban settings and 8 ms for highway settings. The company further claims 70% compute efficiency in typical vision applications.
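The scale of those numbers is easier to appreciate with some quick arithmetic. The sketch below derives the raw camera data rate implied by the article's figures; the 12-bit pixel depth is an assumption for illustration, not a published Recogni specification.

```python
# Back-of-envelope arithmetic on the article's stated camera specs.
# BITS_PER_PIXEL is an assumed raw sensor bit depth, not a published figure.

MPIXELS = 12e6        # pixels per frame (upper end of the 8- to 12-Mpixel range)
FPS = 60              # frames per second per stream
STREAMS = 3           # three uncompressed camera streams
BITS_PER_PIXEL = 12   # assumed raw sensor bit depth

raw_bits_per_s = MPIXELS * FPS * STREAMS * BITS_PER_PIXEL
print(f"raw input: {raw_bits_per_s / 1e9:.1f} Gbit/s")  # ~25.9 Gbit/s

frame_period_ms = 1000 / FPS
print(f"frame period: {frame_period_ms:.1f} ms")        # ~16.7 ms
```

Note that the frame period at 60 frames/s is about 16.7 ms, so the quoted 16-ms urban latency amounts to roughly one frame time: the pipeline must keep pace with the cameras with essentially no buffering.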
Recogni reported that its device can identify small objects, such as traffic lights, from over 200 m away in real time. Unlike LiDAR and radar, the system is said to be able to tell whether the lights are red, yellow, or green because it works on imaging data. The Recogni Vision Cognition Processor uses a diverse set of image sensors to identify significantly smaller objects at a much larger distance than competing solutions, according to the company, while consuming a fraction of the power.
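A rough angular-resolution estimate shows why high pixel counts matter for that 200-m claim. The sensor resolution and field of view below are assumptions chosen for illustration; Recogni has not published its optics.

```python
import math

# Rough pixels-on-target estimate for a traffic light at 200 m.
# H_PIXELS and HFOV_DEG are illustrative assumptions, not Recogni's specs.

H_PIXELS = 4000   # horizontal resolution of a ~12-Mpixel sensor (assumed)
HFOV_DEG = 30     # assumed horizontal field of view of a narrow camera
TARGET_M = 0.3    # diameter of a typical traffic-light lens
RANGE_M = 200     # distance from the article's claim

deg_per_pixel = HFOV_DEG / H_PIXELS
target_deg = math.degrees(math.atan2(TARGET_M, RANGE_M))
pixels_on_target = target_deg / deg_per_pixel
print(f"~{pixels_on_target:.0f} pixels across the lens")  # ~11 pixels
```

Under these assumptions a traffic-light lens spans only about a dozen pixels at 200 m, which is why detecting (let alone color-classifying) such objects at range demands both high-resolution sensors and substantial compute.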
One reason for the module's efficiency is its reliance on passive cooling, which eliminates the need for a power-consuming fan. Another is the onboard chip's close physical proximity to the three included cameras, which trims the electricity spent moving sensory data from the cameras to the processor. In all, the system is said to consume about 8 W of power.
Ashwini Choudhary, Recogni co-founder and Chief Business Officer, and Chief Technology Officer Eugene Feinberg, a former Cisco Systems engineer, worked on the idea while running a San Jose-based camera tech startup they co-founded called mPerpetuo. A little over a year ago, they launched Recogni with CEO RK Anand, a founding engineer at Juniper Networks.
In a blog post, Choudhary announced that the company has received $25 million in Series A financing led by GreatPoint Ventures, with participation from Toyota AI Ventures, BMW i Ventures, Faurecia, Fluxunit (the VC arm of lighting and photonics company OSRAM), and DNS Capital. “We are currently in discussion with multiple auto manufacturers to provide them a full suite of enabling technology, from modules to software,” he said.
Recogni intends initially to target level 2 autonomous vehicles, as defined by the Society of Automotive Engineers: vehicles equipped with advanced driver-assistance systems (such as Cadillac's Super Cruise, NVIDIA's Drive AutoPilot, and Volvo's Pilot Assist) that are limited to highways and marked roads.
In the near future, it plans to pivot to platforms for level 3 vehicles, where the transition is significant: the driver is no longer required to monitor the environment, though they must be able to take back control at all times. Following that, the company will progress to level 4 cars, which can largely drive themselves without constant human intervention, and eventually to fully autonomous level 5 vehicles.
Recogni executives predict that around 2024 we will start seeing AI systems that make robotic taxis feasible from both a cost and a capability perspective. Personal self-driving cars will follow a year or two later, according to the company.