4D Radar Advances: Improved Situational Awareness for Autonomous Vehicles
What you’ll learn:
- How advanced radar improves detection accuracy.
- Why multi-sensor systems enhance safety.
- Radar’s strengths in poor visibility.
- How software simplifies integration and boosts performance.
Vehicle safety has been the central design constraint of every serious advanced driver-assistance system (ADAS) and autonomous driving program for the past decade, and the scale of the problem explains why. More than 40,000 people are killed and over 2.5 million injured each year in crashes on U.S. roads alone. Government data from the U.S. and U.K. consistently identify human error as a contributing factor in more than 80% of serious incidents, with distraction, impairment, and fatigue among the most frequently cited causes.
The evolution from basic ADAS to higher levels of vehicle automation has the potential to reduce the severity and frequency of these outcomes. And advances in multi-modal sensing technology like radar and AI are making that evolution progressively more practical.
The Industry Approach to Radar
The industry's path toward building that capability has split along two lines. Some manufacturers adopted camera-centric systems that are now in mass production in consumer vehicles, while others built multi-sensor arrays that integrate vision, LiDAR, and radar into a combined perception stack. The multi-sensor approach is most visible in fully autonomous robotaxi fleets, where redundancy across sensing modalities is treated as a baseline safety requirement.
Multi-sensor arrays are inherently safer because the weaknesses of one modality are compensated for by the strengths of another. For example, a camera can see whether a traffic light is red or green, read road signs, and distinguish a school bus from a delivery van. Radar, as its complement, can operate in all weather conditions, measure velocity directly through the Doppler effect, and detect objects at longer range.
When these capabilities are fused, the resulting perception system is more robust than either sensor alone, and the addition of radar's immunity to glare, fog, and precipitation can be particularly vital for building safe autonomous systems capable of earning the public’s trust.
The trajectory is increasingly clear. One of the most prominent fully autonomous robotaxi fleets currently in commercial service deploys 14 cameras, four LiDARs, and six radar units, supplemented by external audio receivers to detect emergency sirens.
Cost pressure is a constant factor in how many sensors any production vehicle can carry, and engineers across the industry are seeking ways to reduce hardware complexity while simultaneously enhancing perception quality. Notably, the latest generation of that same fleet reduced its total camera count by more than half and dropped from five LiDAR units to four. The number of radar units remained unchanged. The emphasis is shifting from the quantity of sensors deployed to the quality of data that can be extracted from each one.
Why Radar is Gaining Importance
Camera and LiDAR systems rely on visible or near-infrared light, which is impaired by fog, rain, and snow. As autonomous-vehicle deployments expand from the Sunbelt and the Bay Area into regions with harsher climates and lower average visibility, the inclusion of radar will be increasingly important to maintain safety margins across a wider range of operating conditions.
Radar waves in the 77-GHz automotive band pass through fog, heavy rain, smoke, dust, and snow with minimal attenuation. The technology is unaffected by low-light conditions and doesn’t require adaptation time when a vehicle moves between zones of sharply different illumination, such as entering or exiting a tunnel.
Radar measures the velocity of objects directly through Doppler shift, providing a continuous speed measurement that doesn’t depend on comparing successive image frames. And radar isn’t limited to line-of-sight detection; it can identify otherwise hidden obstacles ahead of the vehicle in front, providing additional reaction time for emergency maneuvers.
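To make that relationship concrete, the short sketch below converts a measured Doppler shift into radial velocity for the 77-GHz automotive band. The numbers are illustrative only, not drawn from any test described in this article.

```python
# Minimal sketch: radial velocity from a measured Doppler shift in the
# 77-GHz automotive band. Values are illustrative, not test data.
C = 299_792_458.0    # speed of light, m/s
F_CARRIER = 77e9     # automotive radar carrier frequency, Hz

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial velocity (m/s) implied by a Doppler shift.

    The factor of 2 accounts for the round trip of the reflected wave;
    a positive shift means the target is closing on the radar.
    """
    return doppler_shift_hz * C / (2.0 * F_CARRIER)

# A 5.1-kHz shift corresponds to roughly 9.9 m/s (about 22 mph) of closing speed.
print(f"{radial_velocity(5.1e3):.1f} m/s")
```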
Traditional 3D radars measure range, angle, and speed. They lack elevation data, which limits their ability to determine whether a detected object is above the roadway, such as a bridge or overhead sign, or is an obstacle in the driving path. To compensate, many 3D systems are programmed to ignore stationary objects that appear above a certain size or in certain positions. That assumption can have significant consequences when it’s incorrect.
4D radar adds elevation measurement — the ability to resolve objects in the vertical plane. With the full spatial position of each detection point available, the system can separate a bridge from a vehicle beneath it, distinguish an overhead gantry from a stalled truck, and provide height data that’s essential for tall vehicles approaching low-clearance structures.
The richer spatial information also yields denser point clouds. Where traditional 3D radars may produce sparse clusters of detections that are difficult to classify, 4D systems can generate point clouds dense enough to resolve the distinct shapes of objects and road users, bringing radar closer to the kind of spatial detail that was previously available only through LiDAR.
In this article, we highlight some of the challenging situations that test the limits of vehicle perception systems and walk through results from a demonstration at CES 2026 that illustrate how 4D radar can be integrated more cost-effectively into production vehicles.
Challenging Scenarios for Radar
Bridges vs. Gantries
A 3D radar that detects a large stationary object ahead of the vehicle can’t determine whether it’s a bridge that can be driven under or a barrier requiring an emergency stop. Without elevation data, the system either brakes unnecessarily, creating a phantom-braking event, or it’s programmed to assume the object is benign, creating a potential safety gap.
4D radar resolves this by providing height measurement through its vertical angular resolution. When testing the Oculii 4D radar system, overhead structures such as highway gantries were measured with Z-axis detection extending to approximately 12 meters above the road surface. This capability is especially important for commercial vehicles, where bridge clearance is a constant operational concern. The system generates a 3D point map with color-coded height data, allowing the perception stack to determine with confidence whether the vehicle can safely pass underneath.
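To illustrate the geometry, here is a minimal sketch of how a perception stack might use a 4D detection's elevation angle to decide whether a stationary object is overhead or in the driving path. The function names, clearance threshold, and sensor mounting height are assumptions for illustration, not Ambarella's implementation.

```python
import math

# Minimal sketch (not Ambarella's implementation): classify a stationary
# detection as overhead structure or in-path obstacle using its elevation
# angle. Clearance threshold and sensor height are assumed values.

CLEARANCE_M = 4.2    # assumed height budget for a tall commercial vehicle

def detection_height(range_m: float, elevation_deg: float,
                     sensor_height_m: float = 0.5) -> float:
    """Height of a detection above the road, from range and elevation angle."""
    return sensor_height_m + range_m * math.sin(math.radians(elevation_deg))

def is_overhead(range_m: float, elevation_deg: float) -> bool:
    """True if the object sits above the vehicle's clearance envelope."""
    return detection_height(range_m, elevation_deg) > CLEARANCE_M

# A gantry at 80 m and 4 degrees of elevation sits ~6.1 m up: drivable.
print(is_overhead(80.0, 4.0))   # True
# A stalled truck at 80 m near 1.5 degrees is ~2.6 m: an obstacle.
print(is_overhead(80.0, 1.5))   # False
```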
Tunnels
Tunnels present a dual challenge for perception systems. For cameras, rapid transitions between bright daylight and tunnel lighting can cause temporary saturation or underexposure, reducing usable visual data during the most critical moments of the transition. For radar, the enclosed geometry creates multipath reflections from walls and ceilings, generating interference that can make it difficult to distinguish actual objects from reflected signals.
In testing, the Oculii 4D radar system demonstrated clean object detection inside tunnels, with strong rejection of multipath signal reflections. Through its enhanced angular resolution, the system maintains reliable perception in enclosed environments where both cameras and conventional radar systems are significantly challenged.
Small Objects at Long Range
Detecting animals and small objects on the roadway is particularly demanding because the returned radar signals are close to the noise threshold. If detection is inconsistent, the system may disregard the object as interference until it’s too late to react. Animals present especially strong challenges because they’re weak reflectors of radar energy.
To test this capability, Ambarella used a small toy dog, approximately the size of a Pomeranian or a large cat, and drove the test vehicle toward it. Across multiple runs, the Oculii 4D radar produced a consistent detection range of greater than 100 meters, with a stable track of the target and no blinking or ghosting. That’s sufficient to allow for emergency braking at speeds above 70 mph. Under the same conditions, the vehicle's LiDAR system was unable to detect the target, even at 30 meters.
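A back-of-envelope check shows why 100 meters leaves adequate margin. Assuming a half-second of system latency and dry-road braking at 8 m/s² (assumed values, not test parameters), a full stop from 70 mph takes roughly 77 meters:

```python
# Back-of-envelope check (assumed latency and deceleration, not test
# parameters): does a 100-m detection range cover a stop from 70 mph?

def stopping_distance(speed_mps: float, decel_mps2: float = 8.0,
                      latency_s: float = 0.5) -> float:
    """Distance traveled during system latency plus braking to a stop."""
    return speed_mps * latency_s + speed_mps**2 / (2.0 * decel_mps2)

v = 70 * 0.44704    # 70 mph in m/s (~31.3 m/s)
print(f"{stopping_distance(v):.0f} m")    # ~77 m, inside the 100-m range
```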
In further testing, the Oculii 4D radar detected plastic traffic barriers, distinguishing them reliably against background clutter from a distance of 235 meters. Under the same conditions, the vehicle's LiDAR sensors detected the barriers at 52 meters.
Vulnerable Road Users
The detection of pedestrians, cyclists, and other vulnerable road users is the most consequential capability in any perception system. In many jurisdictions, fault is automatically assigned to the driver or vehicle operator in the event of a collision, unless it can be demonstrated otherwise. Pedestrians and cyclists are weak reflectors whose radar signatures can be masked by the much larger returns from surrounding vehicles and infrastructure.
At closer range, the Oculii 4D radar's angular and distance resolution enable the system to resolve the human body shape with sufficient fidelity to support classification in a sensor-fusion architecture. In longer-range testing, the system detected a pedestrian standing beside a vehicle at 350 meters. When the pedestrian was deliberately occluded from direct line of sight, the system was still able to detect them at 143 meters.
Separate road testing demonstrated that the system's point-cloud density is sufficient to resolve the wheels of a bicycle and the distinct shape of its rider, including head, body, and legs, from a distance of 150 meters. Traditional radar systems would be expected to dismiss a cyclist-sized return as noise at that range.
A Software-Defined Approach to Integration
For 4D radar to reach deployment at scale, it must integrate into existing vehicle architectures without requiring disruptive redesigns. At CES 2026, Ambarella demonstrated an approach built around its Oculii 4D imaging radar software running on a single CV3 system-on-chip (SoC).
The demonstration vehicle was a standard rental car, selected deliberately to illustrate that the system requires no significant modification to integrate. The CV3 SoC, located in the vehicle's long-range front radar unit, processes raw data from that unit along with data from four additional short-range radar units positioned at each corner of the vehicle. It generates a single point cloud with a 360-degree field of view. The point cloud is then sent to the zone controller responsible for sensor fusion.
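The data flow can be pictured as five heads streaming detections into a single merge step. The sketch below is a simplified illustration of that idea; the class and function names are hypothetical, and the actual CV3 pipeline operates on raw radar data rather than pre-formed detections.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

# Simplified illustration of the centralized flow described above: five
# radar heads feed one merge step that outputs a single vehicle-frame
# point cloud. Names are hypothetical, not Ambarella's API.

@dataclass
class Detection:
    x: float; y: float; z: float    # meters, in the emitting head's frame
    velocity: float                 # radial velocity from Doppler, m/s

@dataclass
class RadarHead:
    offset: Tuple[float, float]     # mounting position on the vehicle, m
    yaw_deg: float                  # mounting orientation, degrees

def to_vehicle_frame(head: RadarHead, d: Detection) -> Detection:
    """Rotate and translate one detection into the common vehicle frame."""
    c = math.cos(math.radians(head.yaw_deg))
    s = math.sin(math.radians(head.yaw_deg))
    return Detection(head.offset[0] + c * d.x - s * d.y,
                     head.offset[1] + s * d.x + c * d.y,
                     d.z, d.velocity)

def merge(frames: List[Tuple[RadarHead, List[Detection]]]) -> List[Detection]:
    """One 360-degree point cloud, ready to hand to the zone controller."""
    return [to_vehicle_frame(h, d) for h, dets in frames for d in dets]
```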
This architecture enables OEMs to migrate to a processor-less monolithic microwave integrated circuit (MMIC) radar head design, where raw radar data streams directly to a central domain controller, such as the CV3. The front radar geometry remains nearly identical to existing installations, ensuring modularity without bumper redesign or significant integration cost.
Ambarella's patented Virtual Aperture Imaging techniques and AI algorithms adapt waveforms dynamically in real time, delivering 0.5-degree angular resolution and detection ranges up to 350 meters without additional physical transceiver elements.
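To put 0.5 degrees in perspective, a simple small-angle estimate converts angular resolution into the lateral separation the radar can resolve at a given range (an illustrative calculation, not a specification):

```python
import math

# Small-angle estimate (illustrative, not a specification): the lateral
# separation that 0.5-degree angular resolution can resolve at range.
def cross_range_resolution(range_m: float, angular_res_deg: float = 0.5) -> float:
    """Smallest lateral separation resolvable at the given range, in meters."""
    return range_m * math.radians(angular_res_deg)

print(f"{cross_range_resolution(100.0):.2f} m")   # ~0.87 m at 100 m
print(f"{cross_range_resolution(350.0):.2f} m")   # ~3.05 m at 350 m
```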
The partitioning centralizes radar processing in the front-mounted unit, reducing system cost and enabling 4D radar integration without the disruptive changes to a vehicle's electrical and electronic infrastructure that would be required by a full sensor overhaul.
For OEMs still operating with traditional 3D radar, this architecture offers a practical migration path to 4D capability. For those already committed to 4D radar, the centralized software-defined approach, running on a single SoC with AI-driven waveform adaptation, delivers angular resolution and range performance that can meet the requirements of L2+ through L4 perception stacks.
What Comes Next for Radar?
Radar has served as a required component in multi-sensor perception architectures for years, though its contribution was largely confined to forward-collision warning and adaptive cruise control. The point clouds generated by 3D radar were too sparse to support the object classification and tracking accuracy demanded by higher levels of autonomy, and that constraint kept radar in a secondary role within the perception stack.
4D radar, processed centrally through AI-driven waveform adaptation on a single SoC, has substantially closed that density gap. The point clouds produced by these systems are now rich enough to resolve individual road users, separate overhead structures from ground-level obstacles, and maintain stable object tracks in cluttered environments.
High-end LiDAR generates denser point clouds in absolute terms, but much of that additional density exceeds what the perception pipeline requires for confident decision-making at highway speeds. Moreover, LiDAR carries environmental-sensitivity limitations in fog, rain, and snow that radar does not share.
The practical question for OEMs is whether 4D radar's density has reached the threshold of sufficiency for their target autonomy level. The CES 2026 results suggest it has for a widening range of L2+ through L4 scenarios.
Those results were demonstrated in a standard rental vehicle equipped with a single CV3 SoC processing raw data from five Oculii radar heads. The system detected small, weakly reflective objects where the vehicle's LiDAR could not, maintained reliable perception inside tunnels, and tracked pedestrians at 350 meters. The software-defined architecture requires no redesign of the vehicle's electrical infrastructure to integrate, which addresses one of the persistent barriers to radar adoption at higher autonomy levels.
For engineering teams evaluating their next-generation sensor configurations, the performance and integration profile of centralized 4D radar has improved enough to warrant a thorough reexamination.
About the Author

Jason Huang
Vice President of Systems, Ambarella
Jason Huang is Vice President of Systems at Ambarella, where he leads the company’s software development, automotive solutions, and automotive marketing initiatives. As one of Ambarella’s earliest engineering team members, he played a foundational role in shaping the company’s technology roadmap and system development.
Today, Jason oversees strategic engineering and market development efforts focused on bringing Ambarella’s advanced AI SoCs to commercial deployment across the automotive sector. With a deep understanding of both product innovation and market strategy, he helps bridge technical excellence and customer success as Ambarella accelerates commercialization of its long-term R&D investments.