Firm unveils centralized 4D imaging radar architecture
EP&T Magazine
Ambarella’s centrally processed architecture serves autonomous mobility systems
Ambarella Inc., a Santa Clara, CA-based edge AI semiconductor firm, has rolled out what it is calling the ‘first’ centralized 4D imaging radar architecture, one that allows both central processing of raw radar data and deep, low-level fusion with other sensor inputs, including cameras, lidar and ultrasonics.
The breakthrough architecture provides greater environmental perception and safer path planning in AI-based ADAS and L2+ to L5 autonomous driving systems, as well as in autonomous robotics. It features Ambarella’s Oculii radar technology, which includes the only AI software algorithms that dynamically adapt radar waveforms to the surrounding environment, providing high angular resolution of 0.5 degrees, an ultra-dense point cloud of tens of thousands of points per frame and a detection range of more than 500 meters.
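To put the quoted 0.5-degree angular resolution in context, a short illustrative calculation (not from Ambarella's spec sheet) shows the cross-range cell size that angular resolution implies at a given distance, using the small-angle approximation:

```python
import math

def cross_range_resolution(angular_res_deg: float, range_m: float) -> float:
    """Approximate cross-range cell size: range * angular resolution (in radians)."""
    return range_m * math.radians(angular_res_deg)

# At the quoted 0.5-degree resolution:
print(round(cross_range_resolution(0.5, 100), 2))  # ~0.87 m cell at 100 m
print(round(cross_range_resolution(0.5, 500), 2))  # ~4.36 m cell at 500 m
```

In other words, 0.5 degrees resolves targets separated by roughly a meter at city-driving distances, which is why it is considered imaging-grade resolution for radar.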
All of this is achieved with an order of magnitude fewer MIMO antenna channels, which reduces data bandwidth and delivers significantly lower power consumption than competing 4D imaging radars. Ambarella’s centralized 4D imaging radar with Oculii technology provides a flexible, high-performance perception architecture that lets system integrators future-proof their radar designs.
To create this unique, cost-effective new architecture, Ambarella optimized the Oculii algorithms for its CV3 AI domain controller SoC family and added specific radar signal processing acceleration. The CV3’s industry-leading AI performance per watt offers the high compute and memory capacity needed to achieve high radar density, range and sensitivity. Additionally, a single CV3 can efficiently provide high-performance, real-time processing for perception, low-level sensor fusion and path planning, centrally and simultaneously, within autonomous vehicles and robots.
Capabilities for both radar and camera technologies
“No other semiconductor and software company has advanced in-house capabilities for both radar and camera technologies, as well as AI processing,” said Ambarella president and CEO Fermi Wang. “This expertise allowed us to create an unprecedented centralized architecture that combines our unique Oculii radar algorithms with the CV3’s industry-leading domain control performance per watt to efficiently enable new levels of AI perception, sensor fusion and path planning that will help realize the full potential of ADAS, autonomous driving and robotics.”
The data sets of competing 4D imaging radar technologies are too large to transport and process centrally. Each module generates multiple terabits of data per second and consumes more than 20 watts of power, because it relies on thousands of MIMO antennas to achieve the angular resolution required for 4D imaging radar. Multiplied across the six or more radar modules needed to cover a vehicle, that data volume makes central processing impractical for radar technologies that must process data across thousands of antennas.
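The scale of the problem follows directly from the article's figures. A back-of-the-envelope calculation makes the point; the exact per-module data rate ("multiple terabits per second") is assumed here to be 2 Tb/s purely for illustration:

```python
# Figures from the article, with one illustrative assumption.
PER_MODULE_TBPS = 2.0     # assumed: "multiple terabits per second" per module
MODULES_PER_VEHICLE = 6   # article: "six or more radar modules" per vehicle
POWER_PER_MODULE_W = 20   # article: "more than 20 watts of power" per module

total_tbps = PER_MODULE_TBPS * MODULES_PER_VEHICLE
total_radar_power_w = POWER_PER_MODULE_W * MODULES_PER_VEHICLE

print(f"Aggregate raw radar data: {total_tbps} Tb/s")    # 12.0 Tb/s
print(f"Aggregate radar power: {total_radar_power_w} W")  # 120 W
```

Even under this conservative assumption, the aggregate raw data rate exceeds what any practical in-vehicle network can carry to a central processor, which is why conventional designs pre-filter at the edge.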
Software-defined centralized architecture
By applying AI software to dynamically adapt the radar waveforms generated with existing monolithic microwave integrated circuit (MMIC) devices, and using AI sparsification to create virtual antennas, Oculii technology reduces the antenna array for each processor-less MMIC radar head in this new architecture to 6 transmit x 8 receive. The number of MMICs is drastically reduced overall, while still achieving an extremely high 0.5 degrees of joint azimuth and elevation angular resolution. Additionally, Ambarella’s centralized architecture consumes significantly less power at maximum duty cycle and reduces data-transport bandwidth by 6x, while eliminating the need for pre-filtered edge processing and its resulting loss of sensor information.
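The 6 transmit x 8 receive figure can be translated into MIMO terms. In a MIMO radar, the physical transmit and receive antennas combine into Tx x Rx virtual channels; the sketch below works through that arithmetic for one radar head (the comparison count for a conventional module is an assumption, standing in for the "thousands of antennas" the article attributes to competing designs):

```python
# Virtual-array arithmetic for one MMIC radar head in the centralized
# architecture. A MIMO radar forms Tx * Rx virtual channels from its
# physical antennas; Oculii's AI sparsification then synthesizes further
# virtual antennas in software on top of this physical array.
TX, RX = 6, 8
virtual_channels_per_head = TX * RX  # 48 physical virtual channels

# Assumed, for comparison only: a conventional 4D imaging radar module
# with on the order of thousands of MIMO channels, per the article.
CONVENTIONAL_CHANNELS = 2000

reduction_factor = CONVENTIONAL_CHANNELS / virtual_channels_per_head
print(virtual_channels_per_head)       # 48
print(round(reduction_factor, 1))      # ~41.7x fewer channels
```

The roughly 40x reduction in channel count (under the assumed comparison figure) is what makes it feasible to ship raw, unfiltered radar data to the central CV3 processor rather than pre-processing it at the edge.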
The software-defined centralized architecture also enables allocation of the CV3’s processing resources based on real-time conditions, both between sensor types and among sensors of the same type. For example, in extreme rain that diminishes long-range camera data, the CV3 can shift some of its resources to improve radar inputs. Likewise, if it is raining while driving on a highway, the CV3 can focus on data coming from front-facing radar sensors to further extend the vehicle’s detection range while providing faster reaction times. This can’t be done with an edge-based architecture, in which the radar data is processed at each module and processing capacity is specified for worst-case scenarios and often goes underutilized.
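The allocation policy described above can be sketched as a simple budget-splitting function. This is a hypothetical illustration of the idea, not Ambarella's API or scheduler; the sensor names, shares and weights are assumptions chosen to mirror the rain-and-highway example in the text:

```python
def allocate_compute(conditions: dict, budget: float = 1.0) -> dict:
    """Split a normalized compute budget across sensor streams (illustrative)."""
    # Baseline even split between camera and radar processing (assumed).
    shares = {"camera": 0.5, "radar": 0.5}
    if conditions.get("heavy_rain"):
        # Long-range camera data degrades in rain: shift budget toward radar.
        shares = {"camera": 0.3, "radar": 0.7}

    # Within the radar share, weight individual sensors (assumed names).
    weights = {"front_radar": 1.0, "corner_radar": 1.0}
    if conditions.get("highway"):
        # On a highway, bias toward front-facing radar to extend
        # detection range and shorten reaction time.
        weights["front_radar"] = 3.0

    total_w = sum(weights.values())
    alloc = {"camera": shares["camera"] * budget}
    for name, w in weights.items():
        alloc[name] = shares["radar"] * budget * w / total_w
    return alloc

print(allocate_compute({"heavy_rain": True, "highway": True}))
```

The key contrast with an edge-based design is that this reallocation happens in one place at runtime; fixed per-module processors cannot lend spare capacity to the sensors that need it most at that moment.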