
Sensor fusion has been discussed for years across a diverse array of applications. However, it takes on a highly specialized design premise in automotive applications such as advanced driver assistance systems (ADAS) and autonomous vehicles (AVs).
Perception and sensor fusion systems are among the most computationally complex areas of ADAS and AV designs, as they crunch all the sensor data and determine what a vehicle is seeing. More specifically, sensor fusion merges information from radars, lidar (light detection and ranging) and cameras to produce a single model of the space around a vehicle, a crucial capability for ADAS and AV designs. This model is created by balancing the strengths of the various sensors to form a more accurate picture of the vehicle’s surroundings.
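To make that “balancing” step concrete, here is a minimal sketch that fuses range estimates from several sensors by weighting each measurement with the inverse of its assumed noise variance, so the more precise sensor dominates the fused result. The sensor readings and variances below are illustrative assumptions, not values from any production ADAS stack.

```python
# Minimal sketch: inverse-variance weighted fusion of range estimates.
# All sensor readings and noise variances below are illustrative assumptions.

def fuse_measurements(measurements):
    """Fuse (value, variance) pairs into a single estimate.

    Each measurement is weighted by 1/variance, so the more precise
    sensor contributes more to the fused result.
    """
    weights = [1.0 / var for _, var in measurements]
    fused_value = sum(w * val for w, (val, _) in zip(weights, measurements)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Hypothetical range readings to the same object (meters, variance in m^2):
radar_range = (25.4, 0.50)   # radar: good range accuracy
camera_range = (27.0, 4.00)  # camera: coarse depth estimate
lidar_range = (25.1, 0.10)   # lidar: most precise in this scenario

value, variance = fuse_measurements([radar_range, camera_range, lidar_range])
print(f"fused range: {value:.2f} m (variance {variance:.3f} m^2)")
```

In a real perception pipeline, a step like this would typically sit inside a tracking filter (a Kalman filter, for example) that also handles time alignment and calibration between sensors.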
As a result of its vital role in ADAS and AV systems, the global automotive sensor fusion market is expected to reach $7.3 billion by 2028, growing at a 22.8% CAGR over the forecast period, according to Report Linker.

The importance of sensor fusion
Two key dimensions make sensor fusion crucial in ADAS and AV designs. First, sensor fusion combines data from different types of sensors using specialized software algorithms. While radars are effective at measuring distance and speed, cameras are adept at interpreting signs and recognizing objects such as humans, bikes and other vehicles. Cameras, however, can be dazzled by light, rain and snow, in which case lidar comes to the rescue by offering precise object detection.
In short, sensor fusion acquires information from different sensors to produce an accurate and complete picture of the vehicle’s surroundings.
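As a rough illustration of how those complementary strengths might be combined at the object level, the sketch below pairs a hypothetical camera detection (better at classification) with a lidar detection (better at localization) of the same object. The detection fields, values and matching threshold are assumptions for illustration, not a real perception API.

```python
# Rough object-level (late) fusion sketch: take the class label from the
# camera and the precise position from the lidar for the same object.
# Detection contents and the matching threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "lidar" or "radar"
    label: str         # object class, if the sensor can classify
    position: tuple    # (x, y) in meters, vehicle frame
    confidence: float  # detection confidence in [0, 1]

def fuse_objects(camera_det, lidar_det, max_gap_m=1.5):
    """Merge two detections if they plausibly describe the same object."""
    dx = camera_det.position[0] - lidar_det.position[0]
    dy = camera_det.position[1] - lidar_det.position[1]
    if (dx * dx + dy * dy) ** 0.5 > max_gap_m:
        return None  # too far apart to be the same object
    return {
        "label": camera_det.label,       # camera is better at classification
        "position": lidar_det.position,  # lidar is better at localization
        "confidence": min(camera_det.confidence, lidar_det.confidence),
    }

cam = Detection("camera", "pedestrian", (12.3, 1.8), 0.91)
lid = Detection("lidar", "unknown", (12.1, 1.9), 0.97)
print(fuse_objects(cam, lid))
```

Production systems replace this simple nearest-object matching with proper data association and tracking, but the division of labor between sensors follows the same idea.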
“Sensor fusion combines multiple sensors to help deliver a complete picture of what is happening in an environment, helping overcome the individual weak spots of different sensing technologies,” said Giovanni Campanella, general manager at Texas Instruments (TI).
“Having only a lidar in a vehicle is not going to be enough to enable autonomous navigation,” he added. “Adding other sensors like vision and radar and then implementing AI and ML algorithms will allow the vehicle to recognize and learn from new situations and quickly adapt to them.”

Second, while the notion of sensor fusion isn’t new, its union with artificial intelligence (AI) and machine learning (ML) technologies is quite recent. The data produced by sensor fusion enables AI and ML algorithms to efficiently interpret the mountains of information coming from the sensors.
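To make the link between fused data and ML concrete, here is a toy sketch in which a fused feature vector (range, closing speed, object size, camera class score) is scored by a tiny logistic model to decide whether an alert should be raised. The feature choices, weights and threshold are placeholder assumptions, not a trained model from any vendor.

```python
# Toy sketch: a fused feature vector scored by a tiny logistic model to
# decide whether a tracked object warrants a collision alert. Features,
# weights and the threshold are placeholder assumptions for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fused features for one tracked object:
# [range_m, closing_speed_mps, object_height_m, camera_pedestrian_score]
fused_features = [14.0, 3.2, 1.7, 0.88]

# Placeholder weights and bias (assumed for illustration only).
weights = [-0.15, 0.60, 0.40, 1.20]
bias = -0.50

score = sigmoid(sum(w * x for w, x in zip(weights, fused_features)) + bias)
print(f"alert probability: {score:.2f}")
if score > 0.8:
    print("raise pedestrian-collision alert")
```

Real ADAS stacks use far richer models (deep networks over camera, radar and lidar data), but the flow is the same: fused, time-aligned features in, a driving decision out.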
“It’s no secret that AI and ML algorithms are enhancing sensor fusion, but it is still at a nascent stage,” said Ron Lowman, strategic marketing manager for IoT at Synopsys. The fundamental challenge remains the underlying software, according to Lowman.
Designers still need to figure out where to run their software and navigate complex algorithms and concepts to achieve end-to-end implementation, while also accounting for miniaturization. “Just like how cell phones miniaturized in past decades, sensors using AI and ML technologies will face a similar trend and challenge going forward,” he said.
TI expressed similar views regarding the need for robust software algorithms to ensure accurate models of vehicle surroundings. “As more and more sensors are added to the system, the algorithm needs to be refined and improved so that the overall decision process is improved, and the correct actions can be taken to solve a problem or overcome a situation identified by the sensors.”
The future of sensor fusion
Emerging ADAS and AV designs demand data fusion and perception solutions that deliver better detection rates and fewer false alarms than legacy solutions. That, in turn, requires specialized hardware, especially accelerators that let software algorithms process sensor data quickly and precisely.
“As with any technology, corresponding chip architectures will change iteratively, and we expect to see continued development over generations,” Synopsys’ Lowman said.
As Lowman noted, sensor fusion has been discussed for years, and today we’re seeing it implemented in far more complex sensing applications in ADAS and AVs.
“We are also seeing the trend of multiple sensors being integrated into different solutions,” he added. “While there are still some design challenges with addressing voltage and incorporating emerging technologies, we’ve seen a lot of progress along the way and expect the push for intelligent sensors to continue.”

Hardware and software components go hand in hand in sensor fusion for ADAS and AV systems, and AI and ML technologies will likely be at the heart of sensor fusion hardware and software designs. Power consumption will be another critical design consideration for sensor fusion, alongside the need to bolster computational capability.
ADAS and AV designs are progressing rapidly, and sensor fusion is a vital part of that design equation. The addition of AI and ML technologies makes sensor fusion an important area to watch in the years ahead.