SENSOR FUSION
Sensor fusion is the computational methodology of combining measurements from multiple sensors so that they jointly provide more information about the measured system than any single sensor alone.
The main objective of a sensor fusion system is to take measurements from different sensors and estimate or infer one or more quantities of interest. A typical system uses one or more sensors to measure observable quantities, constructs models that relate those observations to the quantity of interest, and applies an estimation algorithm that combines the models with the measured data to produce the estimate.
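As a minimal sketch of this model-plus-estimator idea, consider two sensors measuring the same scalar quantity with independent, zero-mean Gaussian noise (the readings and variances below are illustrative assumptions, not drawn from the sources): weighting each reading by the inverse of its noise variance gives the maximum-likelihood fused estimate, and the fused variance is smaller than either sensor's alone.

```python
def fuse_two_sensors(z1, var1, z2, var2):
    """Fuse two noisy readings of the same scalar quantity.

    Each reading is weighted by the inverse of its noise variance,
    which is the maximum-likelihood estimate under independent,
    zero-mean Gaussian noise. The fused variance is always smaller
    than either input variance.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Illustrative example: a noisy ultrasound range reading fused with a
# more precise lidar range reading of the same target.
estimate, variance = fuse_two_sensors(z1=10.3, var1=0.25, z2=10.1, var2=0.04)
print(f"fused range = {estimate:.3f} m, variance = {variance:.4f}")
```

Note that the fused variance (about 0.034 here) is below the better sensor's 0.04: fusion improves on even the best individual sensor.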
Sensor fusion combines multiple sensor types. Common examples include:
- Accelerometers (measure acceleration, including gravity): inertial navigation, activity tracking, screen rotation.
- Gyroscopes (measure angular velocity): inertial navigation, activity tracking.
- Magnetometers (measure magnetic field strength): inertial navigation, digital compasses, object tracking.
- Radar (measures range, bearing, and speed): target tracking, autonomous vehicles.
- Lidar (measures range and bearing as 3D point clouds): target tracking, autonomous vehicles, robotics.
- Ultrasound (measures range): robotics.
- Cameras (record visual scenes): security systems, autonomous vehicles, robotics.
- Barometers (measure air pressure): inertial navigation, autonomous vehicles, robotics.
- GNSS receivers (measure position): autonomous vehicles, aerospace applications.
- Strain gauges (measure strain): condition monitoring, scales.
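As one concrete pairing from this list, a complementary filter is a simple, widely used way to fuse a gyroscope with an accelerometer: the gyroscope gives a fast but drifting angle estimate, while the accelerometer's gravity reference is noisy but drift-free. The following is a hedged sketch with illustrative sample values:

```python
import math

def complementary_filter(pitch, gyro_rate, accel, dt, alpha=0.98):
    """One update step of a complementary filter for pitch.

    gyro_rate : pitch rate from the gyroscope (rad/s); low noise but drifts
    accel     : (ax, ay, az) accelerometer reading (m/s^2); noisy but
                anchored to gravity, so it does not drift
    alpha     : blend factor; high alpha trusts the gyro on short
                timescales and the accelerometer on long ones
    """
    ax, ay, az = accel
    # Drift-free (but noisy) pitch from the direction of gravity.
    accel_pitch = math.atan2(-ax, math.sqrt(ay**2 + az**2))
    # Fast (but drifting) pitch from integrating the gyro.
    gyro_pitch = pitch + gyro_rate * dt
    # High-pass the gyro, low-pass the accelerometer.
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Illustrative 100 Hz updates with constant readings.
pitch = 0.0
for _ in range(100):
    pitch = complementary_filter(pitch, gyro_rate=0.1,
                                 accel=(-1.7, 0.0, 9.65), dt=0.01)
print(f"estimated pitch = {math.degrees(pitch):.1f} deg")
```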
The most popular sensor fusion combinations are camera plus lidar, camera plus radar, and general multi-sensor fusion.
Camera-lidar fusion combines visual data with 3D point clouds. It provides accurate depth perception and precise object detection, but it is limited by range and weather, in addition to the high cost of lidar.
Camera-radar fusion combines visual data with radio-wave sensing. It works well across weather conditions and measures distance and velocity accurately; its main drawbacks are lower resolution and limited object classification.
Multi-sensor fusion combines data from many sensor types at once. It is robust and accurate and can handle complex scenarios, at the cost of greater complexity and integration challenges.
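Under the hood, all three combinations typically feed a recursive estimator such as a Kalman filter. The sketch below is a textbook linear Kalman filter (the constant-velocity model and the noise levels are illustrative assumptions) fusing a camera-like position measurement with a radar-like velocity measurement to track one object along a single axis:

```python
import numpy as np

dt = 0.1                                   # update period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
H = np.eye(2)                              # we observe position and velocity
Q = np.diag([1e-3, 1e-2])                  # process noise (model uncertainty)
R = np.diag([0.5**2, 0.2**2])              # camera pos / radar vel noise

x = np.array([0.0, 0.0])                   # state: [position, velocity]
P = np.eye(2)                              # state covariance

def kalman_step(x, P, z):
    """One predict/update cycle fusing a [position, velocity] measurement."""
    # Predict forward with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: weigh measurement vs. prediction by their uncertainties.
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed illustrative measurements of a target moving at ~2 m/s.
rng = np.random.default_rng(0)
for k in range(50):
    true_pos, true_vel = 2.0 * k * dt, 2.0
    z = np.array([true_pos + rng.normal(0, 0.5),   # noisy camera position
                  true_vel + rng.normal(0, 0.2)])  # noisy radar velocity
    x, P = kalman_step(x, P, z)
print(f"estimated position = {x[0]:.2f} m, velocity = {x[1]:.2f} m/s")
```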
The advantages of sensor fusion are as follows:
- Improved accuracy and reliability: by combining inputs from multiple sensors, fusion algorithms can cross-validate readings and cancel out individual sensor errors (a gating sketch follows this list).
- Redundancy: if one sensor fails or is occluded, the system can continue to operate with only a graceful loss of accuracy.
- Expanded sensing coverage: different sensors perceive different aspects of the environment.
- Noise reduction: averaging across sensors yields smoother, more stable measurements over time.
- Complementary perception: fusing semantic data from cameras with spatial data from lidar or radar allows systems to detect and classify objects simultaneously.
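The cross-validation point can be made concrete with a simple consistency gate (an illustrative sketch; the 3-sigma threshold and the readings are assumptions): redundant sensors let the system discard a reading that disagrees strongly with its peers before fusing the rest.

```python
from statistics import median

def gated_fusion(readings, noise_std, gate=3.0):
    """Fuse redundant readings of one quantity, rejecting outliers.

    A reading is kept only if it lies within `gate` standard deviations
    of the median of all readings; the survivors are averaged.
    """
    m = median(readings)
    kept = [r for r in readings if abs(r - m) <= gate * noise_std]
    return sum(kept) / len(kept), kept

# Three range sensors agree near 12 m; a fourth is occluded and returns
# garbage. The gate removes it before averaging.
value, kept = gated_fusion([12.1, 11.9, 12.2, 47.0], noise_std=0.3)
print(f"fused = {value:.2f} m from {len(kept)} of 4 sensors")
```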
The disadvantages of sensor fusion are as follows:
- Integration effort: combining different sensor types requires careful calibration, time synchronization, and spatial alignment (a resampling sketch follows this list).
- Computational load: processing and fusing high-bandwidth data streams in real time demands significant computing resources, raising cost, power consumption, and latency.
- Sensor conflicts: when sensors disagree, the fusion algorithm must include logic to resolve the conflict.
- Drift: sensors drift over time due to temperature, vibration, and wear, so a fusion system needs periodic recalibration to stay accurate.
- Hardware cost: deploying multiple sensor types increases the cost of the system.
- Attack surface: multi-modal sensing systems capture richer environmental data and expose more interfaces, making them more vulnerable to adversarial attacks.
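Time synchronization in particular has a standard first-order remedy: resample one sensor stream onto the other's timestamps. A minimal sketch, assuming linear interpolation is adequate between samples (the rates and values are illustrative):

```python
import numpy as np

# A 10 Hz radar stream and a 30 Hz camera stream with unaligned clocks.
radar_t = np.arange(0.0, 1.0, 0.10)          # radar timestamps (s)
radar_v = 2.0 + 0.1 * np.sin(radar_t)        # radar range-rate readings
camera_t = np.arange(0.005, 1.0, 1.0 / 30)   # camera frame timestamps (s)

# Linearly interpolate radar readings onto the camera timestamps so each
# camera frame can be fused with a time-consistent radar value.
radar_on_camera = np.interp(camera_t, radar_t, radar_v)
print(radar_on_camera[:5])
```

Real systems also have to estimate clock offset and jitter between the streams; interpolation only handles differing sample times once the clocks agree.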
Sensor fusion finds widespread application in the following domains:
- Autonomous driving (lidar + camera + radar + GPS): object detection, navigation, and collision avoidance.
- Robotics (IMU + vision + force sensors): manipulation, SLAM, and locomotion.
- Healthcare and wearables (accelerometer + PPG + temperature): activity recognition and health monitoring.
- Aerospace (GPS + INS + barometer): navigation and attitude estimation.
- Smartphones (gyroscope + accelerometer + magnetometer): orientation, AR, and step counting (a heading sketch follows this list).
- Smart environments (acoustic + thermal + motion sensors): occupancy detection and energy management.
- Military and defense (radar + infrared + acoustic): surveillance and target tracking.
- Agriculture (multispectral imaging + lidar + soil sensors): crop health monitoring and precision farming.
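For the smartphone case, here is a hedged sketch of accelerometer-magnetometer fusion for compass heading (the readings are illustrative, and a real implementation would also calibrate for hard- and soft-iron distortion): the accelerometer supplies roll and pitch, which are used to rotate the magnetometer reading into the horizontal plane before computing heading.

```python
import math

def tilt_compensated_heading(accel, mag):
    """Compass heading (rad) from accelerometer + magnetometer.

    accel : (ax, ay, az) gravity direction in the device frame
    mag   : (mx, my, mz) magnetic field in the device frame
    """
    ax, ay, az = accel
    mx, my, mz = mag
    # Roll and pitch from the gravity vector.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay**2 + az**2))
    # Rotate the magnetometer reading into the horizontal plane.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-myh, mxh)

# Device tilted slightly, pointing roughly north-east (illustrative).
heading = tilt_compensated_heading(accel=(0.5, 0.3, 9.7),
                                   mag=(18.0, -14.0, -40.0))
print(f"heading = {math.degrees(heading) % 360:.0f} deg")
```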
The future of sensor fusion depends on advances in the following areas: AI-driven adaptive fusion, neuromorphic and event-based sensing, distributed and collaborative fusion, edge AI integration, self-calibrating systems, and quantum and nano-scale sensors.