What is Sensor Fusion: Working & Its Applications

At present, sensors are used in a wide range of applications, from smartphones, industrial control, automotive systems & climate monitoring to healthcare & oil exploration. Generally, an individual sensor has both advantages and disadvantages. Sensor fusion is used to overcome the limitations of individual sensors by combining information from several sensors to generate more consistent data with less uncertainty. This robust data can then be used to take actions & make decisions. Let us discuss sensor fusion and its types along with its working.


What is Sensor Fusion?

Sensor fusion definition: The process of merging data from different sensors to build a more accurate representation of an object or target is known as sensor fusion. This process simply allows you to combine inputs from several sensors to get a more accurate & complete understanding of a target's direction, surroundings & location.

Sensor Fusion

Sensor fusion is mainly necessary for resolving conflicts between different sensors, synchronizing sensors, predicting the future positions of objects, exploiting the strengths of heterogeneous sensors, detecting sensor malfunctions & meeting the safety requirements of automated driving.

How does Sensor Fusion Work?

Sensor fusion works by bringing the inputs of many sensors together to form a single image or model of the environment around a platform. The output is more accurate because it balances the strengths of the various sensors. The sensor fusion process uses software algorithms to provide a more comprehensive & precise environmental model; it is a complex method of collecting, filtering & aggregating sensor information to maintain the environmental awareness required for smart decision making.
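As a concrete illustration of how two inputs can be balanced by their strengths, here is a minimal sketch in standard C++ (not from any particular library; the sensors, readings & noise figures are made-up assumptions). It fuses two noisy measurements of the same distance by inverse-variance weighting, so the less noisy sensor contributes more to the fused estimate.

#include <iostream>

// Fuse two measurements of the same quantity by weighting each one
// by the inverse of its variance: the sensor with less noise dominates.
double fuse(double z1, double var1, double z2, double var2) {
    double w1 = 1.0 / var1;
    double w2 = 1.0 / var2;
    return (w1 * z1 + w2 * z2) / (w1 + w2);
}

int main() {
    // Hypothetical readings of the same distance (meters):
    // a noisier ultrasonic sensor and a cleaner LIDAR.
    double ultrasonic = 2.10;  // variance 0.04 m^2
    double lidar = 2.02;       // variance 0.01 m^2
    std::cout << "Fused distance: "
              << fuse(ultrasonic, 0.04, lidar, 0.01) << " m\n";  // ~2.04 m
    return 0;
}

This static weighting is the simplest form of fusion; the Kalman filter discussed later generalizes the same idea to measurements that arrive over time.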

Types of Sensor Fusion

There are three types of sensor fusion: competitive, complementary and cooperative. Each type is discussed below.

Types of Sensor Fusion

In complementary sensor fusion, the sensors do not depend directly on each other; however, their outputs can be merged to form a more complete image of the phenomenon under observation. This resolves the incompleteness of individual sensor information.


The best example of this configuration is the use of several cameras, each monitoring a distinct part of a room. Complementary data fusion is usually very simple, since the sensors' data can simply be appended to each other.

In competitive sensor fusion, each sensor delivers an independent measurement of the same property. This type of fusion is mainly used to build robust & fault-tolerant systems. The best example is noise reduction achieved by merging two overlapping camera images.

A cooperative sensor fusion network uses the data provided by two independent sensors to derive information that would not be obtainable from either sensor alone. The best example of this is stereoscopic vision, which combines two-dimensional images taken by two cameras at slightly different viewpoints to derive a three-dimensional image of the observed scene.
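To make the stereoscopic example concrete, here is a small sketch in standard C++ (the focal length, baseline & disparity values are assumptions for illustration) using the standard rectified-stereo relation depth = focal length x baseline / disparity. Neither camera alone can measure depth; only the pair can.

#include <iostream>

// Cooperative fusion: depth is derived from TWO cameras together.
// For a rectified stereo pair: depth = f * B / d, where
// f = focal length in pixels, B = baseline between cameras, d = disparity.
double depthFromDisparity(double focalPx, double baselineM, double disparityPx) {
    if (disparityPx <= 0) return -1.0;  // point at infinity or no match found
    return focalPx * baselineM / disparityPx;
}

int main() {
    // Hypothetical calibration values
    double f = 700.0;  // focal length in pixels
    double B = 0.12;   // 12 cm between the two cameras
    double d = 42.0;   // measured disparity in pixels
    std::cout << "Depth: " << depthFromDisparity(f, B, d) << " m\n";  // = 2.0 m
    return 0;
}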

Sensor Fusion Algorithms

Sensor fusion algorithms are mainly used by data scientists to combine data within sensor fusion applications. These algorithms process all sensor inputs & generate output with high reliability & accuracy, even when individual measurements are unreliable. There are different types of sensor fusion algorithms, which are discussed below.

Central Limit Theorem

The CLT or central limit theorem is a statistical concept concerning the sampling distribution of the mean, where the mean refers to the average value of a collection of samples. It states that the distribution of sample means approaches a normal distribution (a bell curve) as the sample size increases, regardless of the shape of the original population distribution.

Central Limit Theorem

For example, suppose we have two sensors, an ultrasonic sensor & an IR sensor. As we take more samples of their readings, the distribution of the sample averages looks more & more like a bell curve, and its center approaches the set's true average value. The closer we get to the true average value, the less noise enters the sensor fusion algorithm.
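The following short simulation in standard C++ (with made-up noise levels standing in for the ultrasonic & IR sensors, and an assumed true distance of 1 m) shows this effect: as the number of pooled readings grows, the sample mean settles ever closer to the true value.

#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(42);
    double trueDistance = 1.0;  // meters (assumed ground truth)
    // Hypothetical noise: ultrasonic sigma = 0.05 m, IR sigma = 0.03 m
    std::normal_distribution<double> ultrasonic(trueDistance, 0.05);
    std::normal_distribution<double> ir(trueDistance, 0.03);

    const int sizes[] = {1, 10, 100, 1000};
    for (int n : sizes) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += 0.5 * (ultrasonic(rng) + ir(rng));  // pool both sensors
        // By the CLT, the sample mean clusters ever more tightly
        // around trueDistance as n grows.
        std::cout << "n = " << n << "  mean = " << sum / n << "\n";
    }
    return 0;
}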

Kalman Filter

A Kalman filter is a sensor fusion algorithm that uses data inputs from different sources to estimate unknown values; it is frequently used in navigation & control technology. These filters can estimate unknown values more accurately than predictions based on a single measurement technique.

Kalman Filter

Kalman filters are the most frequently used sensor fusion algorithms & provide a good basis for understanding the theory itself. Their most common applications are in positioning & navigation technology. Since Kalman filtering is recursive, knowing only the last estimated position of a car & its speed is enough to predict its present & future state.
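Here is a minimal one-dimensional sketch in standard C++ (the car's speed, time step & noise variances are illustrative assumptions, and the full matrix form is omitted) showing the two recursive steps: predict the new position from the last estimate & the speed, then correct the prediction with a noisy measurement, weighted by the Kalman gain.

#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(7);
    std::normal_distribution<double> noise(0.0, 2.0);  // measurement noise, sigma = 2 m

    double x = 0.0;        // state estimate: car position (m)
    double P = 10.0;       // variance of the estimate
    const double v = 10.0; // assumed known speed, m/s
    const double dt = 1.0; // time step, s
    const double Q = 0.5;  // process noise variance (model uncertainty)
    const double R = 4.0;  // measurement noise variance (sigma^2)

    for (int k = 1; k <= 5; k++) {
        // Predict: last position + speed * time, uncertainty grows
        x = x + v * dt;
        P = P + Q;
        // Measure: simulated noisy reading of the true position (10 m per step)
        double z = 10.0 * k + noise(rng);
        // Update: blend prediction and measurement via the Kalman gain
        double K = P / (P + R);
        x = x + K * (z - x);
        P = (1.0 - K) * P;
        std::cout << "step " << k << "  measured " << z
                  << "  estimate " << x << "\n";
    }
    return 0;
}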

Convolutional Neural Networks

This type of fusion algorithm can process several channels of sensor data simultaneously. By combining this data, it can produce classification results based on image recognition. For instance, a robot can recognize traffic signs at a distance using convolutional-neural-network-based algorithms applied to its sensor data. A sensor fusion system with a convolutional neural network (CNN) has also been used in healthcare applications to monitor transition movements.

Convolutional Neural Networks
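A full CNN is too large for a short listing, but the sketch below in standard C++ (the signal & the hand-picked kernel are illustrative assumptions; real networks learn their kernel weights from data) shows the core operation a CNN repeats at scale: sliding a kernel over a channel of sensor data and applying a nonlinearity.

#include <algorithm>
#include <iostream>
#include <vector>

// The building block of a CNN: 1-D convolution of a signal with a kernel,
// followed by a ReLU nonlinearity.
std::vector<double> convReLU(const std::vector<double>& signal,
                             const std::vector<double>& kernel) {
    std::vector<double> out;
    for (size_t i = 0; i + kernel.size() <= signal.size(); i++) {
        double acc = 0.0;
        for (size_t j = 0; j < kernel.size(); j++)
            acc += signal[i + j] * kernel[j];
        out.push_back(std::max(0.0, acc));  // ReLU: keep positive responses
    }
    return out;
}

int main() {
    // Hypothetical sensor channel with a sharp step (e.g. an object edge)
    std::vector<double> signal = {0, 0, 0, 1, 1, 1, 0, 0};
    std::vector<double> edgeKernel = {-1.0, 1.0};  // responds to rising edges
    for (double v : convReLU(signal, edgeKernel))
        std::cout << v << " ";  // peaks where the signal rises
    std::cout << "\n";
    return 0;
}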

Bayesian Network

In sensor fusion, a Bayesian network is a graphical model that represents a set of variables & their conditional dependencies through a directed acyclic graph. For instance, a Bayesian network can represent the probabilistic relationships between diseases & symptoms, and the network can then be used to calculate the probabilities of the presence of various diseases given the observed symptoms.

Bayesian Network

Efficient algorithms can perform inference & learning within Bayesian networks. Bayesian networks that model sequences of variables, such as protein sequences or speech signals, are known as dynamic Bayesian networks. Generalizations of Bayesian networks that can represent & solve decision problems under uncertainty are known as influence diagrams.
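The disease-and-symptom example reduces to a single application of Bayes' rule; the sketch below in standard C++ uses made-up probabilities purely for illustration.

#include <iostream>

// Bayes' rule: P(disease | symptom)
//   = P(symptom | disease) * P(disease) / P(symptom)
int main() {
    // Hypothetical numbers, for illustration only
    double pDisease = 0.01;          // prior: 1% of the population
    double pSymGivenDis = 0.90;      // symptom appears in 90% of cases
    double pSymGivenHealthy = 0.05;  // 5% false-positive rate

    // Total probability of observing the symptom
    double pSymptom = pSymGivenDis * pDisease
                    + pSymGivenHealthy * (1.0 - pDisease);

    double posterior = pSymGivenDis * pDisease / pSymptom;
    std::cout << "P(disease | symptom) = " << posterior << "\n";  // ~0.154
    return 0;
}

Note how a symptom that is 90% reliable still yields only a ~15% posterior probability, because the disease is rare; this is exactly the kind of probabilistic bookkeeping a Bayesian network automates across many variables.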

MPU6050 Sensor Interfacing with Arduino Uno

The interfacing of the MPU6050 sensor with the Arduino UNO board is discussed below.

MPU6050 Sensor Interfacing with Arduino Uno

The MPU6050 is an inertial measurement unit (IMU) sensor that includes a MEMS accelerometer & a MEMS gyroscope on a single chip. In general, an IMU can include three kinds of sensors: accelerometers, gyroscopes and magnetometers. The main function of an IMU sensor is to measure specific force with the accelerometer, angular rate with the gyroscope & the magnetic field with the magnetometer.

These sensors are applicable in aircraft, self-balancing robots, spacecraft, satellites, drones, tablets, mobile phones, unmanned aerial vehicles (UAVs) for detection of position, guidance, detection of orientation, flight control & tracking motion.

Common IMU sensors are the MPU6050 & the ADXL335 accelerometer. The ADXL335 IMU sensor includes a 3-axis accelerometer, whereas the MPU6050 is a 6-axis motion-tracking device that combines a 3-axis accelerometer & a 3-axis gyroscope on a single chip.

The required components to interface the Arduino UNO with the MPU6050 sensor mainly include the MPU6050 sensor, connecting wires and an Arduino Uno board. The connections for this interfacing are as follows:

  • Connect the VCC pin of the MPU6050 sensor to the 5V pin of the Arduino.
  • Connect the SDA pin of the MPU6050 sensor to the Analog IN (A4) pin of the Arduino.
  • Connect the SCL pin of the MPU6050 sensor to the Analog IN (A5) pin of the Arduino.
  • Connect the GND pin of the MPU6050 sensor to the GND pin of the Arduino.
  • Connect the INT pin of the MPU6050 sensor to the Digital PWM pin 2 of the Arduino.

After that, the data from the MPU6050 sensor can be obtained over I2C using the Wire.h library, which is included with the Arduino IDE.

From the code below, we can obtain three main parameters: roll, pitch & yaw. When you are traveling in a car, motion can be described as right, left, forward & backward. For flying vehicles such as drones, however, this is not how motion is described; flight control boards use different terminology: pitch, yaw & roll.

The pitch axis is the axis that runs from the left side to the right side of the drone; rotation around this axis is known as pitch motion.

The roll axis is the axis that runs from the front of the drone to the back; rotation around this axis is known as roll motion.

The yaw axis is the axis that runs from the top of the drone to the bottom; rotation around this axis is known as yaw motion.

Project Code

#include <Wire.h>

const int MPU = 0x68; // MPU6050 I2C address

float AccX, AccY, AccZ;

float GyroX, GyroY, GyroZ;

float accAngleX, accAngleY, gyroAngleX, gyroAngleY, gyroAngleZ;
float roll, pitch, yaw;

float AccErrorX, AccErrorY, GyroErrorX, GyroErrorY, GyroErrorZ;
float elapsedTime, currentTime, previousTime;

int c = 0;

void setup() {

Serial.begin(19200);

Wire.begin(); // Initialize communication

Wire.beginTransmission(MPU); // Start communication with MPU6050 // MPU=0x68

Wire.write(0x6B); // Talk to the register 6B

Wire.write(0x00); // Make reset - place a 0 into the 6B register

Wire.endTransmission(true); //end the transmission

/*

// Configure Accelerometer Sensitivity - Full Scale Range (default +/- 2g)

Wire.beginTransmission(MPU);

Wire.write(0x1C); //Talk to the ACCEL_CONFIG register (1C hex)

Wire.write(0x10); //Set the register bits as 00010000 (+/- 8g full scale range)

Wire.endTransmission(true);

// Configure Gyro Sensitivity - Full Scale Range (default +/- 250deg/s)

Wire.beginTransmission(MPU);

Wire.write(0x1B); // Talk to the GYRO_CONFIG register (1B hex)

Wire.write(0x10); // Set the register bits as 00010000 (1000deg/s full scale)

Wire.endTransmission(true);

delay(20);

*/

// Call this function if you need to get the IMU error values for your module

calculate_IMU_error();

delay(20);

}

void loop() {

// === Read accelerometer data === //

Wire.beginTransmission(MPU);

Wire.write(0x3B); // Start with register 0x3B (ACCEL_XOUT_H)

Wire.endTransmission(false);

Wire.requestFrom(MPU, 6, true); // Read 6 registers total, each axis value is stored in 2 registers

//For a range of +-2g, we need to divide the raw values by 16384, according to the datasheet

AccX = (Wire.read() << 8 | Wire.read()) / 16384.0; // X-axis value

AccY = (Wire.read() << 8 | Wire.read()) / 16384.0; // Y-axis value

AccZ = (Wire.read() << 8 | Wire.read()) / 16384.0; // Z-axis value

// Calculating Roll and Pitch from the accelerometer data

accAngleX = (atan(AccY / sqrt(pow(AccX, 2) + pow(AccZ, 2))) * 180 / PI) - 0.58; // AccErrorX ~(0.58) See the calculate_IMU_error() custom function for more details
accAngleY = (atan(-1 * AccX / sqrt(pow(AccY, 2) + pow(AccZ, 2))) * 180 / PI) + 1.58; // AccErrorY ~(-1.58)

// === Read gyroscope data === //

previousTime = currentTime; // Previous time is stored before the actual time read

currentTime = millis(); // Current time actual time read

elapsedTime = (currentTime - previousTime) / 1000; // Divide by 1000 to get seconds

Wire.beginTransmission(MPU);

Wire.write(0x43); // Gyro data first register address 0x43

Wire.endTransmission(false);

Wire.requestFrom(MPU, 6, true); // Read 6 registers total, each axis value is stored in 2 registers

GyroX = (Wire.read() << 8 | Wire.read()) / 131.0; // For a 250deg/s range we have to divide first the raw value by 131.0, according to the datasheet

GyroY = (Wire.read() << 8 | Wire.read()) / 131.0;

GyroZ = (Wire.read() << 8 | Wire.read()) / 131.0;

// Correct the outputs with the calculated error values

GyroX = GyroX + 0.56; // GyroErrorX ~(-0.56)

GyroY = GyroY - 2; // GyroErrorY ~(2)

GyroZ = GyroZ + 0.79; // GyroErrorZ ~ (-0.8)

// Currently the raw values are in degrees per second (deg/s), so we need to multiply by seconds (s) to get the angle in degrees

gyroAngleX = gyroAngleX + GyroX * elapsedTime; // deg/s * s = deg

gyroAngleY = gyroAngleY + GyroY * elapsedTime;

yaw = yaw + GyroZ * elapsedTime;

// Complementary filter - combine accelerometer and gyro angle values

roll = 0.96 * gyroAngleX + 0.04 * accAngleX;

pitch = 0.96 * gyroAngleY + 0.04 * accAngleY;

// Print the values on the serial monitor

Serial.print(roll);

Serial.print("/");

Serial.print(pitch);

Serial.print("/");

Serial.println(yaw);

}

void calculate_IMU_error() {

// We can call this function in the setup section to calculate the accelerometer and gyro data error. From here we get the error values used in the above equations, printed on the Serial Monitor.

// Note that we should place the IMU flat in order to get the proper values, so that we then get the correct error values

// Read accelerometer values 200 times

while (c < 200) {

Wire.beginTransmission(MPU);

Wire.write(0x3B);

Wire.endTransmission(false);

Wire.requestFrom(MPU, 6, true);

AccX = (Wire.read() << 8 | Wire.read()) / 16384.0 ;

AccY = (Wire.read() << 8 | Wire.read()) / 16384.0 ;
AccZ = (Wire.read() << 8 | Wire.read()) / 16384.0 ;

// Sum all readings

AccErrorX = AccErrorX + ((atan((AccY) / sqrt(pow((AccX), 2) + pow((AccZ), 2))) * 180 / PI));

AccErrorY = AccErrorY + ((atan(-1 * (AccX) / sqrt(pow((AccY), 2) + pow((AccZ), 2))) * 180 / PI));

c++;

}

//Divide the sum by 200 to get the error value

AccErrorX = AccErrorX / 200;

AccErrorY = AccErrorY / 200;

c = 0;

// Read gyro values 200 times

while (c < 200) {

Wire.beginTransmission(MPU);

Wire.write(0x43);

Wire.endTransmission(false);

Wire.requestFrom(MPU, 6, true);

GyroX = Wire.read() << 8 | Wire.read();

GyroY = Wire.read() << 8 | Wire.read();

GyroZ = Wire.read() << 8 | Wire.read();

// Sum all readings

GyroErrorX = GyroErrorX + (GyroX / 131.0);

GyroErrorY = GyroErrorY + (GyroY / 131.0);

GyroErrorZ = GyroErrorZ + (GyroZ / 131.0);

c++;

}

//Divide the sum by 200 to get the error value

GyroErrorX = GyroErrorX / 200;

GyroErrorY = GyroErrorY / 200;

GyroErrorZ = GyroErrorZ / 200;

// Print the error values on the Serial Monitor

Serial.print("AccErrorX: ");

Serial.println(AccErrorX);

Serial.print("AccErrorY: ");

Serial.println(AccErrorY);

Serial.print("GyroErrorX: ");

Serial.println(GyroErrorX);

Serial.print("GyroErrorY: ");

Serial.println(GyroErrorY);

Serial.print("GyroErrorZ: ");

Serial.println(GyroErrorZ);

}

Output

From the above program, we can obtain the roll, pitch & yaw values. These are derived from the MPU6050's accelerometer & gyroscope readings, combined through the complementary filter in the code.

The applications of the MPU6050 sensor mainly include 3D mice, 3D remote controllers, robotic arm control, hand-gesture-controlled devices, self-balancing robots, etc.

Advantages

The advantages of sensor fusion include the following.

  • Sensor fusion provides accuracy over a broad range of operating conditions.
  • The integrated sensory data supports versatile, reliable & high-level recognition systems.
  • Sensor fusion can bring the inputs of several LIDARs, cameras & radars together to form a single model of the environment around a vehicle. The resulting model is more accurate because it balances the strengths of the different sensors.
  • Sensor fusion gives more comprehensive & dependable data as compared to individual sensors.
  • Sensor fusion can reduce operating costs by expanding the capabilities of devices such as UAVs (unmanned aerial vehicles) & robots with autonomous features.
  • Sensor fusion mainly targets overcoming the limitations of individual sensors.
  • A sensor fusion system enhances the robustness of a lane detection system, making the system more consistent.

Applications

The applications of sensor fusion include the following.

  • Sensor fusion is used in Global Positioning System (GPS) & inertial navigation system (INS) where data of these systems can be fused with different techniques. For instance, the extended Kalman filter is used in determining an aircraft’s attitude with low-cost sensors.
  • Sensor fusion allows awareness of context, which has enormous potential for the IoT (Internet of Things).
  • It improves the performance of lane detection dramatically when a number of sensors are used & perception capacity is increased.

Thus, this is all about an overview of sensor fusion, including the different algorithms as well as the tools used for designing, testing & simulating systems that combine information from several sensors, such as active & passive radar, LIDAR, EO/IR, sonar, GPS & IMU, to maintain localization & situational awareness. Here is a question for you: what are the disadvantages of sensor fusion?