Sensor Fusion and Apple's Fall Detection Algorithm!


Hey friends, Happy Wednesday!

Last week, I visited Chinmaya Vidyalaya, where I did my schooling for a while, and went out bowling with friends.

Answer to the question I posed last week

Last week, we discussed sensor fusion but didn’t jump into what it actually does. So, I planned for this newsletter issue to be about sensor fusion. Many 21st-century products use sensor fusion (phones, smartwatches, autonomous vehicles, and other robots). Hence, having a solid grasp of these concepts helps one understand and appreciate how gadgets work under the hood.

This forms the base for fully understanding the Fall Detection algorithm as well. The blog post on it is out, and you can read it using the link at the end of the newsletter.

I aim to write my newsletter issues in a way one can follow them while traveling on a bus, having a coffee, waiting for your food, etc. Let’s jump in!

PS - This is my longest newsletter issue yet, I guess!

Sensor Fusion

Imagine this situation: a car driving on a highway. GPS gives the car’s location, as we know, but it is only accurate to within several meters. Combine that data with information from cameras along the highway, though, and the car’s position can be pinned down much more precisely. This is basically sensor fusion - combining data from multiple sensors to provide a more accurate and dependable understanding of the system. While an individual sensor may provide useful data on its own, combining the output from multiple sensors at once gives us a much better model. Sensor data always contains noise, so sensor fusion algorithms take that noise into account while computing a dependable estimate.

What are ways in which sensor fusion can help?

  1. It can increase the quality of the data and reduce noise. Example - Use two accelerometers to capture the same data and average them to reduce the uncorrelated noise.
  2. It can increase reliability. Example - To measure airspeed when a plane is in the air, multiple airspeed sensors are used. If one of them fails or produces a value far from the other sensors, the data from the remaining sensors can still be used to find the airspeed. GPS data (used to measure ground speed) can also be fused with the airspeed sensor data to give a reliable estimate.
  3. It can estimate unmeasured states (states we don’t have a direct sensor for but whose values we are interested in). Example - One camera alone cannot find the distance of an object: a large object far away can occupy the same number of pixels as a small object closer to the camera. But by combining two cameras placed side by side with sensor fusion, the distance or depth of the object can be determined. The phones we use these days have two or more cameras, giving us a sense of depth.
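The first point above can be sketched in a few lines of Python. This is a toy simulation - the 9.81 m/s² “true” reading and the 0.5 noise level are made up for illustration:

```python
import random

def fused_reading(true_value, noise_std, n_sensors):
    """Average n independent noisy readings of the same quantity."""
    readings = [true_value + random.gauss(0, noise_std) for _ in range(n_sensors)]
    return sum(readings) / len(readings)

def std(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

random.seed(0)
# 10,000 trials each: one accelerometer vs. the average of two
single = [fused_reading(9.81, 0.5, n_sensors=1) for _ in range(10_000)]
fused = [fused_reading(9.81, 0.5, n_sensors=2) for _ in range(10_000)]

# Averaging two sensors shrinks uncorrelated noise by about sqrt(2)
print(std(single))  # roughly 0.5
print(std(fused))   # roughly 0.35
```

The spread shrinks by about √2 precisely because the two noise sources are uncorrelated; correlated errors (say, both sensors mis-calibrated the same way) would not average out.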

Sensor fusion in Phones?

Let’s take another example - measuring the orientation of a phone. The accelerometer (which measures acceleration) and the magnetometer (which measures the magnetic field and its direction) together give the absolute orientation of the phone (this needs a blog post of its own, but you can search online for now to see how it’s done; MATLAB has a video on the same). This pairing has common problems, like giving faulty values when you bring a magnet nearby. The gyroscope, on the other hand, gives the relative orientation of the phone by detecting changes in rotation from the angular rate. But if we use the gyroscope alone to determine orientation, we need additional data about the initial orientation to make it absolute, and the gyroscope’s value drifts over time. Combining these 3 sensors using sensor fusion is a popular technique to determine the orientation of the phone accurately.
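One simple way to do this kind of fusion is a complementary filter - a common textbook approach, though I’m not claiming it’s exactly what any particular phone’s OS runs. Here’s a toy 1-D version in Python, with made-up drift and tilt numbers:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse one tilt angle from two sources:
    - gyro_rate: angular rate from the gyroscope (smooth, but drifts)
    - accel_angle: absolute tilt from the accelerometer (drift-free, but noisy)
    alpha sets how much we trust the integrated gyro estimate.
    """
    gyro_angle = angle_prev + gyro_rate * dt  # integrate relative rotation
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Phone held still at a 30-degree tilt; gyro has a small constant bias (drift)
angle = 0.0
for _ in range(500):  # 5 seconds of samples at 100 Hz
    angle = complementary_filter(angle, gyro_rate=0.1, accel_angle=30.0, dt=0.01)

print(round(angle, 1))  # converges close to the true 30.0 despite the drift
```

The accelerometer’s absolute reading constantly “pulls” the estimate back, so the gyroscope’s drift never accumulates - exactly the weakness-cancelling behavior described above.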

Kalman Filter algorithm

Sensor fusion is basically a fancy way to average the sensor data and provide an estimate. Imagine a slider with the accelerometer + magnetometer providing orientation data on one end, and the gyroscope providing its own orientation data on the other end. The sensor fusion algorithm basically decides where to place the slider. If the algorithm trusts the gyroscope data more than the other, the slider moves toward it, meaning the final estimate will have a bias toward the gyroscope data. Different algorithms are used to decide this bias; one popular example is the Kalman filter. To understand how the algorithm works, watch this basic 7-minute video HERE.
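To make the “slider” idea concrete, here’s a minimal 1-D Kalman filter in Python (a sketch with invented measurement numbers, not the full multi-sensor version used for orientation). The Kalman gain k is the slider: near 1 it trusts the new measurement, near 0 it trusts the running prediction:

```python
def kalman_step(x_est, p_est, z, process_var, meas_var):
    """One predict/update cycle of a 1-D Kalman filter.
    x_est, p_est: previous estimate and its variance (uncertainty)
    z: the new sensor measurement
    """
    # Predict: the state stays put, but uncertainty grows with process noise
    p_pred = p_est + process_var
    # Update: the Kalman gain k blends prediction and measurement
    k = p_pred / (p_pred + meas_var)
    x_new = x_est + k * (z - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Noisy readings of a quantity whose true value is 10.0
measurements = [10.3, 9.8, 10.1, 9.9, 10.2, 10.0]
x, p = 0.0, 1000.0  # start with a wild guess and huge uncertainty
for z in measurements:
    x, p = kalman_step(x, p, z, process_var=1e-4, meas_var=0.04)

print(round(x, 1))  # settles near 10.0, while p (our uncertainty) keeps shrinking
```

Notice the gain is computed from the uncertainties, not hand-tuned: the filter itself decides how far to move the slider at every step.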

Feature Detection & Extraction

Let’s take the context of images. A feature is a piece of information about the content of an image; typically about whether a certain region of the image has certain properties. Features may be specific structures in the image such as points, edges, or objects. Feature detection includes methods for computing abstractions of image information and making local decisions about whether there is an image feature of a given type at that location or not. Once features have been detected, a local image patch around the feature can be extracted.
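As a toy illustration of that “local decision about whether a feature is present” - real pipelines use detectors like Harris corners or ORB; this made-up example just thresholds the horizontal intensity gradient:

```python
# A tiny grayscale "image": dark region on the left, bright on the right
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]

def edge_columns(img, threshold=5):
    """Return columns where intensity jumps sharply between x and x+1."""
    edges = set()
    for row in img:
        for x in range(len(row) - 1):
            if abs(row[x + 1] - row[x]) > threshold:  # big gradient = edge
                edges.add(x)
    return sorted(edges)

print(edge_columns(image))  # the vertical edge sits between columns 2 and 3
```

Once a feature like this edge is located, the patch of pixels around it is what gets extracted and described for later matching.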

Blog posts

My blog post S1E8 on Fall Detection is out! Read it here, happy learning!

I’m thinking of writing about an electric toothbrush or a sleep sensor for the next episode, S1E9, but I haven’t decided yet. Also, let me know if you’d like to read about a specific gadget/device.

Question of the week

In which aspect of gaming has sensor fusion been used predominantly in recent times? Hint - there are unicorn companies building their futures on it. Share your thoughts by replying to this email, and we can have a discussion. I’ll answer this in next week’s issue.

New introductions to the Newsletter

As I mentioned last week, I was planning to release new sub-sections and announcements in my newsletter this week, but I’m pushing that to probably next week. Sorry about that - I’m trying to make sure I write complete newsletter issues and blog posts while also doing my personal stuff during my vacation. Thanks a lot for taking the time to read!

Have a nice rest of the week, and take care!
Until next Wednesday,
Chendur


