#Robots need a better sense of touch to become dexterous.
We work on fixing this with our new sensor "Insight": it uses a tiny camera and deep learning to enable high-fidelity, all-around sensing of normal and shear forces.
Out today: nature.com/articles/s4225… 1/7
2/7 I am super proud of the rest of the team, @huanbo_sun and Katherine J. Kuchenbecker.
We set out to create a high-fidelity 3D tactile sensor that is robust, cheap, and easy to make.
Here is a 4-min video explaining how it works:
more below
3/7 A few details:
The mechanical design is pretty unique: we overmold a soft elastomer onto a thin, rigid skeleton.
-> it can withstand strong forces
-> it is very sensitive
-> the surface has high friction #Haptics #Elastomer #Overmolding
4/7 We create a light pattern inside the sensor that lets the camera detect tiny deformations from a single image.
A masked LED ring with different colors provides good illumination: the colors barely overlap, the image is bright enough, and nothing oversaturates. #PhotometricStereo #StructuredLight
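A toy sketch of the idea (not our actual pipeline; file names and the threshold are made up), using OpenCV: a deformation changes how much of each LED's colored light reaches each pixel, so comparing against a no-contact reference image localizes the touch.

```python
import cv2
import numpy as np

# Hypothetical file names; the real Insight pipeline differs.
reference = cv2.imread("no_contact.png").astype(np.float32)
frame = cv2.imread("touch.png").astype(np.float32)

# Per-pixel color change relative to the undeformed state.
diff = np.linalg.norm(frame - reference, axis=2)

# Threshold the change map (the value 20.0 is a guess).
ys, xs = np.nonzero(diff > 20.0)
if len(xs) > 0:
    print("contact centroid (px):", xs.mean(), ys.mean())
```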
5/7 We build a testbed ("we" meaning @huanbo_sun 😉) to collect data (camera images and external forces) at many locations with varying forces (200k data points in total). #datasets
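For illustration, here is how such image/force pairs might be organized for training, assuming PyTorch; the field names and shapes are assumptions, not our actual dataset layout.

```python
import torch
from torch.utils.data import Dataset

class TactileDataset(Dataset):
    """Pairs a camera image with the testbed's ground truth.

    Hypothetical format: the probe's 3D contact position and
    3D force vector per image. Not the real dataset layout.
    """

    def __init__(self, images, positions, forces):
        self.images = images                               # (N, 3, H, W)
        self.targets = torch.cat([positions, forces], 1)   # (N, 6)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.targets[idx]

# Example with random stand-in data (the paper uses 200k samples):
ds = TactileDataset(torch.randn(100, 3, 64, 64),
                    torch.randn(100, 3), torch.randn(100, 3))
image, target = ds[0]
```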
6/7 Then we train a ResNet to predict the force distribution.
It works like a charm. The precision is phenomenal: 0.4 mm spatial resolution and 0.03 N force-magnitude accuracy. #DeepLearning
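A minimal sketch of the idea (assuming PyTorch/torchvision, with a made-up 40x40 output grid; not our actual architecture): turn a ResNet into a force-map regressor by swapping its classification head for a regression head.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ForceMapNet(nn.Module):
    """ResNet backbone regressing a force map from one image.

    The (3, 40, 40) output (normal + two shear channels on a
    40x40 grid) is an assumption, not the paper's architecture.
    """

    def __init__(self, map_size=40):
        super().__init__()
        self.map_size = map_size
        self.backbone = resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features,
                                     3 * map_size * map_size)

    def forward(self, x):
        out = self.backbone(x)
        return out.view(-1, 3, self.map_size, self.map_size)

model = ForceMapNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step on random data to show the loop shape.
images, targets = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 40, 40)
loss = nn.functional.mse_loss(model(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```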
7/7 Here you see the precision for single-touch inference.
The hybrid soft-stiff structure barely affects performance, which is actually quite surprising.
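For illustration only, one way to score such single-touch precision from predicted vs. ground-truth contacts; the tensors below are random placeholders with scatter roughly matching the reported numbers, not our data.

```python
import torch

# Random stand-ins for predicted vs. true single-touch contacts;
# the 0.4 mm / 0.03 N scatter mimics the reported precision.
true_xy = 30 * torch.rand(1000, 2)              # contact (x, y) in mm
pred_xy = true_xy + 0.4 * torch.randn(1000, 2)
true_f = 2 * torch.rand(1000)                   # force magnitude in N
pred_f = true_f + 0.03 * torch.randn(1000)

loc_err = (pred_xy - true_xy).norm(dim=1).mean()
force_err = (pred_f - true_f).abs().mean()
print(f"mean localization error: {loc_err:.2f} mm")
print(f"mean force error: {force_err:.3f} N")
```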
Here is the PDF with supplementary material etc.: rdcu.be/cHCl9
Many thanks to @MPI_IS for the support, and to @NatMachIntell.