Since my last experiment with a #LeapMotion mounted on the #OculusQuest, streaming USB data to a PC to get hand tracking back through a WebSocket, #Oculus has opened its Hand API. So I started thinking about other things to do with this configuration.
I have enhanced the mount a little, using 3D printing and some scratch-built parts, as you can see in the image below. It is pretty easy to mount/unmount the Leap on the Quest, but difficult to position it exactly the way I want...
I was thinking about using the Leap in UVC mode to access the image data directly on the Quest. To save time, I used a Unity asset, USB Camera for Unity Android by Chaosikaros, and after some #OpenGL / FBO problems I was able to access the YUY2 image interpretation.
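For illustration, here is how the same raw YUY2 stream can be grabbed outside Unity with OpenCV, in the spirit of Leap's LeapUVC examples (the device index and the 640x480 mode are my assumptions):

```python
# A minimal capture sketch, assuming the Leap shows up as UVC device 0 and
# supports a 640x480 mode (both assumptions; the thread does this step
# inside Unity with the "USB Camera for Unity Android" asset instead).
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)   # keep the raw YUY2 bytes untouched

ok, frame = cap.read()                 # raw interleaved stereo data, 2 bytes/pixel
```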
But the really difficult part was accessing the raw bytes to compute the left and right images. After some debugging and some digging into the RenderScript pipeline, I was able to display it on the Quest in monoscopic mode (a quad showing the Leap's left camera image in front of the player's head).
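Continuing the sketch above: assuming the byte layout documented in the LeapUVC project (left camera on the even bytes, right camera on the odd bytes, with "YUY2" being just how UVC labels the 2-bytes-per-pixel stream), splitting the frame is a cheap reshape:

```python
# De-interleaving sketch, assuming LeapUVC's byte layout: the two bytes of
# each "YUY2 pixel" are really one left-camera and one right-camera sample.
import numpy as np

h, w = 480, 640                         # assumed sensor mode, see above
raw = np.asarray(frame, dtype=np.uint8).reshape(h, w, 2)
left  = raw[:, :, 0]                    # even bytes -> left IR camera
right = raw[:, :, 1]                    # odd bytes  -> right IR camera
```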
After that, I was able to display the undistorted images to each eye and play with some Leap parameters such as exposure, gain, linear space, ...
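As a sketch of the undistortion step: the intrinsics below are hypothetical placeholders (the real per-camera calibration can be read back from the device), so the numbers are purely illustrative:

```python
# Undistortion sketch. K and D are hypothetical placeholders; real values
# would come from the device calibration, not from this example.
import cv2
import numpy as np

K = np.array([[240.0,   0.0, 320.0],
              [  0.0, 240.0, 240.0],
              [  0.0,   0.0,   1.0]])        # illustrative camera matrix
D = np.array([-0.30, 0.08, 0.0, 0.0])        # illustrative distortion coeffs

map1, map2 = cv2.initUndistortRectifyMap(K, D, None, K, (640, 480), cv2.CV_32FC1)
left_undistorted  = cv2.remap(left,  map1, map2, cv2.INTER_LINEAR)
right_undistorted = cv2.remap(right, map1, map2, cv2.INTER_LINEAR)
# In reality each camera has its own calibration; one shared map is used
# here only to keep the sketch short.
```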
Now I'm working on how to merge the real and virtual worlds, and that's not easy because of the alignment problem described here: blog.leapmotion.com/alignment-prob… In short, there are physical (Leap), virtual (Unity) and biological (eye) cameras, whose respective separations are the ICD, VCS and IPD. The Leap has an ICD of 40 mm, while humans have an IPD between 54 and 68 mm.
It seems that aligning ICD and VCS (physical and virtual) works better, but I'm still not satisfied with the current result on my Quest: eye accommodation feels strange and doesn't always work...
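To put a number on the mismatch (the ICD and the IPD range are from the thread; the 63 mm average IPD is my assumption):

```python
# Back-of-the-envelope mismatch between the camera baseline and the eyes.
ICD = 40.0    # mm, Leap inter-camera distance (from the thread)
IPD = 63.0    # mm, assumed average human IPD (thread gives 54-68 mm)

# Matching the virtual camera separation (VCS) to the ICD leaves each
# virtual eye short of the real eye position by half the difference:
per_eye_offset = (IPD - ICD) / 2.0   # = 11.5 mm per eye
print(f"per-eye mismatch: {per_eye_offset} mm")
```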
My ideal goal is to have something similar to the Quest's built-in passthrough. But it seems they use some kind of depth mapping to correctly distort the merged camera image. For now, I will probably just try to get something good enough, even if it isn't perfect.
After that, I will work on timewarping to keep the passthrough images in sync with the virtual world, based on my previous Leap Motion work. Maybe later, I will try to compute a disparity map, or even a depth map, from the stereo images to enable occlusion between the real and virtual worlds.
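A possible first pass at that disparity map, using OpenCV's block matcher on the undistorted pair from the earlier sketch (the parameters are untuned starting points):

```python
# First-pass disparity sketch with OpenCV's block matcher; the pair should
# be properly rectified beforehand for usable results.
import cv2

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left_undistorted, right_undistorted)
# compute() returns 16x fixed-point values; divide by 16.0 for pixel units.
```

StereoSGBM usually gives cleaner maps than StereoBM at a higher compute cost, which matters on the Quest's mobile hardware.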
I also really want to try passthrough on the #OculusQuest with a #ZedMini from @Stereolabs3D (stereolabs.com/zed-mini/) or a #StructureCore from @occipital (store.structure.io/buy/structure-…)... but not enough money for now, so maybe one day!