Bart Trzynadlowski
Jun 19 · 9 tweets · 3 min read
Let's talk about the most important feature of the Apple #VisionPro that is getting the least attention right now: high-quality spatial audio. Apple understands audio, and the VP of its Technology Development Group (TDG), Mike Rockwell, was formerly a VP at Dolby. Look at the size of those speaker drivers! (1/)
Audio is a huge part of our perception of space. The soundscape around us helps us localize objects both directly -- when they emit sound, esp. beyond our visual field -- and indirectly, when reflections give us a sense of the dimensions and composition of our 3D space. (2/)
Spatial audio is an important cue that grounds virtual objects in our space. This isn't just important for making a dinosaur in our room feel present. It helps orient abstract objects, like app UI, allowing a unified mental model of the real and virtual. Less mental work. (3/)
Like most HMDs, AVP has extra-aural speakers that sit above the ears and beam audio directionally into them. The spatiality of a sound comes from the relative delay and level difference with which it reaches our two biological stereo microphones ("ears"), plus the filtering done by their shape. (4/)
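To put a rough number on that delay cue, here's a back-of-the-envelope sketch (the head width and speed of sound are illustrative averages of mine, not anything Apple has published):

```swift
// Rough upper bound on the interaural time difference (ITD): the extra time
// sound needs to reach the far ear when it arrives from the side.
// The numbers are illustrative averages, not measurements.
let headWidth = 0.18        // meters
let speedOfSound = 343.0    // m/s
let maxITD = headWidth / speedOfSound
print("Max interaural delay ≈ \(maxITD * 1000) ms")  // ~0.5 ms
```

Half a millisecond or so is all the brain needs to decide which side a sound came from.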
Our heads and ears are all unique, and our brains have learned to interpret audio that passes through them. This is modeled by a head-related transfer function (HRTF). In a simplified top-down view, it is a function of the angle and distance from the sound source to each ear. (5/)
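To make that top-down picture concrete, here's a toy model of the idea: per-ear delay and gain as a function of azimuth and distance, using the classic Woodworth ITD approximation. Every name and constant below is my own illustrative stand-in, not Apple's implementation.

```swift
import Foundation

// Toy, top-down spatialization model: per-ear delay and gain from azimuth
// and distance. Woodworth-style ITD approximation; illustrative only.
struct ToyHRTF {
    let headRadius = 0.0875     // meters (average adult head)
    let speedOfSound = 343.0    // m/s

    // Interaural time difference in seconds for a source at `azimuth` radians
    // (0 = straight ahead, positive = to the listener's right).
    func interauralTimeDifference(azimuth: Double) -> Double {
        (headRadius / speedOfSound) * (azimuth + sin(azimuth))
    }

    // Crude per-ear gains: inverse-distance falloff plus a head-shadow term
    // that favors whichever ear faces the source.
    func gains(azimuth: Double, distance: Double) -> (left: Double, right: Double) {
        let falloff = 1.0 / max(distance, 0.1)
        let bias = 0.5 * (1.0 + sin(azimuth))   // 0...1, higher favors the right ear
        return (left: falloff * (1.0 - 0.3 * bias),
                right: falloff * (0.7 + 0.3 * bias))
    }
}

let hrtf = ToyHRTF()
let itd = hrtf.interauralTimeDifference(azimuth: .pi / 4)    // 45° to the right
let (left, right) = hrtf.gains(azimuth: .pi / 4, distance: 2.0)
print("ITD \(itd * 1000) ms, gain L \(left), gain R \(right)")
```

A personalized HRTF replaces crude formulas like these with measurements (or estimates) of your particular head and ears.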
Reportedly, AVP will allow for personalized HRTFs. I can't think of any other commercial system with this capability. Systems like HoloLens and Meta Quest use a generic HRTF. But wait, there's more! AVP takes things *even* further... (6/)
Sound is affected by the environment: reflected by surfaces and modulated by materials it passes through. With depth sensors and computer vision, AVP can build a simplified model of your space and perform audio ray-tracing from a virtual object to your ears! (7/)
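I don't know what Apple's renderer does internally, but here's a minimal sketch of the general idea using the classic image-source method: mirror the sound source across each detected surface, and the line from that mirrored source to the listener is one reflected path, with its own delay and gain. The room layout, absorption values, and names below are my own assumptions.

```swift
import simd

// Toy "audio ray tracing" via the image-source method. Illustrative only.
struct Surface {
    let point: SIMD3<Float>     // any point on the plane
    let normal: SIMD3<Float>    // unit normal
    let absorption: Float       // 0 = perfect mirror, 1 = fully absorbing
}

struct ReflectedPath {
    let length: Float           // meters, source -> surface -> listener
    let gain: Float             // crude energy after absorption and distance falloff
}

// Mirror the source across each surface; each mirrored ("image") source
// yields one first-order reflected path to the listener.
func firstOrderReflections(source: SIMD3<Float>,
                           listener: SIMD3<Float>,
                           surfaces: [Surface]) -> [ReflectedPath] {
    surfaces.map { surface -> ReflectedPath in
        let d = simd_dot(source - surface.point, surface.normal)
        let imageSource = source - 2 * d * surface.normal
        let length = simd_length(listener - imageSource)
        let gain = (1 - surface.absorption) / max(length * length, 0.01)
        return ReflectedPath(length: length, gain: gain)
    }
}

// Example: a source and listener above a floor, near one wall.
let paths = firstOrderReflections(
    source: SIMD3<Float>(1, 1.5, 0),
    listener: SIMD3<Float>(-1, 1.5, 0),
    surfaces: [
        Surface(point: SIMD3<Float>(0, 0, 0), normal: SIMD3<Float>(0, 1, 0), absorption: 0.2),   // floor
        Surface(point: SIMD3<Float>(0, 0, 2), normal: SIMD3<Float>(0, 0, -1), absorption: 0.5)   // wall
    ]
)
for path in paths {
    // Delay is just path length divided by the speed of sound.
    print("length \(path.length) m, delay \(path.length / 343.0) s, gain \(path.gain)")
}
```

Swap the hard-coded surfaces for planes detected by the headset's depth sensors and you have the gist of environment-aware reverb.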
All of this excites me for 3 reasons: 1) unrivalled immersion, 2) lower cognitive load when tracking 3D UX, and 3) it opens up some awesome pro/creator use cases that I'll talk about more next week! Where are my SFX and music designers at? The spatial future *needs* you! :) (8/8)
DISCLAIMER ADDENDUM: This and other threads are me speaking as an unaffiliated and independent AR developer, citing public info only. My knowledge on this is from WWDC, but I've leveraged spatial audio in my personal HoloLens and Quest work. Excited to hear for myself when it's out :)

More from @BartronPolygon

Jun 21
Apple #VisionPro eye-tracking mega-thread! I'm seeing some concern about eye fatigue. @EricPresidentVR brought up this legitimate worry. I think there is some misunderstanding of how AVP differs from other implementations. I'll also dive into some interesting eye UX work. (1/)
Eye gaze is tricky to use as an input because it is so fundamental to our bodies that we don't feel like we actively control it, and being made aware of it feels creepy and fatiguing. Doubly so because you literally cannot look away. (2/)
It suffers from the "Midas touch" problem, named after the mythical King Midas, who was granted his wish that anything he touched would turn to gold. Midas regretted it when he reached out to comfort his distraught daughter one day, turning her to lifeless gold. Oopsies! (3/)
Read 23 tweets
Dec 21, 2022
Natural language interfaces have truly arrived. Here's ChatARKit: an open source demo using #chatgpt to create experiences in #arkit. How does it work? Read on. (1/)
JavaScriptCore is used to create a JavaScript environment. User prompts are wrapped in additional descriptive text that informs ChatGPT of what objects and functions are available to use. The code it produces is then executed directly. You'll find this in Engine.swift. (2/)
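For readers unfamiliar with that pattern, here is a minimal sketch of the approach described above, not ChatARKit's actual Engine.swift; the exposed function, prompt wrapper text, and "generated" code string are illustrative stand-ins.

```swift
import JavaScriptCore

// Sketch: expose a native function to a JSContext, then run
// model-generated JavaScript directly. Illustrative names only.
let context = JSContext()!

// Native hook that generated code is allowed to call.
let createEntity: @convention(block) (String) -> Void = { name in
    print("Would search for and place a 3D asset named: \(name)")
}
context.setObject(createEntity, forKeyedSubscript: "createEntity" as NSString)

// Wrap the user's request with a description of the available API before
// sending it to the model (the ChatGPT call itself is omitted here).
let userPrompt = "place a tree frog on the table"
let wrappedPrompt = """
You can call createEntity(name) to place a 3D object in the scene. \
Respond with JavaScript only. Request: \(userPrompt)
"""
print("Prompt sent to the model:\n\(wrappedPrompt)")

// Pretend this string came back from the model, then execute it.
let generatedCode = #"createEntity("tree frog");"#
context.evaluateScript(generatedCode)
```

The key design choice is that the model never touches the scene directly: it can only call whatever functions the host chooses to expose to the JavaScript environment.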
3D assets are imported from Sketchfab. When I say "place a tree frog...", it results in: createEntity("tree frog"). Engine.swift implements this and instantiates a SketchfabEntity that searches Sketchfab for "tree frog" and downloads the first model it finds. (3/)
Read 6 tweets
