Thrilled to share our latest work on @biorxivpreprint demonstrating the first real-time closed-loop ultrasonic brain-machine interface (#BMI)! 🔊🧠-->🎮🖥️
Paper link - biorxiv.org/content/10.110…

A #tweetprint. 🧵1/n [Figure: Overview of the ultrasonic brain-machine interface]
First, this work would not have been possible without co-first-author @SumnerLN; co-authors @DeffieuxThomas, @GeelingC, @BrunoOsmanski, and Florian Segura; and PIs Richard Andersen, @mikhailshapiro, @TanterM, @VasileiosChris2, and Charles Liu. (2/n)
Brain-machine interfaces (BMIs) can be transformative for people with paralysis from neurological injury/disease. BMIs translate brain signals into computer commands, enabling users to control computers, robots, and more – with nothing but thought.
bit.ly/3EAgl94
(3/n)
However, state-of-the-art electrode-based BMIs are highly invasive, limited to small regions of superficial cortex, and typically last only ~5 years. The next generation of BMIs should be longer-lasting, less invasive, and scalable to record from many, if not all, brain regions.

(4/n)
An emerging technology, functional ultrasound imaging (fUS), meets many of these attributes (see review - bit.ly/fUSreview). In our current study, we streamed 2 Hz real-time fUS images from the posterior parietal cortex in two monkeys as they made eye movements.

(5/n) [Figure: Anatomy visible from a single coronal imaging plane of functional ultrasound]
Monkeys performed a memory-guided eye movement task in eight directions. After collecting 100 trials to train the fUS-BMI, we enabled BMI control. Now, the monkey controlled the task cursor position with the fUS-BMI.

(6/n) [Figure: Training and control by the fUS-BMI.]
The fUS-BMI used 1.5 s of data from the movement-planning period. In BMI mode, the predicted movement direction directly controlled the task. If the prediction was correct, we added that trial’s data to the training set and retrained the decoder before the next trial.
(7/n) [Figure: fUS-BMI algorithm. Neurovascular signals recorded from parietal cortex.]
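The retrain-on-correct loop in tweet 7 can be sketched roughly as follows. This is a hypothetical minimal sketch: the paper's actual decoder, feature extraction, and task code are not in the thread, and `get_planning_window` is a stand-in that fakes 1.5 s of planning-period fUS data with Gaussian noise.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical stand-in for the real-time acquisition: returns a feature
# vector summarizing 1.5 s of planning-period fUS frames. Real features
# would come from the imaging stream, not simulated Gaussians.
rng = np.random.default_rng(0)
N_DIRECTIONS = 8

def get_planning_window(direction):
    return rng.normal(loc=direction, scale=2.0, size=64)

# Initial training set (the paper collects ~100 real trials first).
X_train = [get_planning_window(d) for d in range(N_DIRECTIONS) for _ in range(5)]
y_train = [d for d in range(N_DIRECTIONS) for _ in range(5)]

decoder = LinearDiscriminantAnalysis()
decoder.fit(X_train, y_train)

for trial in range(20):
    cued = trial % N_DIRECTIONS
    features = get_planning_window(cued)
    predicted = decoder.predict([features])[0]  # prediction drives the cursor
    if predicted == cued:
        # Only correct trials are appended, then the decoder is
        # retrained before the next trial, as described in the thread.
        X_train.append(features)
        y_train.append(cued)
        decoder.fit(X_train, y_train)
```

The key design point is the incremental loop: the training set grows only with trials the decoder got right, so the model adapts online without ingesting mislabeled data.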
The monkey successfully controlled the fUS-BMI in eight movement directions! Most errors neighbored the cued direction, keeping angular errors low.

(8/n) [Figures: Performance across the session, reaching significant decoding; confusion matrix showing misclassification errors.]
We used a 200 µm searchlight analysis to identify which brain regions drove the decoding performance. The most informative voxels were in the lateral intraparietal area (LIP), supporting its canonical role in planning eye movements.

(9/n) [Figure: Neurovascular image with searchlight analysis overlaid.]
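A searchlight analysis like the one above can be sketched like this. Everything here is a hypothetical illustration: the data are random stand-ins, the neighborhood radius is given in voxels rather than the paper's 200 µm, and the decoder choice is an assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Random stand-ins for per-trial fUS activity maps and the eight
# cued directions (balanced: five trials per direction).
rng = np.random.default_rng(1)
H, W = 12, 12
labels = np.tile(np.arange(8), 5)          # 40 trials, 8 directions
images = rng.normal(size=(labels.size, H, W))

radius = 1                                  # neighborhood half-width, in voxels
score_map = np.zeros((H, W))
for r in range(H):
    for c in range(W):
        r0, r1 = max(0, r - radius), min(H, r + radius + 1)
        c0, c1 = max(0, c - radius), min(W, c + radius + 1)
        patch = images[:, r0:r1, c0:c1].reshape(labels.size, -1)
        # Cross-validated decoding accuracy using only this neighborhood.
        score_map[r, c] = cross_val_score(
            LinearDiscriminantAnalysis(), patch, labels, cv=4
        ).mean()
# Voxels with high score_map values are the informative ones; in the
# paper these cluster in LIP.
```

Because each voxel's score reflects only its local neighborhood, the resulting map localizes where direction information lives rather than just whether it is decodable somewhere.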
But wait, there’s more! An ideal BMI needs minimal or no calibration, yet collecting 100 training trials takes 30–45 minutes. We developed a pretraining method to eliminate this training period entirely.

(10/n)
For each new session, we acquire a vascular image and align the previous sessions’ data to it. This lets us train the BMI using pre-recorded data. Here, we show alignment of data collected more than two months apart.

(11/n) [Figure: Across-session alignment accuracy.]
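The alignment step can be sketched with simple phase correlation on the vascular images. This is a hypothetical illustration using a random stand-in image and a purely translational, integer-pixel shift; the paper's actual registration method may differ.

```python
import numpy as np

def estimate_shift(ref, moving):
    # Phase correlation: the peak of the inverse FFT of the normalized
    # cross-power spectrum gives the integer-pixel offset of `moving`
    # relative to `ref`.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets larger than half the image size to negative shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(2)
day1 = rng.normal(size=(64, 64))                 # stand-in vascular image
day60 = np.roll(day1, (3, -5), axis=(0, 1))      # imaging plane moved slightly

# (dy, dx) is the shift that maps the old session onto the new one;
# applying it to day1-aligned training data would let a pretrained
# decoder run on the new session without recalibration.
dy, dx = estimate_shift(day60, day1)
```

Here `dy, dx` recovers `(3, -5)`, the shift applied above; in practice the anatomical vascular image serves as the registration target because it is stable across sessions.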
This method greatly reduces, and can even eliminate, the need for calibrating the BMI after the first day.

(12/n) [Figures: Performance with the pretrained fUS-BMI; confusion matrix showing performance using the pretrained model.]
This is just the beginning of the story. For a deeper dive, make sure to check out the full preprint - biorxiv.org/content/10.110…

(13/n)
Finally, thank you to the following institutions and programs for supporting me in this research -
@NINDSnews, @NatEyeInstitute, @CaltechN, @dgsomucla, @uclacaltechmstp, @ChenInstitute, and the Josephine de Karman Fellowship.

(14/14)

Thread by Whitney Griggs
