Tanishq Mathew Abraham Profile picture
May 30 β€’ 10 tweets β€’ 6 min read Twitter logo Read on Twitter
I'm really excited to share @MedARC_AI's first paper since our public launch πŸ₯³

πŸ§ πŸ‘οΈ MindEye!

Our state-of-the-art fMRI-to-image approach that retrieves and reconstructs images from brain activity!

Project page: medarc-ai.github.io/mindeye/
arXiv: arxiv.org/abs/2305.18274 Image
We train an MLP using contrastive learning to map fMRI signals to CLIP image embeddings.

The generated embeddings can be used for retrieval, & the exact original image can be retrieved among highly similar candidates, showing that the embeddings retain fine-grained information. Image
Scaling up the retrieval to a large database like LAION-5B allows MindEye to output realistic images from brain activity without using any generative model. Image
But we can do classic reconstruction too, with SOTA results!

For this purpose, we found it necessary to train a diffusion prior to further "align" the generated CLIP-fMRI embeddings with standard CLIP embeddings. Image
Once we obtain aligned CLIP image embeddings, we can pass it into any pretrained diffusion model that accepts CLIP image embeddings to perform reconstruction!

We find Versatile Diffusion gives best performance. Better image generation models in the future may give better recons! Image
Low-level features are also appropriately reconstructed by mapping the fMRI signals to Stable Diffusion VAE latents and using that as a starting point for img2img. Image
Using this dual pipeline approach, MindEye obtains SOTA results on both high-level and low-level metrics (table of results in preprint)!

Here is a comparison to previous methods in the literature: Image
I started this project about a year ago, and it originally started out in @laion_ai.

We were lucky that @humanscotti joined and took the lead on this project, he's done a great job moving this project forward!

Check out his thread on the paper:
This project was openly developed via volunteer contributions in the @MedARC_AI Discord server and GitHub.

Open-source/decentralized research initiatives have been successful in AI (@AiEleuther, @laion_ai, @openbioml, @ml_collective) & our project further demonstrates that! ImageImage
This isn't the end of our mind reading projects, we have lots of interesting ideas to explore in this space!

If you are interested in contributing, check out:

Discord server: discord.com/invite/CqsMthn…
More info about our mind reading projects: medarc-ai.github.io/mind-reading

β€’ β€’ β€’

Missing some Tweet in this thread? You can try to force a refresh
γ€€

Keep Current with Tanishq Mathew Abraham

Tanishq Mathew Abraham Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @iScienceLuvr

Mar 28
App-integrated LLMs can be jailbreaked:

@KGreshake showed how prompt injections can be incorporated in webpages or other content that may be retrieved by LLM systems to result in nefarious behavior.

Here, text is embedded in a webpage to direct BingChat to perform a scam.
Here is another example where an injection can be spread via email.
I think LLM applications are super exciting but we certainly should be cautious of any security concerns like this.

Paper: arxiv.org/abs/2302.12173
GitHub: github.com/greshake/llm-s…
Read 5 tweets
Mar 24
How does GPT-4 do in the medical domain?

I got to play around with its multimodal capabilities on some medical images!

Plus a recent Microsoft paper examined its text understanding and got SOTA results on USMLE medical exams!

A quick thread ↓
As I showed earlier, I had the chance last week to play around with GPT-4's multimodal capabilities:
I also tried some medical images too! Here I started with some histopathology. I passed in an H&E image of prostate cancer and asked GPT-4 to describe it. It knew it was an H&E image of glandular tissue but was unable to identify it as low grade prostate cancer. Image
Read 16 tweets
Mar 16
I got to try GPT-4's multimodal capabilities and it's quite impressive! A quick thread of examples...

Let's start out with solving a CAPTCHA, no big deal
It can explain memes quite well! Here it is explaining an AI-generated meme I shared recently.

(The AIs will create their own memes and explain it to us humans πŸ˜‚)
Here is another awesome example
Read 8 tweets
Mar 14
GPT-4 release
Med-PaLM2 announcement
PaLM API release
Claude API release Image
Oh I forgot ChatGLM! πŸ˜…
This meme also relevant today 🀣
Read 4 tweets
Feb 28
Claude, @AnthropicAI's powerful ChatGPT alternative, was trained with "Constitutional AI".

Constitutional AI is particularly interesting since it uses less human feedback than other methods, making it more scalable.

Let's dive into how Constitutional AI works in 13 tweets!
Constitutional AI (CAI) is based on:
1. Supervised Fine-Tuning (SFT)
2. Reinforcement Learning from Human Feedback (RLHF).

If you don't know how SFT & RLHF work, you should first check out my thread on the topic πŸ˜‰ (1/13)
The goal is to build AI assistants that follow certain "constitutional principles" to make models less harmful (generating offensive outputs, reinforcing social biases,etc.)

We can use AI feedback & supervision to follow these principles & limit the human feedback needed. (2/13)
Read 17 tweets
Feb 21
So, I've heard people say anyone could have built ChatGPT. I think this is disingenuous.

ChaGPT isn't just GPT-3 w/ a chat interface on top of it.

The closest base model on the OpenAI API is probably text-davinci-003, but it was only released a day before ChatGPT! (1/9) Image
Maybe someone could have created a model like text-davinci-003?

Well, ChatGPT/text-davinci-003 are trained with lots and lots of human feedback, which is why it does so well. That's not easy for anyone to obtain! (2/9)
OpenAI is clearly a leader in utilizing human feedback for improved models. They invented RLHF, one of the leading approaches, which powers ChatGPT.

On a related note, claiming OpenAI just scaled up existing work is ignoring OpenAI's expertise in utilizing human feedback. (3/9) Image
Read 10 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us on Twitter!

:(