Li Junnan (@LiJunnan0409) · May 12 · 8 tweets · 4 min read
A new member in the BLIP family: 🔥InstructBLIP🔥, a vision-language instruction tuning framework. InstructBLIP achieves SoTA zero-shot performance with various advantages over other multimodal models such as GPT-4!
Github: github.com/salesforce/LAV…
Paper: arxiv.org/abs/2305.06500
Our paper conducts a systematic study of vision-language instruction tuning. InstructBLIP substantially outperforms both BLIP-2 and the largest Flamingo on zero-shot evaluation. It also achieves SoTA fine-tuning performance when used as the model initialization on downstream tasks.
In addition, we introduce instruction-aware visual feature extraction, a new method that enables the model to extract informative features tailored to the given instruction, leading to enhanced generalization performance.
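To make "instruction-aware" concrete, here is a minimal PyTorch sketch (my own illustration, not the released implementation) of the idea: a Q-Former-style module receives the instruction tokens alongside its learnable queries, so the visual features it extracts change with the instruction. Names such as InstructionAwareQFormer and the layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class InstructionAwareQFormer(nn.Module):
    """Toy Q-Former-like module: learnable queries plus instruction tokens
    cross-attend to frozen image features, so the returned visual features
    are tailored to the given instruction."""
    def __init__(self, num_queries=32, dim=768, num_layers=2, num_heads=12):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.blocks = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, image_feats, instruction_embeds):
        # Concatenate queries with instruction embeddings; self-attention lets
        # the instruction steer which image information the queries pick up.
        b = image_feats.size(0)
        queries = self.queries.unsqueeze(0).expand(b, -1, -1)
        tgt = torch.cat([queries, instruction_embeds], dim=1)
        out = self.blocks(tgt=tgt, memory=image_feats)  # cross-attend to image features
        return out[:, : queries.size(1)]                # only query outputs go to the LLM

# Tiny smoke test with random tensors standing in for real encoders.
qformer = InstructionAwareQFormer()
image_feats = torch.randn(1, 257, 768)   # e.g. frozen ViT patch features
instruction = torch.randn(1, 12, 768)    # embedded instruction tokens
visual_tokens = qformer(image_feats, instruction)
print(visual_tokens.shape)               # torch.Size([1, 32, 768])
```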
We open-source a suite of InstructBLIP models using two families of LLMs: FlanT5 and Vicuna. Using our LAVIS library, you can run these models with two lines of code!
github.com/salesforce/LAV…
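For reference, the "two lines" with LAVIS look roughly like the sketch below, assuming the salesforce-lavis package is installed; the exact name/model_type strings are my best guess and should be checked against the LAVIS model zoo.

```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = "cuda" if torch.cuda.is_available() else "cpu"

# "Line 1": load the model and its image preprocessor.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct", model_type="vicuna7b", is_eval=True, device=device
)

# "Line 2": run an instruction against an image.
raw_image = Image.open("example.jpg").convert("RGB")
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
print(model.generate({"image": image, "prompt": "What is unusual about this image?"}))
```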
Great work from our intern @Wenliang_Dai at @SFResearch and collaborators @DongxuLi_ @AlbertBoyangLi @stevenhoi!
InstructBLIP demonstrates a variety of strong multimodal capabilities, including complex visual scene understanding and reasoning, knowledge-grounded image description, and multi-turn visual conversation. Check out this demo video!
InstructBLIP demonstrates strong visual reasoning on complex scenes, generalizing beyond its training data to out-of-distribution (OOD) images.


More from @LiJunnan0409

Feb 3
🔥BLIP-2🔥 demo is live! Come play with LLMs that can understand images and share your examples!
huggingface.co/spaces/Salesfo…
Project page: github.com/salesforce/LAV…
BLIP-2 knows mass–energy equivalence! More examples in the 🧵
BLIP-2 knows the landmarks of Singapore
How to get out of this house?
Jan 31
Can LLMs understand images? We introduce 🔥BLIP-2🔥, a generic and efficient vision-language pre-training strategy that bootstraps from frozen❄️image encoders and frozen❄️LLMs. BLIP-2 outperforms existing SoTAs with only 188M trainable parameters!
Github: github.com/salesforce/LAV…
BLIP-2 beats Flamingo on zero-shot VQAv2 (65.0 vs 56.3) and establishes a new SoTA on zero-shot captioning (121.6 CIDEr vs the previous best 113.2). Equipped with powerful LLMs (e.g. OPT, FlanT5), BLIP-2 also unlocks zero-shot instructed vision-to-language generation capabilities!
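As a concrete way to try the zero-shot VQA behavior, here is a minimal sketch using a BLIP-2 checkpoint through Hugging Face transformers; the model id Salesforce/blip2-opt-2.7b, the prompt format, and the local image path are my assumptions, not taken from the thread.

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to(device)

image = Image.open("merlion.jpg").convert("RGB")   # any local test image
prompt = "Question: which city is this landmark in? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)

out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```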
Why is BLIP-2 effective? Previous methods (e.g. Flamingo) use an image-to-text generative loss. However, a generative loss alone is insufficient to bridge the modality gap. We instead train a Querying Transformer (Q-Former) in two learning stages: representation learning and generative learning.
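To illustrate the two-stage recipe, here is a self-contained toy sketch with tiny stand-in modules (my own illustration, not the BLIP-2 training code): stage 1 teaches the Q-Former-like bridge to produce text-aligned visual features, and stage 2 trains only that bridge to feed a frozen LLM through a language-modeling style loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, vocab = 32, 100
image_encoder = nn.Linear(64, dim)       # stand-in for the frozen image encoder
llm_head = nn.Linear(dim, vocab)         # stand-in for the frozen LLM
qformer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
text_encoder = nn.Embedding(vocab, dim)  # toy text embedding used in stage 1

for module in (image_encoder, llm_head):
    for p in module.parameters():
        p.requires_grad = False          # both backbones stay frozen throughout

opt = torch.optim.AdamW(
    list(qformer.parameters()) + list(text_encoder.parameters()), lr=1e-3
)
images = torch.randn(8, 64)
texts = torch.randint(0, vocab, (8,))

# Stage 1: representation learning. A contrastive loss (a stand-in for the
# paper's objectives) teaches the bridge to produce text-aligned features.
for _ in range(10):
    v = F.normalize(qformer(image_encoder(images)), dim=-1)
    t = F.normalize(text_encoder(texts), dim=-1)
    loss = F.cross_entropy(v @ t.t() / 0.07, torch.arange(8))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: generative learning. The frozen "LLM" predicts the paired text from
# the visual prefix; gradients flow only into the small bridge.
for _ in range(10):
    loss = F.cross_entropy(llm_head(qformer(image_encoder(images))), texts)
    opt.zero_grad()
    loss.backward()
    opt.step()

trainable = sum(p.numel() for p in qformer.parameters())
print(f"trainable bridge parameters: {trainable}")  # tiny vs. the frozen backbones
```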
