Gabriele Berton
Postdoc @Amazon working on MLLMs - ex @CarnegieMellon @PoliTOnews @IITalk
Jun 27
Video-XL (CVPR25) is a really cool paper that enables video understanding (with a VLM) on hour-long videos

The idea is to extract visual tokens (individually from N frames of a video with a visual encoder), and then instead of passing all these tokens ... [1/4]

... to the LLM (which would blow up the memory if the sequence is too long), they sequentially compress them (by a factor of M) into smaller representations, in the form of a KV-cache [2/4]
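To make the mechanism concrete, here is a toy sketch of the idea (my own illustration, not Video-XL's actual learned compression): visual tokens are processed chunk by chunk, and each chunk is shrunk by a factor M before being appended to a growing cache.

```python
# Toy sketch (NOT Video-XL's exact mechanism): visual tokens from N frames are
# processed chunk by chunk, and each chunk is compressed by a factor M (here
# with simple average pooling as a stand-in for the paper's learned compression)
# before being appended to a growing cache.
import torch

def compress_chunk(chunk_tokens: torch.Tensor, M: int) -> torch.Tensor:
    """chunk_tokens: (T, D) -> (T // M, D), compressing by a factor of M."""
    T, D = chunk_tokens.shape
    T = (T // M) * M                      # drop the remainder for simplicity
    return chunk_tokens[:T].reshape(T // M, M, D).mean(dim=1)

def build_compressed_cache(frame_tokens: list[torch.Tensor], chunk_size: int, M: int):
    cache = []                            # plays the role of the compressed KV-cache
    buffer = []
    for tokens in frame_tokens:           # tokens of one frame: (tokens_per_frame, D)
        buffer.append(tokens)
        if len(buffer) == chunk_size:
            cache.append(compress_chunk(torch.cat(buffer), M))
            buffer = []
    if buffer:
        cache.append(compress_chunk(torch.cat(buffer), M))
    return torch.cat(cache)               # roughly N * tokens_per_frame / M entries

# e.g. 1,000 frames x 256 tokens, compressed 16x -> ~16k cached entries
frames = [torch.randn(256, 1024) for _ in range(1000)]
print(build_compressed_cache(frames, chunk_size=8, M=16).shape)
```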
Jun 10
Want to try a SOTA image localization model on your own images?

We'll be at #CVPR presenting a demo of MegaLoc!

With our demo you can localize photos from San Francisco using MegaLoc, a SOTA image localization model, and it works in real time!

MegaLoc is trained on ~10M images from 5 different datasets, combining best practices from Visual Place Recognition models.

It is SOTA on countless datasets on multiple tasks (landmark retrieval, VPR, visual localization), and is robust to OOD images like night and underwater!
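For anyone who wants to reproduce the retrieval step offline, here is a minimal sketch of a standard VPR pipeline. The encoder below is a hypothetical placeholder, not MegaLoc's actual API (grab the real model from its repository): extract one global descriptor per image, L2-normalize, and retrieve the database image with the highest cosine similarity.

```python
# Minimal Visual Place Recognition retrieval sketch. `load_encoder` is a
# hypothetical placeholder for whatever descriptor model you use (e.g. MegaLoc);
# the retrieval logic itself is the standard descriptor + nearest-neighbor setup.
import torch
import torch.nn.functional as F

def load_encoder() -> torch.nn.Module:
    # Placeholder: swap in the real MegaLoc model from its repository.
    return torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 512))

@torch.no_grad()
def extract_descriptors(model, images: torch.Tensor) -> torch.Tensor:
    # images: (B, 3, 224, 224) -> L2-normalized global descriptors (B, D)
    return F.normalize(model(images), dim=-1)

model = load_encoder().eval()
db_images = torch.randn(100, 3, 224, 224)     # database (e.g. San Francisco photos)
query_images = torch.randn(4, 3, 224, 224)    # the photos to localize

db_desc = extract_descriptors(model, db_images)
q_desc = extract_descriptors(model, query_images)

# Cosine similarity = dot product of normalized descriptors; top-1 is the match.
similarity = q_desc @ db_desc.T               # (4, 100)
best_match = similarity.argmax(dim=1)
print(best_match)                             # indices of the retrieved database images
```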
May 15
HuggingFace released a nice blog post about the current state of VLMs

Here's a summary, covering recent trends, specialized capabilities, agents, video LMs, new alignment techniques, and HF's fav VLMs [1/8]

Recent trends:
1) any-to-any models, with multi-modal input and output. An example is Qwen 2.5 Omni
2) reasoning models: pretty much a ViT with a reasoning LLM on top. Some models can reason and crop the image accordingly, o3 style
3) Small VLMs, like HF's SmolVLM2, with ~1B parameters (see the sketch below) [2/8]
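As a pointer for the small-VLM trend, here is a rough usage sketch. The checkpoint name and the image-text-to-text classes are assumptions based on the usual transformers patterns; double-check the SmolVLM2 model card for the exact API.

```python
# Hedged sketch: loading a small VLM like SmolVLM2 with transformers.
# Checkpoint name and API are assumed; verify on the model card before use.
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/photo.jpg"},  # any image URL
        {"type": "text", "text": "Describe this image."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```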
May 14
While everyone is hating on Meta for the Llama 4 debacle, they dropped some very impressive CLIP-like models and VLMs

They came out in two twin papers, released on the same day

Here's a summary, some honest thoughts, and some things I personally liked and disliked about them [1/n]
Results are impressive. In both papers.

The CLIP-like models are an engineering feat, trained with standard CLIP-style image-text alignment with known best practices: progressively increasing resolution, LAMB optimizer, strong augmentation, and lots of data. [2/n]
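For reference, "standard CLIP-style image-text alignment" boils down to a symmetric contrastive (InfoNCE) loss over matched image-text pairs. Below is a minimal sketch of that generic objective, not Meta's actual training code.

```python
# Minimal symmetric CLIP-style contrastive loss: matched image/text pairs on the
# diagonal are pulled together, all other pairs in the batch are pushed apart.
import torch
import torch.nn.functional as F

def clip_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07):
    # image_emb, text_emb: (B, D), one matched pair per row
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature      # (B, B) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)        # image -> matching text
    loss_t2i = F.cross_entropy(logits.T, targets)      # text -> matching image
    return (loss_i2t + loss_t2i) / 2

# Toy usage with random embeddings
print(clip_loss(torch.randn(8, 512), torch.randn(8, 512)))
```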
Apr 28
Ok there's a new paper in my top 3 favorites

Vision transformers need registers

Clear problem, elegant solution, well written, easy to understand, good results, limitations included.

No fancy losses or layers. No equation (at all!)

Here's a short summary: (1/4)

ViTs benefit from using tokens that encode global information, like the CLS. Having multiple such "global tokens" helps the transformer; however, there is only one CLS, so the ViT "secretly" repurposes some low-content patches/tokens (for example patches of sky) to store global information ... (2/4)
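The fix the paper proposes is to append a few extra learnable "register" tokens to the input sequence and simply discard them at the output. A minimal toy module illustrating that idea (my own sketch, not the authors' code):

```python
# Toy sketch of register tokens: a few extra learnable tokens are appended to
# the patch sequence, attend like any other token, and are discarded at the end.
import torch
import torch.nn as nn

class ViTWithRegisters(nn.Module):
    def __init__(self, dim: int = 384, num_registers: int = 4, depth: int = 4):
        super().__init__()
        self.registers = nn.Parameter(torch.zeros(1, num_registers, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.num_registers = num_registers

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, D) -> output patch tokens: (B, N, D)
        B = patch_tokens.shape[0]
        regs = self.registers.expand(B, -1, -1)
        x = torch.cat([patch_tokens, regs], dim=1)   # append registers to the sequence
        x = self.encoder(x)
        return x[:, : -self.num_registers]           # drop registers at the output

vit = ViTWithRegisters()
print(vit(torch.randn(2, 196, 384)).shape)           # torch.Size([2, 196, 384])
```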
Apr 27
I'm fascinated by similarities between papers on seemingly unrelated tasks

For example, LightGlue (image matching paper from ETH) and LayerSkip (LLM paper from Meta)

Both papers do Early Exit: if an intermediate layer is confident about its prediction, skip the final layers
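A minimal sketch of the early-exit idea (a generic illustration, not either paper's implementation): after each intermediate layer a lightweight head produces a prediction, and if its confidence clears a threshold the remaining layers are skipped.

```python
# Generic early-exit sketch: run layers one by one, check the confidence of an
# intermediate prediction head, and stop as soon as it is confident enough.
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, dim: int = 128, num_layers: int = 8, num_classes: int = 10):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        self.heads = nn.ModuleList(nn.Linear(dim, num_classes) for _ in range(num_layers))

    @torch.no_grad()
    def forward(self, x: torch.Tensor, threshold: float = 0.9):
        for i, (layer, head) in enumerate(zip(self.layers, self.heads)):
            x = torch.relu(layer(x))
            probs = head(x).softmax(dim=-1)
            confidence, pred = probs.max(dim=-1)
            if confidence.item() >= threshold:      # confident enough: exit early
                return pred, i + 1                  # prediction and #layers used
        return pred, len(self.layers)               # fell through: used all layers

net = EarlyExitNet().eval()
pred, layers_used = net(torch.randn(1, 128))
print(pred.item(), layers_used)
```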
I do believe the two papers evolved independently, though there's a chance that LayerSkip's authors (October 2024) got the idea from LightGlue (April 2023)

Obviously the differences between the two papers are countless, but I like that the underlying idea is similar
Apr 22
How to select pre-training data for LLMs?

Two papers came out last week from AllenAI and Nvidia that do it in a similar way, building on the intuition that good data is good regardless of the size of the LLM.

This intuition can be used to select good data in a cheap manner (training a large LLM on many subsets would be unfeasibly expensive).

Here are some similarities and differences between these two papers:

Both papers split the whole available training data into subsets, train a small LLM on each subset, and see how it performs: its...
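A rough sketch of that recipe (a generic illustration with hypothetical placeholder train/eval functions, not either paper's pipeline): train a cheap proxy model on each candidate subset, score it, and keep the best-scoring subsets for the large run.

```python
# Generic sketch of proxy-based data selection: the two helper functions below
# are hypothetical placeholders; the selection loop is the point.
from typing import List

def train_small_lm(subset: List[str]):
    """Placeholder: train a small proxy LLM on this data subset."""
    return object()  # stands in for a trained model

def evaluate(model) -> float:
    """Placeholder: score the proxy model (e.g. benchmark accuracy or -perplexity)."""
    return 0.0

def select_subsets(subsets: List[List[str]], top_k: int = 2) -> List[int]:
    scores = []
    for i, subset in enumerate(subsets):
        model = train_small_lm(subset)         # cheap: small model, small subset
        scores.append((evaluate(model), i))
    scores.sort(reverse=True)                  # best-performing subsets first
    return [i for _, i in scores[:top_k]]      # their data goes into the big run

subsets = [["doc a", "doc b"], ["doc c"], ["doc d", "doc e"]]
print(select_subsets(subsets))
```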
Dec 19, 2024
Libraries and tools that every deep learning project should use: loguru, tqdm, torchmetrics, einops, python 3.11, black. Optional: prettytable. Good for debugging: lovely_tensors. Any other ones I've missed?

Below, a few words on each of them:

loguru: a nice logging library. With a few lines of initialization you can call info() and debug() functions that print to stdout and log files without having to pass logger objects around. Also, you can set it to log the error traceback in case your code crashes.
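Something along these lines (a minimal sketch of the setup described above; see loguru's docs for the full set of options):

```python
# Minimal loguru setup: log to the console and a file, and capture crash tracebacks.
from loguru import logger

logger.add("debug.log")           # also write everything to a log file
                                  # (a console sink is pre-configured by default)
logger.info("training started")
logger.debug("batch size = 32")

@logger.catch                     # logs the full traceback if main() crashes
def main():
    return 1 / 0

main()
```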