Bob West
Associate professor at EPFL, Data Science Lab (dlab)
Feb 20
In our new preprint, we ask: Do multilingual LLMs trained mostly on English use English as an “internal language”? - A key question for understanding how LLMs function.

“Do Llamas Work in English? On the Latent Language of Multilingual Transformers”
arxiv.org/abs/2402.10588
What do we mean by “internal language”? Transformers map token embeddings layer by layer, gradually refining them toward a prediction of the next token. Decoding the intermediate embeddings before the last layer shows which token the model would predict at that point. #LogitLens lesswrong.com/posts/AcKRB8wD…
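As a minimal sketch of the logit-lens idea: project an intermediate layer's embedding through the unembedding matrix, as if that layer were the last one, and read off the top token. All names and numbers below (the toy vocabulary, the matrix `W_U`, the hidden states) are made up for illustration; a real analysis would use the model's own learned unembedding.

```python
import numpy as np

# Toy "logit lens": decode an intermediate embedding with the unembedding
# matrix to see which token the model would predict at that depth.
# Everything here (vocab, W_U, hidden states) is synthetic toy data.

vocab = ["_the", "_le", "_flower", "_fleur"]  # hypothetical token inventory

# Hypothetical unembedding matrix W_U (d_model x vocab_size);
# identity here purely to keep the arithmetic transparent.
W_U = np.eye(4)

def logit_lens(h, W_U, vocab):
    """Project a hidden state through the unembedding; return the top token."""
    logits = h @ W_U                    # one logit per vocabulary token
    return vocab[int(np.argmax(logits))]

# Synthetic hidden states for one position at two depths, mimicking the
# paper's finding: a middle layer decodes to the English token, while the
# final layer decodes to the target-language (French) token.
h_middle = np.array([0.1, 0.2, 2.0, 0.5])
h_final  = np.array([0.0, 0.3, 0.4, 2.5])

print("middle layer ->", logit_lens(h_middle, W_U, vocab))  # _flower
print("final layer  ->", logit_lens(h_final, W_U, vocab))   # _fleur
```

With a real model, the same projection would be applied to the hidden states returned at each layer, after the final layer norm.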