Anna Ivanova
Language and thought in brains vs machines. New Assistant Prof @ Georgia Tech Psychology. Previously: postdoc @MIT_Quest & PhD @mitbrainandcog. She/her
May 17
💡New work!

Do LLMs learn foundational concepts required to build world models? We address this question with 🌐🐨EWoK (Elements of World Knowledge)🐨🌐, a flexible cognition-inspired framework to test knowledge across physical and social domains



🧵👇 ewok-core.github.io

We leveraged the cognitive science literature to select concepts from 11 domains of knowledge, from social interactions (HELP, HINDER, CHASE, EVADE) to spatial relations (LEFT, RIGHT, CLOSE, FAR).

[Figure listing the 11 domains: social interactions, social properties, social relations, physical interactions, physical dynamics, physical relations, material dynamics, material properties, agent properties, quantitative properties, spatial relations]
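For readers curious how this kind of minimal-pair evaluation works in practice, here is a rough sketch (not the actual EWoK pipeline): compare the probability a model assigns to the same target sentence under a matching vs. a mismatching context. The model choice, helper function, and item wording are illustrative assumptions, not taken from the EWoK dataset or codebase.

```python
# Minimal sketch of a context-sensitivity test in the spirit of EWoK:
# does the model assign a higher probability to a target sentence under
# the matching context than under the mismatching one?
# (Item wording below is illustrative, not taken from the EWoK dataset.)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def target_logprob(context: str, target: str) -> float:
    """Sum of token log-probabilities of `target` conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + target, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Score only the target tokens (those that follow the context prefix).
    total = 0.0
    for pos in range(ctx_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, pos]
        total += log_probs[0, pos - 1, token_id].item()
    return total

context_match = "Ali chased Sam across the park."
context_mismatch = "Ali evaded Sam across the park."
target = "Ali caught up with Sam."

score_match = target_logprob(context_match, target)
score_mismatch = target_logprob(context_mismatch, target)
print("correct" if score_match > score_mismatch else "incorrect")
```

An item counts as correct when the matching context yields the higher target score; accuracy over many such items gives a per-domain estimate of world knowledge.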
Nov 8, 2023
I have started to receive emails from prospective lab members (yay!). Some words of advice:

1. mention which position you are interested in
2. state clearly what research topics you’re interested in going forward (beyond just "language" or "LLMs")

3. when summarizing your past experience, emphasize the skills you have acquired (coding/modeling, experimental design, reviewing the literature, knowledge of a specific discipline, etc.), with a focus on those that might be relevant to your proposed future work
Aug 23, 2023
Finally out, and open access - “The language network is not engaged in object categorization”


Co-led with @BennYael, co-senior-authored by Rosemary Varley and @ev_fedorenko

Thread 👇 academic.oup.com/cercor/advance…

We test the claim that language can augment visual categorization in humans. This augmentation might take many forms, but the hypothesis we focus on here is:
real-time activation of language labels helps sort objects into low-dimensional (LD) categories.
Jan 18, 2023
Three years in the making - our big review/position piece on the capabilities of large language models (LLMs) from the cognitive science perspective.

Thread below! 1/

arxiv.org/abs/2301.06627

The key point we’re making is the distinction between *formal competence* - the knowledge of linguistic rules and patterns - and *functional competence* - a set of skills required to use language in real-world situations.

We ground this distinction in cognitive neuroscience. 2/
Dec 6, 2022
My co-lead @KaufCarina and I present: an in-depth investigation of event plausibility judgments in language models.

A 🧵 1/

arxiv.org/abs/2212.01488

Knowledge of event schemas is a vital component of world knowledge. How much of it can be acquired from text corpora via the word-in-context prediction objective?

2/
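To give a feel for how one can elicit plausibility judgments from a language model, here is a minimal sketch using a masked-LM pseudo-log-likelihood score (mask each token in turn and sum its log-probability). The model, helper name, and sentence pair are illustrative assumptions; the paper's actual materials and scoring details differ.

```python
# A minimal sketch of scoring event plausibility with a masked LM via
# pseudo-log-likelihood: mask each token in turn and sum its log-probability.
# The sentence pair below is illustrative, not from the paper's materials.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    # Skip the [CLS] and [SEP] special tokens.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        total += log_probs[ids[i]].item()
    return total

plausible = "The teacher bought the laptop."
implausible = "The laptop bought the teacher."
print(pseudo_log_likelihood(plausible) > pseudo_log_likelihood(implausible))
```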
Aug 24, 2022
During the first year of grad school, I asked why researchers always use linear regression to map between brains and predictor features. Today, our article discussing the use of linear vs nonlinear mappings in cogneuro is finally out at @NBDT_journal

nbdt.scholasticahq.com/article/37507-…

In short, if you plan to train an encoding/decoding model of the brain, you should determine which properties of your mapping are essential to your research question. Does it need to be simple? Biologically plausible? Explainable in neuro/psych terms?
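As a concrete illustration of the "default" linear mapping the article discusses, here is a minimal sketch of a linear encoding model: cross-validated ridge regression from stimulus features to voxel responses. The arrays are random placeholders, not data or code from the paper.

```python
# A minimal sketch of a linear encoding model: ridge regression mapping
# stimulus features to voxel responses, evaluated with cross-validated
# correlation. Data arrays are placeholders, not from the paper.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
features = rng.standard_normal((200, 300))   # n_stimuli x n_features
voxels = rng.standard_normal((200, 1000))    # n_stimuli x n_voxels

scores = []
for train_idx, test_idx in KFold(n_splits=5).split(features):
    mapping = RidgeCV(alphas=np.logspace(-2, 4, 7))
    mapping.fit(features[train_idx], voxels[train_idx])
    pred = mapping.predict(features[test_idx])
    # Per-voxel Pearson correlation between predicted and observed responses.
    r = [np.corrcoef(pred[:, v], voxels[test_idx, v])[0, 1]
         for v in range(voxels.shape[1])]
    scores.append(np.nanmean(r))

print(f"mean cross-validated r = {np.mean(scores):.3f}")
```

Swapping the ridge regressor for a nonlinear mapping (e.g., a small neural network) changes the answers to exactly the questions above: simplicity, biological plausibility, and interpretability.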
Apr 25, 2022
The project co-led by my very first research mentee @yotaros_ramen and the awesome Alex Paunov is now out as a preprint! We find synchronized language network activity across people as they watch movie clips / audio event sequences with no one speaking in them.
1/n

An example movie clip is Partly Cloudy! So, another way to pose the question is “Do people consistently recruit the language network when watching Pixar shorts?” 😉 2/n
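For context, the synchronization measure in studies like this is typically inter-subject correlation (ISC). Here is a minimal leave-one-out ISC sketch on placeholder data; the array shapes and function name are assumptions, not the paper's actual analysis code.

```python
# A minimal sketch of leave-one-out inter-subject correlation (ISC):
# for each region, correlate one subject's timecourse with the average
# timecourse of all remaining subjects. Arrays are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
# n_subjects x n_timepoints x n_regions
data = rng.standard_normal((10, 300, 12))

def leave_one_out_isc(data: np.ndarray) -> np.ndarray:
    n_subj, _, n_reg = data.shape
    isc = np.zeros((n_subj, n_reg))
    for s in range(n_subj):
        # Average timecourse of all subjects except subject s.
        others = np.delete(data, s, axis=0).mean(axis=0)
        for r in range(n_reg):
            isc[s, r] = np.corrcoef(data[s, :, r], others[:, r])[0, 1]
    return isc

isc = leave_one_out_isc(data)
print("mean ISC per region:", isc.mean(axis=0).round(3))
```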
Mar 25, 2021
Finally accepted, proofed and published - our work on the role of the language network in combinatorial event semantics when the input is pictures, not words.



Thread below: direct.mit.edu/nol/article/2/…

Combinatorial semantics is often considered to be a hallmark feature of language. But is the language network responsible for combinatorial semantic processing even when the input is non-linguistic?
Apr 18, 2020
New preprint! We set out to answer: what happens in the brain when we read computer code? See thread below for details.

Thanks to my amazing PI @ev_fedorenko, our collaborators @ShashankSrikant @marinabers @UnaMayMIT & Riva Dhamala, and my labbies @YotaroSueoka @HopeKean.

Does the brain treat code like natural language? Or like logic and math? We addressed this question by measuring fMRI responses to two very different programming languages - Python and ScratchJr (a graphical programming language for kids). 1/n