🪼 policy dev & strategy @GoogleDeepMind | vinyl junkie, subculture explorer, deep ArXiv dweller, interstellar fugitive, uncertain | 🇮🇷🇱🇺🇬🇧🇫🇷
Feb 28, 2024 • 10 tweets • 5 min read
🤖 Incredibly cool Google DeepMind paper: Concordia is a library for building agents that leverage language models to simulate human behavior with a high degree of detail and realism. The agents can reason, plan, and communicate in natural language, interacting with each other in grounded physical, social, or digital environments.
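For intuition, here's a rough back-of-the-envelope sketch of the general generative-agent pattern the paper describes (agents observe, reason/plan via an LLM, act in natural language, and a game-master-style environment grounds their actions). This is my own illustration, not Concordia's actual API; every name here (SimpleAgent, GameMaster, call_llm) is hypothetical and the LLM call is stubbed out.

```python
# Illustrative sketch only -- not the real Concordia API.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Stand-in for a language-model call; swap in a real LLM client."""
    return f"(model response to: {prompt[:40]}...)"


@dataclass
class SimpleAgent:
    name: str
    memory: list[str] = field(default_factory=list)

    def observe(self, event: str) -> None:
        self.memory.append(event)

    def act(self) -> str:
        # Reason and plan in natural language from recent observations.
        prompt = (
            f"You are {self.name}. Recent events:\n"
            + "\n".join(self.memory[-5:])
            + "\nWhat do you do next?"
        )
        return call_llm(prompt)


@dataclass
class GameMaster:
    """Grounds agent actions: decides their effect and narrates the outcome."""
    agents: list[SimpleAgent]

    def step(self) -> None:
        for agent in self.agents:
            action = agent.act()
            outcome = call_llm(f"{agent.name} attempts: {action}. What happens?")
            for a in self.agents:
                a.observe(outcome)


if __name__ == "__main__":
    gm = GameMaster([SimpleAgent("Alice"), SimpleAgent("Bob")])
    for _ in range(3):
        gm.step()
```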
So many cool use cases (with obvious limitations): studying complex social phenomena, generating synthetic data, evaluating AI systems, etc. Some excerpts I really liked below: ar5iv.labs.arxiv.org/html/2312.03664
Concordia allows researchers to integrate digital components like apps, social networks, and AI assistants into their simulations. I can imagine this being super useful given that it’s very hard to study the societal impacts of stuff like algorithmic feedback loops etc.
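To make the feedback-loop point concrete, here's a toy sketch of the kind of "digital component" an agent could be given access to in a simulation: a minimal feed that ranks posts by clicks, so whatever agents engage with gets shown more. Again, this is my own invented illustration (ToyFeed, post, read_feed, click are made-up names), not anything from the library itself.

```python
# Hypothetical digital component: a toy ranked feed with a simple
# engagement feedback loop. Not Concordia's API.
from collections import Counter


class ToyFeed:
    def __init__(self) -> None:
        self.posts: list[str] = []
        self.clicks: Counter[str] = Counter()

    def post(self, text: str) -> None:
        self.posts.append(text)

    def click(self, text: str) -> None:
        self.clicks[text] += 1

    def read_feed(self, k: int = 3) -> list[str]:
        # Most-clicked posts float to the top, amplifying past engagement.
        return sorted(self.posts, key=lambda p: -self.clicks[p])[:k]


feed = ToyFeed()
feed.post("cat video")
feed.post("policy explainer")
feed.click("cat video")
print(feed.read_feed())  # ['cat video', 'policy explainer']
```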
Jun 2, 2023 • 20 tweets • 10 min read
A lot of people in AI policy are talking about licensing in the context of AI risk. Here’s a little thread exploring what this means, what it could look like, and some challenges worth keeping in mind. 🏛
NB: I'm not covering agreements between AI developers and users on how an API or software can be used. Instead I'm focusing on regulatory licenses awarded by governments to control and regulate certain activities or industries.
Sep 22, 2021 • 21 tweets • 14 min read
🚀 National AI Strategy summary thread + some preliminary thoughts! 🔎 (1/21)