Jack Clark
@AnthropicAI, ONEAI OECD, co-chair @indexingai, writer @ https://t.co/3vmtHYkaTu Past: @openai, @business @theregister. Neural nets, distributed systems, weird futures
Jun 25, 2023 22 tweets 5 min read
Will write something longer, but if the best ideas for AI policy involve depriving people of the 'means of production' of AI (e.g. H100s), then you don't have a hugely viable policy. (I 100% am not criticizing @Simeon_Cps here; his tweet highlights how difficult the situation is.)

I gave a slide preso back in fall of 2022 along these lines. Including some slides here. The gist of it is: if you go after compute in the wrong ways, you annoy a huge number of people and you guarantee pushback and differential tech development.
Feb 12, 2023 5 tweets 2 min read
A mental model I have of AI is that it made roughly linear progress from the 1960s to 2010, then exponential progress from 2010 into the 2020s, and has started to display 'compounding exponential' properties from 2021/22 onwards. In other words, the next few years will yield progress that intuitively feels nuts.

There's pretty good evidence for the extreme part of my claim - recently, language models got good enough that we can build new datasets out of LM outputs, train LMs on them, and get better rather than worse performance. E.g., this Google paper: arxiv.org/abs/2210.11610
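The rough recipe from that paper: sample several chain-of-thought answers per question, keep only the reasoning paths that agree with the majority answer (self-consistency), and fine-tune on those. A minimal sketch of the loop in Python, where sample_answers and fine_tune are hypothetical stand-ins for a real sampling and training stack:

    from collections import Counter

    def self_improve(model, questions, k=8):
        # For each question, sample k chain-of-thought completions from the
        # model itself, keep the reasoning paths whose final answer matches
        # the majority vote (self-consistency), then fine-tune on the keepers.
        new_examples = []
        for q in questions:
            paths = sample_answers(model, q, n=k)  # hypothetical sampler
            majority, _ = Counter(p.final_answer for p in paths).most_common(1)[0]
            new_examples += [(q, p.text) for p in paths if p.final_answer == majority]
        return fine_tune(model, new_examples)  # hypothetical fine-tuning step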
Jan 29, 2023 5 tweets 2 min read
Modern AI development highlights the tragedy of letting the private sector lead AI invention - the future is here, but it's mostly inaccessible because corporations are afraid of PR and policy risks. (This thought was sparked by Google not releasing its music models, but the trend is general.)

There will of course be exceptions, and some companies will release stuff. But this isn't going to get us many of the benefits of the magic of contemporary AI. We're surrendering our own culture and our identity to the logic of markets. I am aghast at this. And you should be too.
Nov 5, 2022 6 tweets 2 min read
If you want a visceral sense of how different development practices and strategies can lead to radically different performance, compare and contrast the performance of the BLOOM and GLM-130B LLMs.
huggingface.co/bigscience/blo…
huggingface.co/spaces/THUDM/G…

Feels kind of meaningful that an academic group at Tsinghua University (GLM-130B) made a substantially better model than a giant multi-hundred-person development project (BLOOM).
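For a hands-on comparison you can prompt the models side by side; here's a minimal sketch using the Hugging Face transformers pipeline with a small BLOOM checkpoint (my choice for illustration - the full 176B BLOOM, and GLM-130B with its own inference code, need multi-GPU serving):

    from transformers import pipeline

    # Small BLOOM checkpoint so this runs on one machine.
    generator = pipeline("text-generation", model="bigscience/bloom-560m")
    prompt = "The Eiffel Tower is located in"
    print(generator(prompt, max_new_tokens=20)[0]["generated_text"])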
Aug 6, 2022 81 tweets 13 min read
One like = one spicy take about AI policy.

A surprisingly large fraction of AI policy work at large technology companies is about playing 'follow the birdie' with government - getting it to look in one direction, and away from another area of tech progress.
Jul 17, 2022 12 tweets 3 min read
In late May, I had back spasms for 24 hours, then couldn't walk for a week, then spent a month+ recovering. It was one of the worst experiences of my life and I'm glad I now seem to be mostly recovered. Here are some things that happened during that time that seemed notable:

Being genuinely disabled is really frightening. I couldn't feed myself for the first few days. Any time I needed to go to the bathroom I had to do it predominantly using upper-body strength. Sometimes I'd fall over and not be able to get up for hours.
Jul 1, 2022 6 tweets 2 min read
As someone who has spent easily half a decade staring at the AI arXiv each week and trying to articulate the rate of progress, I still don't think people understand how rapidly the field is advancing. Benchmarks are becoming saturated at ever-increasing rates.

Here's Minerva, which smashes prior math benchmarks by double-digit percentage-point improvements.
Oct 21, 2021 7 tweets 2 min read
Here's a Business Insider story about how a load more anonymous Google employees are censoring AI research, allegedly crossing out references to things like fairness. businessinsider.com/google-ethical…

This recent trend has directly impacted me. A thread...

The biggest issue here is legibility - Google's researchers are legible (their names are on the paper), but these censors are illegible. Yet the censors are themselves changing the substance of the research outputs. This is not a stable situation.
Aug 31, 2021 8 tweets 3 min read
AI is influencing the world, and right now most of the actors with power over AI are in the private sector. This is probably not optimal. Here's some research from @jesswhittles and me on how to change that. arxiv.org/abs/2108.12427

Our proposal is pretty simple - governments should monitor and measure the AI ecosystem, both in terms of research and in terms of deployment. Why? This creates information about AI progress (and failures) for the public.
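As a toy illustration of what 'monitor and measure' can look like in code, here's a sketch that pulls recent submissions in one AI category from the public arXiv API (a real endpoint; using cs.LG as a proxy for 'AI research' is my simplification, not something the paper prescribes):

    import urllib.request
    import xml.etree.ElementTree as ET

    # Query the public arXiv API for the most recent cs.LG submissions.
    URL = ("http://export.arxiv.org/api/query?search_query=cat:cs.LG"
           "&sortBy=submittedDate&sortOrder=descending&max_results=100")
    with urllib.request.urlopen(URL) as resp:
        feed = ET.parse(resp)

    ns = {"atom": "http://www.w3.org/2005/Atom"}
    entries = feed.findall("atom:entry", ns)
    dates = [e.find("atom:published", ns).text[:10] for e in entries]
    print(f"fetched {len(entries)} recent cs.LG papers; newest: {dates[0]}")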
May 25, 2021 5 tweets 2 min read
Help governments classify AI systems! A thread...
Right now, AI systems are broadly illegible to policymakers, which is why AI policy is confusing. At the @OECD, we're trying to make them legible via a framework people can use to classify AI systems. Here's how you can help:

First, you could help us test out the framework by classifying an existing AI system (e.g., AlphaGo Zero, C-CORE, CASTER) using our survey (or classifying your own system with it). Survey here: survey.oecd.org/index.php?r=su…
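To make 'classify an AI system' concrete, here's a hedged sketch of what a machine-readable classification record might look like. The field names loosely echo the framework's dimensions (context, data, model, task) but are illustrative, not the official OECD schema:

    from dataclasses import dataclass

    @dataclass
    class AISystemClassification:
        # Field names are illustrative, not the official OECD schema.
        name: str
        context: str            # sector, stakeholders, who is affected
        data_and_input: str     # data provenance, collection, structure
        ai_model: str           # model type and how it was built
        task_and_output: str    # what the system does, how outputs are used

    alphago_zero = AISystemClassification(
        name="AlphaGo Zero",
        context="research; board-game play with no direct human impact",
        data_and_input="self-play game records, no external training data",
        ai_model="deep neural network trained via reinforcement learning",
        task_and_output="planning; outputs are game moves",
    )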
Apr 17, 2021 11 tweets 2 min read
I've been keeping some form of journal for 15-20 years or so - some years have been incredibly sparse and some years have involved writing stuff every day. Through this, I've discovered a meaningful link between journaling and mental health. Here's a thread of what I've learned:

1. The years when I have barely journaled have correlated with years when I've been depressed and/or unsatisfied with my life in some deep way. I can 'see' myself about to change my life just by looking at the length of my journals - once the lengths start going up, change is coming.
Feb 20, 2021 4 tweets 2 min read
The next five years of AI will see systems diffuse into the world that act on culture, which will feed back into human society, changing it irrevocably. Some thoughts from this morning:

Geopolitics gets changed by chip wars, which get driven by AI. Culture becomes a tool of capital via arbitraging models against human artists. Everyone gets 'roll your own' surveillance. Technological supremacy becomes a corporate theistic belief system. Things are gonna get crazy!
Aug 9, 2020 8 tweets 2 min read
Here's a thread about doing things for yourself vs doing things the world thinks you should do. As I've got older, I've noticed that the more time I spend on the things that make sense to me, the more stable and fulfilled I am.

If you do things for yourself, then you'll keep doing them forever. If you do things because the world says they're what success comes from, then you'll continually assess your skill at them against some goal and benchmark you don't believe in.
May 2, 2020 5 tweets 1 min read
Playing around with the notion that we'll evaluate 21st-century geopolitics through the lens of 'information empires'. It's going to be increasingly apparent that AI-driven OODA loops (en.wikipedia.org/wiki/OODA_loop) will define competitive dynamics in many areas of life. What does this mean?

We'll start to look at states in terms of their ability to rapidly understand what's happening around them, analyze that information, and make policy in response (see: various countries' responses to COVID usually being a function of testing/measurement).
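For the code-minded, an OODA loop is just a control loop whose competitive variable is cycle time; a minimal sketch with stub stage functions (all names here are mine, purely illustrative):

    import time

    def ooda_loop(observe, orient, decide, act, cycles=3):
        # Observe -> Orient -> Decide -> Act; the competitive variable
        # the thread cares about is how fast each cycle completes.
        for _ in range(cycles):
            start = time.monotonic()
            act(decide(orient(observe())))
            print(f"cycle time: {time.monotonic() - start:.4f}s")

    # Toy usage with stub stages:
    ooda_loop(observe=lambda: {"signal": 1},
              orient=lambda obs: obs["signal"],
              decide=lambda s: "respond" if s else "wait",
              act=print)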
Mar 29, 2020 4 tweets 3 min read
Large companies should employ "system historians" who track and document changes in baroque infrastructures over time. What battles has Google fought over memory allocation during its lifetime? What scheduling systems has Amazon built, refined, retired, and iterated on?

It's not superficially obvious, but all around us, companies like @stripe @Google @amazon @Facebook @AlibabaB2B @TencentGlobal @Microsoft etc. are building the tools to let them operate at fundamentally different speeds relative to the rest of the world.