Mostly on Bluesky & LinkedIn.
Author of Robin Hood Math (summer 2025).
Math Prof @BentleyU, visiting scholar @Harvard.
Jul 17, 2024 • 12 tweets • 1 min read
Last year I got tenure, this year I'll be a visiting scholar at Harvard, and I got a 6-figure advance to publish a book next year. In honor of these accomplishments, here's a list of embarrassing confessions revealing how dumb my brain is in many ways:
I can't alphabetize things without singing the alphabet song in my head.
Oct 16, 2023 • 15 tweets • 3 min read
I present to you a tongue-in-cheek (but nonetheless revealing, I think) vignette of what it would be like if climate science and climate change discourse were like AI doomer/safety/x-risk discourse: 🧵
Instead of complex models of climate, including oceans, clouds, forests, etc. built by thousands of scientists over many years to predict temp increases based on carbon emissions, we'd just survey a bunch of meteorologists and oil execs about their climate p(doom).
Oct 7, 2023 • 17 tweets • 3 min read
I just read OpenAI’s blog post about aligning superintelligence and… I have some concerns with the assertions and attitude on display there.
Reading it did not make me feel better about OpenAI’s approach to society & safety. Detailed 🧵 below openai.com/blog/introduci…
Opening line: “Superintelligence will be the most impactful technology humanity has ever invented.”
Not “may” but “will”. That already frames this post as advertising, not science.
And I’m biased but I’d call math the most impactful tech since it powers all AI & so much else too 😉
Jun 12, 2023 • 13 tweets • 3 min read
OpenAI centers its narrative on AGI--artificial general intelligence--w/ discussions of timeline to AGI, preparing for AGI, anticipated benefits and risks of AGI, building "the first AGI", etc.
A 🧵on why I find their definition of AGI problematic and the concept a mirage:
In OpenAI's Charter, they define AGI as
"highly autonomous systems that outperform humans at most economically valuable work"
At first blush this sounds reasonable: at some point AI will do what we do, only better, and when it does, that's AGI.
But... openai.com/charter
Jun 5, 2023 • 25 tweets • 6 min read
I'm often critical of Effective Altruism (EA) and I'm sure I'll get more pushback for this, but I've been thinking a lot lately about the discourse on AI doomerism, extinction risk, etc., and here's my big take on what's going on and why.
Buckle up, friends, it gets spicy.🧵
The fundamental calculus of EA, at least the version embracing longtermism, necessitates a focus on things that could kill us all, even if exceedingly unlikely, rather than plausible--or even current--things that merely harm lots of people in lots of ways.
May 30, 2023 • 15 tweets • 3 min read
I'm trying to keep an open mind, but I have decidedly mixed--mostly critical--feelings about this. Of course it's just a tiny statement so hard to pin down what it means, but allow me to unpack it with some reactions in this 🧵
At first blush this sounds good--if something could wipe us out, we should try to prevent that from happening! (Duh.)
But there's something very fishy going on in this pithy statement once you look closer:
Mar 25, 2023 • 36 tweets • 8 min read
I found the recent @nytimes opinion piece on AI by @harari_yuval, @tristanharris, and @aza very interesting, and I agree with some of the overall thrust and points but object to MANY of the important details. So, time for a 🧵 detailing my critiques: nytimes.com/2023/03/24/opi…
Starting with the opening: They analogize a survey of AI experts estimating AI doomsday to airplane engineers estimating the probability a plane will crash. This is wildly misleading. Flight safety is based on very well-understood physics and mechanics and data, whereas
Mar 17, 2023 • 20 tweets • 7 min read
There are growing calls for AI regulation, incl. from @elonmusk, @miramurati (lead @OpenAI behind ChatGPT), and Rep @tedlieu. But what kind? Broad AI reg is tricky, so in a new @SciAm piece I suggest measures we could take immediately to help navigate our new chatbot-infused world.🧵
AI is not one thing: it’s a range of techniques w/ a range of applications. Hard to regulate across autonomous weapons, facial recognition, self-driving cars, discriminatory algorithms, economic impacts of automation, and the slim but nonzero chance of catastrophic AI disaster.
Mar 16, 2023 • 8 tweets • 2 min read
I just had the most vivid dream I can remember in years; the interpretation seems quite obvious but for context my baby woke up in the middle of the night and I read a bunch of newspaper articles on AI while waiting for him to fall back asleep 😂
Here's what I dreamt:
I was at a hotel lobby with a group of guests I didn't know. We were escorted to a shuttle bus to take us to another building with our rooms, but it was a driverless bus that simply called itself the "AI bus". We found it amusing at first, but then
Jan 14, 2023 • 5 tweets • 1 min read
Everyone is amazed at OpenAI and ChatGPT, but don’t forget: the transformer was invented by Google, the first LLM was Google’s BERT, and Google made an apparently impressive chatbot (LaMDA) before ChatGPT but didn’t release it publicly since they didn’t feel it was safe to do so.
Since this tweet went further than I expected, some additional clarifications, corrections, and excellent points raised by others in the comments:
Nov 15, 2022 • 8 tweets • 3 min read
Here's the ultimate irony about the EA movement: they just *caused* one of the large-scale harms that they supposedly are trying to circumvent. And this was not a coincidence; if you follow the logic of the movement, you'll see that this was inevitable. Let me explain 🧵 1/7
In a nutshell, EA is about using math to try to maximize the charitable impact one can have. But one quickly realizes this means that instead of doing charitable acts, one should just earn as much money as possible and then donate it. @willmacaskill suggested this to @SBF_FTX early on. 2/7