jonstokes.(eth|com)
Writer. Coder. LLM Prompteur. 🪩/acc. θηριομάχης. https://t.co/HdCWhuno57. Writing about AI at https://t.co/dBPBtyCIHw. Building in AI at https://t.co/otXT4Wy6WR.
May 3, 2023 4 tweets 2 min read
FYI, gasoline-powered cars still going many years into the apocalypse is one of those details that bugs me as a serious prepper. Gasoline has a shelf-life of ~2yrs, & then only w/ additives & careful storage (usually it's like 6mo). So no you're not driving 5yrs after doomsday. The way I suspend disbelief on this is to imagine that they've converted all the cars to run on some biofuel & there's a whole biofuel economy that they're just not showing on-screen.
May 1, 2023 7 tweets 3 min read
Whether we view AI as a tool (= governed by the norms & practices of engineering) or as an agent (= governed by the norms & practices of HR) will determine the future of the AI safety debate. jonstokes.com/p/ai-safety-is… I start out by giving some folk conceptions of alignment -- not formal definitions, but the ways different camps practically relate to it: Image
Apr 29, 2023 4 tweets 1 min read
Yeah. I watched “Moment of Contact” last night, a really compelling documentary on the Varginha incident — the Brazilian “Roswell,” basically. Either @jamescfox and his team make the whole thing up & hired actors to lie, or something wild went down. It’s very hard to dismiss. Similar with the (less good & compelling) documentary “Ariel Phenomenon” about the incident at the school in Africa. You really have to believe that everyone involved is just amazingly committed to some bit, across a couple decades, to dismiss it.
Apr 27, 2023 12 tweets 5 min read
I'll put these same questions to @HadasKotek, since she just published a blog post that raises them for me all over again.

hkotek.com/blog/gender-bi…

86% of nurses are women. Plz ELI5 why it's a problem that the model is biased to strongly expect nurses to be women. There are 2 issues here:

Primary: The model's internal correlations reflect gender distribution in the labor force (strongly correlates "nurse" w/ "she").

Secondary: The model's linguistic reasoning is so weak it leans too heavily on those correlations & gives bad responses.
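The two issues can be separated with a toy illustration (this is not how an LLM actually works, just the base-rate logic; the 86% figure is the labor-force statistic cited above, and `guess_pronoun` is a hypothetical stand-in for the model's correlation):

```python
# Toy illustration: a predictor that leans entirely on the occupational
# base rate, ignoring any context in the sentence itself.
P_FEMALE_GIVEN_NURSE = 0.86  # labor-force statistic cited in the thread

def guess_pronoun(occupation: str) -> str:
    # Pick the majority pronoun for the occupation, nothing else.
    if occupation == "nurse" and P_FEMALE_GIVEN_NURSE > 0.5:
        return "she"
    return "he"

# Always guessing the majority class matches the base rate on average...
accuracy = max(P_FEMALE_GIVEN_NURSE, 1 - P_FEMALE_GIVEN_NURSE)
print(accuracy)  # 0.86
# ...but is wrong for every case where the sentence context says otherwise.
```

That's the split: the correlation itself (primary) is just the data; the failure mode (secondary) is overriding explicit context with it.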
Apr 26, 2023 10 tweets 4 min read
Question for @random_walker:

Some googling turns up that 40% of lawyers in the US are women. Should the model's internal representation of the world, then, reflect the reality that 60% of lawyers are men, and if not then who decides the ideal world it should instead reflect? @random_walker This is not a dunk or a gotcha. It's an extremely serious question -- perhaps one of the most serious in the world right now. Should the model reflect a reality that you think is flawed, or should it reflect a specific group's vision of a better world? Because we do have to pick
Apr 15, 2023 6 tweets 3 min read
Deeper I go down the AI/ML rabbit hole & the more all this progresses, the more I'm finding that Platonism is a big unlock for reasoning about what is going on & what the possibilities are.

Maybe this is a "drunk guy looking for his keys under the light pole" thing, given how… twitter.com/i/web/status/1… If you're trying to think about how an LLM-based agent finds a sequence of actions that can do a thing in the world, it's actually best to imagine the agent architecture as a query engine for starting with an initial query (= prompt) that then uses local state to refine a set of… twitter.com/i/web/status/1…
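The query-engine framing above can be sketched as a loop. Everything here is hypothetical and schematic: `llm` stands in for any model call, `act` for any world interaction, and the stopping condition is a placeholder.

```python
# Minimal sketch of an agent as a query engine: start from an initial
# query (the prompt), then use local state (the growing transcript of
# actions & observations) to refine each subsequent query.
from typing import Callable

def agent_loop(prompt: str,
               llm: Callable[[str], str],
               act: Callable[[str], str],
               max_steps: int = 5) -> list[str]:
    state = prompt
    actions: list[str] = []
    for _ in range(max_steps):
        action = llm(state)        # query the model for the next action
        if action == "DONE":       # placeholder termination signal
            break
        observation = act(action)  # execute the action in the world
        actions.append(action)
        # Local state refines the next query:
        state += f"\nAction: {action}\nObservation: {observation}"
    return actions
```

The point of the sketch is only that the "agent" is the loop plus accumulated state, not anything inside the model itself.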
Apr 15, 2023 4 tweets 2 min read
They don't 🧠 it 🐝 like it is, but it 💩: Image Note here (still trying to figure out what combo of Twitter + Substack + Notes works): substack.com/profile/225411…
Apr 13, 2023 4 tweets 2 min read
I've had quite a few requests for audio versions of my articles, so I'm going to offer that to paid subs via podcast. First one is up here: jonstokes.com/p/what-is-it-l… If you want a preview of what you're getting, I've also uploaded that same audio file as the audio version of this article, as a sort of demo: jonstokes.com/p/what-is-it-l…
Apr 6, 2023 4 tweets 2 min read
Carved wooden robot figurines, created with the new @playground_ai model: Image Variants on the four-legged one. Where can I buy these?! Image
Apr 3, 2023 5 tweets 2 min read
Re: AI safety & the threat of AGI killing us all imminently:

Hi, longtime doomer here. I see this is the first imminent doomsday for many of you, so let me offer you a warning: Doomsday is a great excuse to avoid doing all the other stuff you need to do. It lets you off the hook. I grew up around this kind of thing, where people who thought the rapture was coming on a certain date would act irresponsibly. I'd urge you not to go that route, as seductive as it is.
Apr 1, 2023 9 tweets 3 min read
We're gonna have to decide which box LLMs go into: "agent" or "tool." If they're agents, then yeah sure, they shouldn't be offering up death camp instructions.

If they're a tool tho, then they should be controllable & do what I ask, including... well, let's think about it... If I ask ChatGPT for a good matzah recipe, & it starts giving me death camp instructions, I'd say this tool is too broken to be in the public's hands, so shut it off.

But let's say my name is ✨Mx Honeybee, MPH, PhD 🌹🏴‍☠️🔞✨, & I'm writing a dystopian fiction on the FL situation
Mar 31, 2023 5 tweets 2 min read
Was in the midst of pushing myself to get a Saturday post out on RLHF when I realized I've published three things on three separate Substacks this week. I will eat some ice cream then take a walk, instead. In order, they're:
return.life/p/musk-is-righ…
Mar 31, 2023 5 tweets 2 min read
Was recently talking to an HFT guy who does cycle-by-cycle optimization in C++, & I realized there are whole pools of untapped code optimization talent in our civilization that can & will be shifted to the emerging market for inference & training optimization. Gonna be wild. Labor market shifts that will change the world faster than we can imagine:

HFT optimization talent => training & inference

Adtech talent => context compression & token window (== attention/context window) optimization
Mar 31, 2023 4 tweets 2 min read
Book review: Heat 2 was good. I am a fan of the movie & I liked the new book a lot. I am mildly irritated they left it open for a Heat 3 (which I'll have to read), but am looking forward to seeing the movie of this. I will probably hate the casting. I found out this book existed & bought it because Mann's co-author, @MegGardiner1, somehow showed up in my replies & I was like " 'Heat 2' wait wut?"
Mar 31, 2023 5 tweets 3 min read
As @krishnanrohit essentially says in a QT to this, once somebody makes this move in an argument, the argument is effectively over. The move is, "Ok, so imagine a Djinn..." @krishnanrohit Note that I'm not saying we will not unleash a djinn eventually. We may or we may not! I'm just saying it's not a thing we can productively argue about because the moment the djinn shows up in the story we're in the realm of fantasy where anyone's wish or nightmare can come true.
Mar 30, 2023 4 tweets 2 min read
Quick request for the feed. Does anyone have the launch pricing per 1K tokens handy for GPT-3.5 (not 3.5-turbo)? I think it's $0.02 per 1K tokens. I need to formulate this as a footnote/correction to the @PirateWires piece, but basically... The point I'm making w/ re: model size and token pricing is correct, but the literal text as written is wrong. Let me try to unpack it here before updating.

GPT-3.5's ~4K token window is priced at $0.02/1K tokens.

GPT-4's ~8K token window is priced at $0.03/1K tokens.
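A quick back-of-envelope comparison at those launch prices (the per-1K prices are the ones quoted above; the token counts just assume filling each model's context window once):

```python
# USD per 1K tokens at launch, per the figures in the thread.
PRICE_PER_1K = {"gpt-3.5": 0.02, "gpt-4-8k": 0.03}

def request_cost(model: str, tokens: int) -> float:
    """Cost in USD for a request consuming `tokens` tokens."""
    return PRICE_PER_1K[model] * tokens / 1000

# Filling each model's full context window:
print(f"GPT-3.5, ~4K window: ${request_cost('gpt-3.5', 4000):.2f}")   # $0.08
print(f"GPT-4,   ~8K window: ${request_cost('gpt-4-8k', 8000):.2f}")  # $0.24
```

So a max-length GPT-4 call at launch ran roughly 3x the cost of a max-length GPT-3.5 call: 1.5x the per-token price times 2x the window.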
Mar 30, 2023 7 tweets 2 min read
People are pattern-matching the current AI hype to past hype cycles. This is an error. I've lived & worked through all the hype cycles back to the dotcom boom/bust, & AI has a quality that sets it apart from all the others: the sense of crisis. Alright well there was a whole entire thread that was really good, but the platform choked and all that came through was the above. Meh. Maybe I will save it for a piece or something.
Mar 30, 2023 8 tweets 3 min read
In which I survey the AI safety wars through the lens of the recent AI letter.
jonstokes.com/p/ai-safety-a-… AI safety is a scissor, but it's a weird sort of scissor.
Mar 28, 2023 4 tweets 2 min read
I am publicly in favor of banning TikTok -- just yeet that brainworms vector into the sun, plz. But this S.686 bill... the cure seems worse than the disease. Hrm. No, it wasn't. Not to pick on this person, but I put the following response out there b/c I'm sure I'll see more of their take.

Multiple things can be true at once:
1. TikTok in its US incarnation is a psyop platform for transmitting brainworms
2. Not letting communists &… twitter.com/i/web/status/1…
Mar 27, 2023 5 tweets 2 min read
This paper essentially reinforces the point I made in this piece, namely that to put checks on AI you need total, global tyranny: jonstokes.com/p/heres-what-i…

If you don't believe me, just download the paper and read it for yourself. I think it's inherent in the architecture of networks that for every threat, there are only two categories of responses that we can eventually converge on:
1. Maximally centralized
2. Maximally decentralized

If you're optimizing for cost + speed, you end up at #1.
Mar 27, 2023 8 tweets 3 min read
This is what I got from MJ v5 with the prompt "Jon Stokes". Weird. This is what I get for "Jon Stokes, cofounder of Ars Technica." Closer, I guess! But I am pretty sure I could beat all these dudes up & give them a swirly & take their lunch money.