I also enjoyed this piece about how @philiplord drove the decision not to include subtitles for the Spanish-language dialog to better represent bilingual culture remezcla.com/features/film/…
Here's a tweet from just before the movie came out previewing one of the early scenes - but I just noticed that the conversation attached to it has a ton of extra insight from the animator on how he put the scene together
Here's a new term of art I like: asynchronous coding agents, to describe the category of software that includes OpenAI Codex and Gemini Jules - cloud-based tools where you submit a prompt and they check out your repo, iterate on a solution and finish by submitting a pull request
Jules used that term in the headline of their general availability announcement today: "Jules, our asynchronous coding agent, is now available for everyone." blog.google/technology/goo…
LangChain's new Open SWE tool, released this morning (and MIT licensed), calls itself an "asynchronous coding agent" too
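The workflow all three of these share is simple enough to sketch. Here's a rough, purely illustrative outline of the loop in Python - not how Codex, Jules or Open SWE are actually implemented - where run_agent_step stands in for the model-driven edit-and-test iteration:

```python
import subprocess
import tempfile


def run_agent_step(prompt: str, repo_dir: str) -> bool:
    """Hypothetical placeholder: one model-driven edit-and-test iteration.
    Returns True once the agent decides the task is done."""
    raise NotImplementedError


def run_asynchronous_agent(prompt: str, repo_url: str, branch: str = "agent/task") -> None:
    workdir = tempfile.mkdtemp()

    # 1. Check out the repo into an isolated workspace (a cloud VM in practice)
    subprocess.run(["git", "clone", repo_url, workdir], check=True)
    subprocess.run(["git", "checkout", "-b", branch], cwd=workdir, check=True)

    # 2. Iterate on a solution until the agent is satisfied (with a hard cap)
    for _ in range(20):
        if run_agent_step(prompt, workdir):
            break

    # 3. Finish by pushing a branch - opening the actual pull request would go
    #    through the code host's API (GitHub, GitLab etc.), omitted here
    subprocess.run(["git", "add", "-A"], cwd=workdir, check=True)
    subprocess.run(["git", "commit", "-m", f"Agent task: {prompt[:60]}"], cwd=workdir, check=True)
    subprocess.run(["git", "push", "origin", branch], cwd=workdir, check=True)
```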
"Design Patterns for Securing LLM Agents against Prompt Injections" is an excellent new paper that provides six design patterns to help protect LLM tool-using systems (call them "agents" if you like) against prompt injection attacks
I put together an annotated version of the new Claude 4 system prompt, covering both the prompt Anthropic published and the missing, leaked sections (thanks, @elder_plinius) that describe its various tools
It's basically the secret missing manual for Claude 4, it's fascinating!
I just released llm-anthropic 0.16 (and a tool-enabled 0.16a1 alpha) with support for the two new Claude models, Claude Opus 4 and Claude Sonnet 4: simonwillison.net/2025/May/22/ll…
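If you want to call the new models from Python rather than the command line, it looks roughly like this - the model ID here is an assumption on my part, so run llm models to see what the plugin actually registered, and you'll need an Anthropic API key set up first (llm keys set anthropic):

```python
import llm

# Assumes `llm install llm-anthropic` has been run and an Anthropic key is configured
model = llm.get_model("claude-4-opus")  # model ID assumed - confirm with `llm models`

response = model.prompt("Write a haiku about asynchronous coding agents")
print(response.text())
```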
I picked up some more details on Claude 4 from a dive through the Anthropic documentation
The training cut-off date is March 2025! The input limit is still stuck at 200,000 tokens. Unlike 3.7 Sonnet, the thinking trace is now summarized by a separate model.
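To see that summarized thinking for yourself you can request extended thinking via the API and inspect the thinking blocks in the response. Here's a rough sketch using the Anthropic Python SDK - treat the model ID and the exact shape of the thinking parameter as my reading of the docs, to be checked against the current documentation:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # model ID assumed - check the docs
    max_tokens=2000,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "How many Rs are in 'strawberry'?"}],
)

for block in response.content:
    if block.type == "thinking":
        # For Claude 4 this is a summary produced by a separate model,
        # not the raw reasoning trace
        print("THINKING:", block.thinking)
    elif block.type == "text":
        print("ANSWER:", block.text)
```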
I'm seeing a lot of screenshots of ChatGPT's new 4o "personality" being kind of excruciating, but so far I haven't really seen it in my own interactions - which made me suspicious: is this perhaps related to the feature where it takes your previous chats into account? ...
Also a potentially terrifying vector for persisted prompt injection attacks - best be careful what you paste into ChatGPT: something malicious slipping in might distort your future conversations forever?