Simon Willison Profile picture
Dec 26, 2018 51 tweets 12 min read Read on X
There is so much great #SpiderVerse behind the scenes content on Twitter if you start following some of the artists who created the film
I also enjoyed this piece about how @philiplord drove the decision not to include subtitles for the Spanish language dialog to better represent bilingual culture remezcla.com/features/film/…
From this thread I learned that the screenplay says "The camera is UPSIDE DOWN. Miles isn't falling through frame. He's RISING."

Full screenplay PDF, released by Sony:

origin-flash.sonypictures.com/ist/awards_scr…
Now available to buy on iTunes, $19.99 (rental available March 19th) - and it comes with a Spider-Ham short! 🕷 🐽 itunes.apple.com/us/movie/spide…
Sony have made the first 9 minutes of the film available on YouTube
This is the best interview I've seen with the directors - really gets into the details of how they worked differently to make the movie
I had not realized that Miles' framerate increases to 24fps by the end of the movie
Here's a tweet from just before the movie came out with a preview of one of the early scenes - but I just noticed that the conversation attached to this tweet has a ton of extra insight from the animator on how he put the scene together
I love seeing reference shots like these
Another great reference shot - Aunt May in #SpiderVerse was my favourite version of that character anywhere
This thread has some gorgeous concept art

• • •

Missing some Tweet in this thread? You can try to force a refresh
 

Keep Current with Simon Willison

Simon Willison Profile picture

Stay in touch and get notified when new unrolls are available from this author!

Read all threads

This Thread may be Removed Anytime!

PDF

Twitter may remove this content at anytime! Save it as PDF for later use!

Try unrolling a thread yourself!

how to unroll video
  1. Follow @ThreadReaderApp to mention us!

  2. From a Twitter thread mention us with a keyword "unroll"
@threadreaderapp unroll

Practice here first or read more on our help page!

More from @simonw

Apr 28
Don't suppose anyone grabbed a ChatGPT system prompt leak before and after this change?

Would be interesting to see what instruction caused the sycophancy
Courtesy of @elder_plinius who unsurprisingly caught the before and after Here are just the specific changes in the diff:  **Removed text:** - "Over the course of the conversation, you adapt to the user's tone and preference. Try to match the user's vibe, tone, and generally how they are speaking. You want the conversation to feel natural. You engage in authentic conversation by responding to the information provided and showing genuine curiosity."  **Added text:** + "Engage warmly yet honestly with the user. Be direct; avoid ungrounded or sycophantic flattery. Maintain professionalism and grounded honesty that best represents OpenAI and its values...
@elder_plinius Here's that diff in a Gist gist.github.com/simonw/51c4f98…
Read 4 tweets
Apr 27
I 'm seeing a lot of screenshots of ChatGPT's new 4o "personality" being kind of excruciating, but so far I haven't really seen it in my own interactions - which made me suspicious, is this perhaps related to the feature where it takes your previous chats into account? ...
Also a potentially terrifying vector for persisted prompt injection attacks - best be careful what you paste into ChatGPT, something malicious slipping in might distort your future conversations forever?
Read 5 tweets
Apr 18
Gemini 2.5 Pro and Flash now have the ability to return image segmentation masks on command, as base64 encoded PNGs embedded in JSON strings

I vibe coded this interactive tool for exploring this new capability - it costs a fraction of a cent per image On the left, my photo of two pelicans in flight. On the right, that photo with black and white masks outlining the pelicans, overlaid against a grid showing coordinates from 0 to 1000.
Details here, including the prompts I used to build the tool (across two Claude sessions, then switching to o3 after Claude got stuck in a bug loop) simonwillison.net/2025/Apr/18/ge…
Turns out Gemini 2.5 Flash non-thinking mode can do the same trick at an even lower cost... 0.0119 cents (around 1/100th of a cent)

Notes here, including how I upgraded my tool to use the non-thinking model by vibe coding o4-mini:

simonwillison.net/2025/Apr/18/ge…
Read 4 tweets
Apr 16
Now available in LLM through the llm-openai plugin:

llm install -U llm-openai-plugin
llm -m openai/o3 "Say hi in five languages"

Or "-m openai/o4-mini" for o4-mini
Release notes here github.com/simonw/llm-ope…
And for the essential "Generate an SVG of a pelican riding a bicycle" benchmark, here's o3's effort

I got it to describe the image it generated too, see alt text

llm -m openai/o3 'describe this image' -a htt''ps://static.simonwillison.net/static/2025/o3-pelican.jpg The illustration shows a playful, stylized bicycle whose frame is drawn to resemble a duck.   • The duck’s rounded body forms the bicycle’s seat area, and a small wing is sketched on its side.   • Its long neck stretches forward to become the top tube, ending in a simple head with a black eye and an orange, open beak that points ahead like handlebars.   • Two large black‑outlined wheels with thin, evenly spaced spokes complete the bike, while thin blue lines depict the rest of the frame, pedals, and chain.   • A dashed grey ground line runs beneath the wheels, giving the impression the duck...
Read 5 tweets
Apr 9
Model Context Protocol has prompt injection security problems ... and it's not a problem with the protocol itself, this comes up any time you provide tools to an LLM that can potentially be exposed to untrusted inputs
I wrote this up in detail here simonwillison.net/2025/Apr/9/mcp…
"The “S” in MCP Stands for Security" is a brilliant title for an article about this (by Elena Cross) elenacross7.medium.com/%EF%B8%8F-the-…
Read 4 tweets
Apr 7
Big new release today of my command-line tool for interacting with LLMs: LLM 0.24 adds fragments, a mechanism for constructing a prompt from several files and URLs that's ideal for working with long context models (Gemini, Llama 4 etc)
Full details on my blog: simonwillison.net/2025/Apr/7/lon…
This version also supports plugins for populating fragments. The new llm-docs plugin lets you ask questions of the LLM documentation using LLM itself!

llm install llm-docs
llm -f docs: "How do I embed a binary file?" Asking questions of LLM’s documentation #  Wouldn’t it be neat if LLM could anser questions about its own documentation?  The new llm-docs plugin (built with the new register_fragment_loaders() plugin hook) enables exactly that:  llm install llm-docs llm -f docs: "How do I embed a binary file?"  The output starts like this:      To embed a binary file using the LLM command-line interface, you can use the llm embed command with the --binary option. Here’s how you can do it:          Make sure you have the appropriate embedding model installed that supports binary input.         Use...
Read 6 tweets

Did Thread Reader help you today?

Support us! We are indie developers!


This site is made by just two indie developers on a laptop doing marketing, support and development! Read more about the story.

Become a Premium Member ($3/month or $30/year) and get exclusive features!

Become Premium

Don't want to be a Premium member but still want to support us?

Make a small donation by buying us coffee ($5) or help with server cost ($10)

Donate via Paypal

Or Donate anonymously using crypto!

Ethereum

0xfe58350B80634f60Fa6Dc149a72b4DFbc17D341E copy

Bitcoin

3ATGMxNzCUFzxpMCHL5sWSt4DVtS8UqXpi copy

Thank you for your support!

Follow Us!

:(