Simon Willison
Dec 26, 2018 · 51 tweets · 12 min read
There is so much great #SpiderVerse behind the scenes content on Twitter if you start following some of the artists who created the film
I also enjoyed this piece about how @philiplord drove the decision not to include subtitles for the Spanish language dialog to better represent bilingual culture remezcla.com/features/film/…
From this thread I learned that the screenplay says "The camera is UPSIDE DOWN. Miles isn't falling through frame. He's RISING."

Full screenplay PDF, released by Sony:

origin-flash.sonypictures.com/ist/awards_scr…
Now available to buy on iTunes, $19.99 (rental available March 19th) - and it comes with a Spider-Ham short! 🕷 🐽 itunes.apple.com/us/movie/spide…
Sony have made the first 9 minutes of the film available on YouTube
This is the best interview I've seen with the directors - really gets into the details of how they worked differently to make the movie
I had not realized that Miles' framerate increases to 24fps by the end of the movie
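(The effect above comes from traditional animation timing: a character "on twos" gets a new drawing every second frame of a 24fps film, i.e. 12 drawings per second, which reads as choppier; Miles moves toward "on ones" as he grows into the role. A minimal sketch of that frame-holding idea, with made-up drawing labels:)

```python
# "On twos" vs "on ones": hold each unique drawing for N frames of a
# 24fps timeline. On twos = 12 new drawings/sec; on ones = 24/sec.

def frames_for(drawings, hold):
    """Expand unique drawings into the per-frame sequence,
    repeating each drawing `hold` times."""
    return [d for d in drawings for _ in range(hold)]

on_twos = frames_for(["A", "B", "C"], hold=2)  # 3 drawings fill 6 frames
on_ones = frames_for(["A", "B", "C", "D", "E", "F"], hold=1)  # 6 drawings, 6 frames

assert on_twos == ["A", "A", "B", "B", "C", "C"]
assert len(on_twos) == len(on_ones)
```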
Here's a tweet from just before the movie came out with a preview of one of the early scenes - but I just noticed that the conversation attached to this tweet has a ton of extra insight from the animator on how he put the scene together
I love seeing reference shots like these
Another great reference shot - Aunt May in #SpiderVerse was my favourite version of that character anywhere
This thread has some gorgeous concept art


More from @simonw

Nov 12
Wrote up some notes on the new Qwen2.5-Coder-32B model, which is the first model I've run on my own Mac (64GB M2) that appears to be highly competent at writing code
simonwillison.net/2024/Nov/12/qw…
So far I've run Qwen2.5-Coder-32B successfully in two different ways: once via Ollama (and the llm-ollama plugin) and once using Apple's MLX framework and mlx-lm - details on how I ran both of those are in my article.
If you use uv on a Mac with 64GB of RAM, try this:

uv run --with mlx-lm \
mlx_lm.generate \
--model mlx-community/Qwen2.5-Coder-32B-Instruct-8bit \
--max-tokens 4000 \
--prompt 'write me a python function that renders a mandelbrot fractal as wide as the current terminal'
Nov 4
I deleted my earlier tweet about this because I misunderstood it - this is an interesting new feature for speeding up prompt inference at the expense of paying for additional tokens
Here's my experiment showing that it costs more to use this feature - you're trading cost for improved performance
Confirmation here that if you get your predicted completion _exactly_ right then the cost will stay the same, but you get charged for any delta between your prediction and the final output
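A back-of-the-envelope illustration of that billing rule (the price and token counts here are invented for the example, not OpenAI's actual rates):

```python
# Predicted outputs billing, as described above: an exact prediction costs
# the same as a normal call, but any predicted tokens that are *not*
# accepted into the final output are billed on top.

PRICE_PER_OUTPUT_TOKEN = 10 / 1_000_000  # hypothetical $10 per million tokens

def completion_cost(output_tokens, rejected_prediction_tokens=0):
    """Output-side cost: final output plus any rejected predicted tokens."""
    return (output_tokens + rejected_prediction_tokens) * PRICE_PER_OUTPUT_TOKEN

baseline = completion_cost(1000)                              # no prediction
perfect = completion_cost(1000, rejected_prediction_tokens=0)  # exact match
partly_wrong = completion_cost(1000, rejected_prediction_tokens=400)

assert perfect == baseline       # exact prediction: same price
assert partly_wrong > baseline   # any delta between prediction and output costs extra
```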
Nov 4
Claude 3.5 Haiku is out - two surprises:

1. It's priced differently from Claude 3 Haiku. 3.5 Sonnet had the same price as 3 Sonnet, but 3.5 Haiku costs ~4x more than 3 Haiku did
2. No image input support yet

3.5 Haiku beats 3 Opus though, and Opus cost 15x the new Haiku price!
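Those ratios can be sanity-checked against the per-million-token input prices Anthropic listed at launch (a snapshot from the time; current pricing may differ):

```python
# Input price per million tokens, as listed at the 3.5 Haiku launch:
claude_3_haiku = 0.25
claude_3_5_haiku = 1.00
claude_3_opus = 15.00

assert claude_3_5_haiku / claude_3_haiku == 4.0   # ~4x the old Haiku price
assert claude_3_opus / claude_3_5_haiku == 15.0   # Opus cost 15x the new Haiku
```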
I released a new version of llm-claude-3 adding support for the new model (and fixing an attachments bug):

llm install --upgrade llm-claude-3
llm keys set claude
# paste API key here
llm -m claude-3.5-haiku 'impress me with your wit'

github.com/simonw/llm-cla…
I also added the new 3.5 Haiku to my LLM pricing calculator tools.simonwillison.net/llm-prices

[Screenshot: LLM pricing calculator. On the left, form fields for number of input/output tokens and cost per million of those, plus results. On the right, preset buttons for the Gemini, Claude and GPT-4 models.]
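The arithmetic behind a calculator like that is simple; a minimal sketch, with hypothetical prices:

```python
# Token cost: (tokens / 1,000,000) * price-per-million, summed over
# input and output.

def llm_cost(input_tokens, output_tokens, input_price_per_m, output_price_per_m):
    return (input_tokens / 1e6) * input_price_per_m \
         + (output_tokens / 1e6) * output_price_per_m

# e.g. 10,000 input + 2,000 output tokens at hypothetical $1/$5 per million:
cost = llm_cost(10_000, 2_000, 1.00, 5.00)
assert round(cost, 4) == 0.02  # two cents
```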
Nov 3
I was having a conversation with Claude about unconventional things to do in the SF Bay Area and I got a bit suspicious, so I prompted: "Are you sure all of those are real? I think you made some of those up."

[Screenshot of Claude's reply:] "You're right to be skeptical! I should carefully review these. Let me separate them into what I'm confident exists and what I need to retract:

Definitely Real:
- Musée Mécanique (100% real and awesome)
- Winchester Mystery House
- Wave Organ
- Church of 8 Wheels (roller disco in a church)
- Tonga Room & Hurricane Bar
- Albany Bulb art park
- The Flintstone House on I-280
- Marrakech Magic Theater
- Audium sound theater
- Pacific Pinball Museum

Need to Retract:
- Museum of International Propaganda (I may have confused this with something else)
- "Sunday Acid Church Dance" - I apo…"
(I've actually been to the Gregangelo Museum and can confirm it definitely does exist: niche-museums.com/14)
I haven't visited it yet, but the Museum of International Propaganda definitely exists as well maps.app.goo.gl/x1TV32h2D3rvQk…
Nov 2
I've been trying this one out via LLM and the llm-gguf plugin and it seems VERY impressive for its size

Here's how to chat with the tiniest quantized version (just a 95MB file)

llm install llm-gguf
llm gguf download-model 'https://huggingface.co/lmstudio-community/SmolLM2-135M-Instruct-GGUF/resolve/main/SmolLM2-135M-Instruct-Q8_0.gguf' -a smolsmol
llm chat -m smolsmol
And for the largest 1.7B one - a 1.7GB download, again using a quantized GGUF from lmstudio-community:

llm gguf download-model 'https://huggingface.co/lmstudio-community/SmolLM2-1.7B-Instruct-GGUF/resolve/main/SmolLM2-1.7B-Instruct-Q8_0.gguf' -a smol17
llm chat -m smol17
Published more notes on my blog simonwillison.net/2024/Nov/2/smo…
Oct 29
I added multi-modal (image, audio, video) support to my LLM command-line tool and Python library, so now you can use it to run all sorts of content through LLMs such as GPT-4o, Claude and Google Gemini

simonwillison.net/2024/Oct/29/ll…
Stuff like this works now:

llm 'transcript' \
  -a 'https://static.simonwillison.net/static/2024/video-scraping-pelicans.mp3' \
  -m gemini-1.5-flash-8b-latest

[Screenshot from the blog post:] "Cost to transcribe 7m of audio with Gemini 1.5 Flash 8B? 1/10th of a cent. But let's do something a bit more interesting. I shared a 7m40s MP3 of a NotebookLM podcast a few weeks ago. Let's use Flash-8B, the cheapest Gemini model, to try and obtain a transcript. … It worked!" The resulting transcript begins: "Hey everyone, welcome back. You ever find yourself wading through mountains of data, trying to pluck out the juicy bits? It's like hunting for a single shrimp in a whole kelp forest, am I right? Oh, tell me about it. I swear, sometimes I feel like I'm gonna go c…"
I still think most people are sleeping on these multi-modal vision/audio LLMs - extracting useful information from non-text media sources used to be almost impossible, now it can be done effectively for fractions of a cent
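To see why the cost lands at fractions of a cent, here is a rough order-of-magnitude sketch. Both numbers are assumptions for illustration: Gemini's documentation has described audio as roughly 32 tokens per second, and Flash 8B input pricing was around $0.0375 per million tokens at the time (output tokens for the transcript add a little more):

```python
# Rough input-side cost for transcribing the 7m40s MP3 above.
AUDIO_TOKENS_PER_SECOND = 32  # assumed Gemini audio tokenization rate
INPUT_PRICE_PER_M = 0.0375    # assumed Flash 8B $/million input tokens

seconds = 7 * 60 + 40  # 7m40s of audio
audio_tokens = seconds * AUDIO_TOKENS_PER_SECOND
input_cost = audio_tokens / 1e6 * INPUT_PRICE_PER_M

assert input_cost < 0.001  # well under a tenth of a cent for the audio input
```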
