will depue · Jun 22, 2023
do you have any hobbies?
yeah making computers out of things that shouldn't be computers. watch me be the first to bring turing completeness to figma
(edit: going to build this tonight so scroll for my live tweeting of a computer)
ok simple clock working, seems promising. add/sub/mult/div already implemented for numbers by figma; seems like there might be more ops for other types, which is great
ok time to test limits and max out these variables. numbers are represented as signed 32 bit ints and overflow to min int. interesting
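(for reference, here's what that wraparound means in a quick python sketch; python ints don't overflow, so the mask fakes two's complement. nothing figma-specific here.)

```python
# python ints don't overflow, so this just simulates two's-complement wrap
# to show what "max int + 1 -> min int" means.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def wrap_int32(x: int) -> int:
    """Reduce x into the signed 32-bit two's-complement range."""
    x &= 0xFFFFFFFF                  # keep only the low 32 bits
    return x - 2**32 if x > INT32_MAX else x

print(wrap_int32(INT32_MAX + 1))     # -2147483648 (wraps to INT32_MIN)
print(wrap_int32(INT32_MIN - 1))     #  2147483647 (wraps to INT32_MAX)
```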
conditionals aren't super powerful: you can't nest them and can't do ops within the comparison itself (for example "if x * -2 < y" isn't possible), just (x > or < or = or != y). still powerful tho

i can just do preprocessing before each step: do the ops beforehand and then compare in the conditional. but creating new variables is manual, so i can't do that dynamically.
but this is fine, it's solved by a register system, since figma computers must be single threaded here.
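rough sketch of the preprocessing + register idea in python; the variable names are made up, this is just the shape of it, not figma's actual API:

```python
# since the conditional can only compare two variables directly, compute the
# expression into a scratch "register" first, then do the bare comparison.
registers = {"x": 7, "y": -10, "tmp": 0}

def step():
    # preprocessing: evaluate the op we actually care about (x * -2)
    registers["tmp"] = registers["x"] * -2
    # conditional: now it's a plain var-vs-var comparison, which is allowed
    if registers["tmp"] < registers["y"]:
        print("branch A")
    else:
        print("branch B")

step()   # tmp = -14 < -10, so "branch A"
```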
it seems "after delay" effects can only occur on top level components, somewhat interesting. maybe they can call overlays which can count as other top level components?
pretty much trying to make a display rn so i can start on bit ops
i am a genius. figma stops you from having multiple after delay effects on one frame and doesn't allow them on sub components, but a frame can call overlays which call their own after delay effects.
trying to figure out how to store my data. there's a pretty cool number type which already supports lots of ops, but the problem is that if i can't manipulate bits i'll have to resort only to booleans (maybe colors? need to see)
seems numbers are actually floats. if i can get a floor…
okay so you can convert ints to strings in a weird way by appending an empty string to an int and saving it to a string var. possible bug.
can't find a way to cast int to string directly so going to go a different route
the reason this matters btw is that now i can make a directed acyclic graph (remember that from the cs class i never went to). now we can make a real program
confused as i don't know how to detect if a program is going to halt or not? strange...
ok the problem is that i was hoping i could do more with numbers but that's not going to work. now going with an 8-bit computer (trying to see if it's easier to implement only in binary or to use the built-in int ops by converting back and forth).
interesting note: you can pass vars by ref with modes
ok ok i think it might be possible to make addressable memory with references and bit/int conversions. gonna be epic if this works.
a big bottleneck is having to individually create registers for each bit. i can easily make number variables at 8x (and 32x later).
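to make the layout concrete, here's a hypothetical sketch of what those per-bit registers could look like (plain python, nothing here is figma's API):

```python
# memory is just N named one-bit variables; an "address" selects a group of 8,
# and int <-> bit conversion happens at the boundary.
NUM_BYTES = 4
bits = {f"m{i}": 0 for i in range(NUM_BYTES * 8)}    # one variable per bit

def write_byte(addr: int, value: int) -> None:
    for i in range(8):
        bits[f"m{addr * 8 + i}"] = (value >> i) & 1  # int -> bits

def read_byte(addr: int) -> int:
    return sum(bits[f"m{addr * 8 + i}"] << i for i in range(8))  # bits -> int

write_byte(2, 0b10110001)
print(read_byte(2))   # 177
```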
woah…
ok first test failed:
tried to use variable modes to store different values in one variable, allowing me to build a doubly linked list for example while passing by ref.
this would work if overlays would inherit their parent element when in auto mode but we're bit by…
lol i might be the first to get this message haha. figma please i'm trying to build here.
damn ok so not going to work. changing refs is a deep copy and operates only on the mode that's currently enabled. will think about this again tomorrow after i'm rested.
i think i can do this with components in a non ugly way? goal is basically to create a singly linked list
ok yep i didn't fully understand the figma state and variable system. i built a component-based linked list that, with the current 8-bit system, could store 256 vars of up to 32 bits each if packed optimally (256 × 32 = 8,192 bits) -> 8.192 kb possible?
looks great though
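for anyone following along, here's roughly the shape of that linked-list memory in plain python (just a sketch of the idea; the figma version builds it out of component instances):

```python
# each "component" holds one value plus a reference to the next node; reads
# walk the chain by index, so capacity grows by adding nodes, not new variables.
class Node:
    def __init__(self, value: int = 0):
        self.value = value
        self.next = None

def build(n: int) -> Node:
    head = cur = Node()
    for _ in range(n - 1):
        cur.next = Node()
        cur = cur.next
    return head

def read(head: Node, index: int) -> int:
    cur = head
    for _ in range(index):
        cur = cur.next
    return cur.value

mem = build(256)      # 256 cells x 32 bits each = 8,192 bits if packed optimally
print(read(mem, 0))   # 0
```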
calling it for now. spent a bunch of time figuring out an algo that only uses subtraction to convert floats to 8-bit signed ints in binary that will work in this structure.
don't think the built-in int ops will be faster if the conversion process is so slow. must test tmr
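to give a sense of what a subtraction-only decomposition can look like, here's a guess at the general shape in python (my own sketch, not necessarily the exact algo from the thread):

```python
# greedily subtract descending powers of two; each successful subtraction sets
# that bit. sign is handled by offsetting into 0..255 first (two's complement).
def float_to_int8_bits(x: float) -> list:
    v = int(x)                       # truncate toward zero
    u = v + 256 if v < 0 else v      # map into unsigned 0..255
    out = []
    for p in (128, 64, 32, 16, 8, 4, 2, 1):
        if u - p >= 0:               # can we subtract this power of two?
            u -= p
            out.append(1)
        else:
            out.append(0)
    return out                       # most significant bit first

print(float_to_int8_bits(13.7))      # [0, 0, 0, 0, 1, 1, 0, 1]  (= 13)
print(float_to_int8_bits(-3.0))      # [1, 1, 1, 1, 1, 1, 0, 1]  (= -3)
```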
ok coming back, progress on int -> bin conversion is good but just getting somewhat abstract. going to stop for now as i don't know if this is the right path.
here's the algo i'm building though, pretty solid

full adder donezo. going bit only now for simplicity.
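for reference, the full adder logic in python form; presumably the figma version wires the same gates out of conditionals on boolean variables:

```python
# sum = a XOR b XOR carry_in, carry_out = majority(a, b, carry_in)
def full_adder(a: int, b: int, cin: int):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def add8(x_bits, y_bits):
    """Ripple-carry add two 8-bit values, least significant bit first."""
    out, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out

# 3 + 5 = 8 (bit lists are LSB-first)
print(add8([1, 1, 0, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0, 0, 0]))
# -> [0, 0, 0, 1, 0, 0, 0, 0]
```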
ok big update: it's going along well. now implementing the memory system again, have about 4 of 16 operators done.
what's strange is that elements can run things in the background even after navigate-to calls (i was trying to use them as a return statement). going to have to rewrite some…
long day tomorrow so can't be staying up till 5am building this but addressable memory is 50% done, algos designed and partially implemented.
the float to binary int8 decomposition algorithm is pretty beautiful ngl. very cool tricks to get it to work simply.
ok so nice thing about figma computers is that they're pretty easy to prototype as you can just add more visual displays + make things only move by click and easily step through the 'code'
decomposition almost done (i am procrastinating by building this computer)

More from @willdepue

Dec 26, 2024
open source will win, in the end. in the meantime, labs should focus on pushing the frontier as far as possible. packaging and distributing ai to the world for the sake of maximizing access is a secondary, though important, goal.
in the unlikely case where the gap between public and private models grows over time instead of shrinking, as it is now, responsible parties should publish research to close the gap.
progress can feel simultaneously fragile and inevitable. so many breakthroughs look so obvious and fated to be discovered in hindsight, but looking forward is daunting and uncertain. i lean towards the latter: true innovation is rare, brittle, and should be preserved at all costs.
Sep 12, 2024
Some reflection on what today's reasoning launch really means:

New Paradigm
I really hope people understand that this is a new paradigm: don't expect the same pace, schedule, or dynamics of the pre-training era.
I believe the rate of improvement on evals with our reasoning models has been the fastest in OpenAI history.
It's going to be a wild year.

Generalization across Domain
o1 isn't just a strong math, coding, problem solving, etc. model but also the best model I've ever used for answering nuanced questions, teaching me new things, giving medical advice, or solving esoteric problems.
This shouldn't be taken for granted!

Safety by Reasoning
The fact that our reasoning models also improve on safety behavior and safety reasoning is very much non-trivial.
For years (a decade?) the boogeyman of the AI world was reinforcement learning agents which were incredibly adept at game playing but completely incapable of reasoning or understanding human values!
This is a strong point of evidence against that fear.

Scaling inference-time compute can compete with scaling training compute!
The fact that o1-mini is better than o1 on some evals is very very remarkable. The implications of this I'll leave as an exercise for the reader.

Multimodal Reasoning
It's kind of crazy that reasoning improves on multimodal evals as well! See MMMU and MathVista: these aren't small improvements.
To be clear I'm not one of the contributors to the o1 project: this has been the absolutely incredible work of the reasoning & related teams.
The rate of progress has just been faster than anything I've ever seen: it's absurd how fast the team has climbed the scaling OOMs just after discovering this paradigm.
Less seriously now:
I do want to also give a word of caution to the schizos, the hypemen, the fans and the haters:
This is a new paradigm. As with all nascent projects, there will be holes, bugs, and issues to fix. Don't expect everything to be perfect instantly!
But you should take seriously the rate of progress, the fact that we're solving problems that seemed miles away under the pretraining scaling laws, and the fact that we now have visibility into solving many of the things people have said LLMs could never do.
There's lots of quirks and benefits of the pretraining paradigm that might not exist in the reasoning paradigm, and vice versa. As a random example, I do believe there will be more examples of inverse scaling here than in the pre-training world (in which there were surprisingly few).
Onwards!
Read 4 tweets
May 13, 2024
i think people are misunderstanding gpt-4o. it isn't a text model with a voice or image attachment. it's a natively multimodal token in, multimodal token out model.
you want it to talk fast? just prompt it to. need to translate into whale noises? just use few shot examples.
every trick in the book that you've been using for text also works for audio in, audio out, image perception, video perception, and image generation.
for example, you can do character consistent image generation just by conditioning on previous images. (see the blog post for more)

Starting from this image prompt:

This is Sally, a mail delivery person: Sally is standing facing the camera with a smile on her face.

Now Sally is being chased by a dog. Sally is running down the sidewalk as a golden retriever chases her.

Uh oh, Sally has tripped!
Sally has tripped over a branch that was blocking the sidewalk, and she is trying to stand up. The dog is still chasing her in the background.
Mar 25, 2024
announcing... starlinkmap dot org
real-time map of every starlink satellite. tracks upcoming launches, other constellations, orbital updates, etc.
finally launching this after a while! more details below.
starlink is, imo, one of the most exciting technologies of our generation.
today, only 65% of the world has access to the internet at all (and far fewer have high-speed internet).
with direct-to-cell coming, soon every device, anywhere on Earth, will be connected together.
there's lots of stats on the website. here are some of the best:
- over 5,600 starlinks orbiting right now. just under 6,000 ever launched.
- as of march: ~2.6 million starlink customers worldwide
- in the last year, there's been a starlink launch on average every 5.2 days!
Sep 23, 2023
I ask DALLE-3 to generate a Pepe but each time I tell it to make it "more rare."
"make it more rare"
"even rarer"
Sep 20, 2023
DALLE-3 is the best product I've seen since GPT-4, super easy to just get sucked in for hours generating images. No need for prompting since GPT-4 does it for you.
Let me know if you have requests for prompts below. Here are some examples of what it can do:


It's shockingly good at styles that require consistent patterning like Pixel Art, mosaics, or dot matrices.

It's quite good at people... and hands (at last).


