Eric Jang
AI at @1x_tech
Sep 17 8 tweets 3 min read
Over the last few months at @1x_tech we’ve been working on a learned simulator for general-purpose robotics. Here’s a thread of some of the cool learned dynamics, along with failure modes. 1/n

This @1x_tech work was mostly done by @JackMonas and Kevin Zhao, btw. 2/n
Mar 31, 2023 9 tweets 3 min read
In many areas of computer science (cryptography, NP-complete problems), verifying a solution is much easier than generating one. This blog post finds that LLMs (mostly GPT-4) may be capable of self-verifying their solutions.

evjang.com/2023/03/26/sel…

I tweeted about this earlier in the week, but check out the blog post: it explores connections to prior work and the exciting possibilities of making AI systems more logically grounded without human supervision.
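A minimal sketch of the generate-then-verify idea the post explores, assuming a hypothetical llm(prompt) helper that returns a text completion (function names and prompts here are illustrative, not from the blog post):

```python
def llm(prompt: str) -> str:
    # Hypothetical helper: wire up your LLM API of choice here.
    raise NotImplementedError

def solve_with_self_verification(question: str, max_attempts: int = 3) -> str:
    """Generate candidate solutions, keeping the first one the model verifies."""
    answer = ""
    for _ in range(max_attempts):
        answer = llm(f"Solve step by step:\n{question}")
        # The asymmetry: checking a candidate is often an easier task than
        # producing one, so use the model as its own verifier.
        verdict = llm(
            f"Question: {question}\nProposed answer: {answer}\n"
            "Check each step carefully. Reply with exactly one word: VALID or INVALID."
        )
        if verdict.strip().upper() == "VALID":
            return answer
    return answer  # fall back to the last attempt if nothing verified
```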
Mar 26, 2023 11 tweets 6 min read
Instead of hunting for the perfect prompt for an LLM ("let's think step by step"), you can ask LLMs to critique their outputs and immediately fix their own mistakes.

Here's a fun example: I saw @awjuliani's tweet that LLMs cannot generate a non-rhyming poem. Indeed, GPT-4 fails at this even if I ask it to think carefully.
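A sketch of that critique-and-revise loop, reusing the hypothetical llm(prompt) helper from the earlier sketch (the prompts are illustrative, not the exact ones from the thread):

```python
def llm(prompt: str) -> str:
    # Hypothetical helper: wire up your LLM API of choice here.
    raise NotImplementedError

def critique_and_revise(task: str, rounds: int = 2) -> str:
    """Ask the model to critique its own output, then rewrite it using the critique."""
    output = llm(task)
    for _ in range(rounds):
        critique = llm(
            f"Task: {task}\nOutput: {output}\n"
            "List every way this output violates the task's constraints."
        )
        output = llm(
            f"Task: {task}\nOutput: {output}\nCritique: {critique}\n"
            "Rewrite the output so that it fully satisfies the task."
        )
    return output

# e.g. critique_and_revise("Write a four-line poem in which no lines rhyme.")
```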
Mar 23, 2023 4 tweets 3 min read
How it started / How it's going

1xtech.medium.com/1x-raises-23-5…

Also, new humanoid just dropped! @1x_tech has been working on this for a while, since before I joined.

Like many recently announced humanoids, ours also lacks a nose ...
Dec 1, 2022 7 tweets 2 min read
I asked ChatGPT a series of technical questions relating to thermodynamics, and I'm very impressed by how lucid its explanations are. Like a very friendly tutor that doesn't mind stupid questions.

ChatGPT does uncertainty the "right way": none of that epistemic-ensemble nonsense, it just says "I'm uncertain."
Nov 7, 2022 5 tweets 2 min read
Great overview blog post on why transformers need certain optimization tricks that aren't needed by other architectures: borealisai.com/research-blogs…

The blog post mentions some good papers, like
"On Layer Normalization in the Transformer Architecture" (Xiong et al., 2020): arxiv.org/pdf/2002.04745…
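One such trick is where layer normalization sits relative to the residual connection. Here's a rough PyTorch sketch of Post-LN vs. Pre-LN block structure, just to illustrate the difference the Xiong et al. paper studies (this is my own illustrative code, not from either post):

```python
import torch.nn as nn

class Block(nn.Module):
    """One transformer block, showing Post-LN vs. Pre-LN placement."""
    def __init__(self, d_model: int, n_heads: int, pre_ln: bool = True):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.pre_ln = pre_ln

    def forward(self, x):
        if self.pre_ln:
            # Pre-LN: normalize inside the residual branch, so gradients
            # flow through the skip connection unnormalized; this tends to
            # reduce the need for learning-rate warmup.
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.mlp(self.ln2(x))
        else:
            # Post-LN (original Transformer): normalize after the residual
            # add, which makes early training less stable.
            x = self.ln1(x + self.attn(x, x, x, need_weights=False)[0])
            x = self.ln2(x + self.mlp(x))
        return x
```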
Nov 1, 2022 8 tweets 3 min read
I'm somewhat embarrassed to admit this, but I've been wondering about it for some time: why are many regression problems (seemingly) easier to learn when expressed as classification problems?

One nice explanation I've seen, from an optimization standpoint, is that CE gradients don't vanish as you get closer to the target:
jamesmccaffrey.wordpress.com/2013/11/05/why…
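To make the optimization argument concrete for a single sigmoid output σ(z) with target y, compare the gradients with respect to the logit z (standard calculus, spelled out here for reference):

```latex
% Squared error through a sigmoid: the \sigma'(z) factor shrinks the
% gradient whenever the unit saturates, even when \sigma(z) is far from y.
L_{\mathrm{MSE}} = \tfrac{1}{2}\bigl(\sigma(z) - y\bigr)^{2}
\;\Rightarrow\;
\frac{\partial L_{\mathrm{MSE}}}{\partial z}
  = \bigl(\sigma(z) - y\bigr)\,\sigma(z)\bigl(1 - \sigma(z)\bigr)

% Cross-entropy: the \sigma'(z) factor cancels, leaving a gradient
% proportional to the raw error.
L_{\mathrm{CE}} = -\,y \log \sigma(z) - (1 - y)\log\bigl(1 - \sigma(z)\bigr)
\;\Rightarrow\;
\frac{\partial L_{\mathrm{CE}}}{\partial z} = \sigma(z) - y
```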
Aug 31, 2022 4 tweets 2 min read
A potential disruptor to OpenAI / Anthropic / large-scale foundation-model startups would be if people figured out a way to incrementally fork Stable Diffusion, train the models a bit further on single-GPU desktop machines, and then "merge" the distilled compute from other forks.

It's not so black-and-white (OpenAI's edge is not their model but the raw IQ / eng velocity of their team), but it would certainly be hard to consolidate AGI in a world where open source collectively has way more FLOPS.

@colinraffel has written about this: colinraffel.com/talks/cornell2…
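The "merge" step could be as simple as averaging parameters across forks, in the spirit of the model-merging work @colinraffel discusses. A hypothetical PyTorch sketch, glossing over everything that makes this hard in practice:

```python
import torch

def merge_checkpoints(state_dicts, weights=None):
    """Average several fine-tuned forks of the same base model.

    Assumes every fork shares the base architecture, so all state dicts
    have identical keys and tensor shapes. Uniform averaging is the
    simplest scheme; real merging methods are more careful (and this
    glosses over non-float buffers like batch-norm counters).
    """
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# forks = [torch.load(p, map_location="cpu") for p in ("fork_a.pt", "fork_b.pt")]
# model.load_state_dict(merge_checkpoints(forks))
```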
Dec 17, 2021 5 tweets 2 min read
Here is the sequel to "Just ask for Generalization" - in this blog post I argue that Generalization *is* Language, and suggest how we might be able to re-use Language Models as "generalization modules" for non-NLP domains. Check it out!

evjang.com/2021/12/17/lan…

The first post in the series (not required reading): evjang.com/2021/10/23/gen…
Oct 24, 2021 5 tweets 2 min read
It's out! Supervised learning, empirically speaking, seems to be the best "data sponge" for acquiring generalization. What if we made generalization a first-class citizen in algorithmic design, and tailored everything else in service of it?
evjang.com/2021/10/23/gen…

You might even be able to replace many inductive biases in RL theory with sufficient amounts of generalization.
Aug 7, 2020 15 tweets 3 min read
1/n GPT-3 is very expensive to train, costing an estimated $5M (even when you know exactly what to do).

2/n We are so far away from building AGI, but I see a powerful language model like GPT-3 as "table stakes". Language is the substrate of thought, after all.
Mar 12, 2020 13 tweets 3 min read
Take my non-expert guess with a big rock of salt: I predict that within the next month, an RCT study from China or South Korea will show that chloroquine is effective for treating COVID-19, hospitals globally will prescribe chloroquine analogues, and the crisis will be over within 3 months.

Here's a super accessible summary video by pulmonologist & critical-care specialist Roger Seheult, MD. Here's a summary of that summary video 👇
Feb 12, 2020 10 tweets 2 min read
A student emailed me a cool question today: are Normalizing Flows "latent variable" models or "fully observed" models? Here's my take (please correct me if I'm wrong!) 👇

We typically define "fully observed" as modeling data without introducing unobserved local variables, and "latent variable" as introducing some unobserved variables.
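For context, the change-of-variables formula is why a flow's likelihood is exact: z = f(x) is a deterministic, invertible function of the data, so there is no unobserved variable to marginalize over:

```latex
% Exact log-likelihood of an invertible flow z = f(x):
\log p_X(x) = \log p_Z\bigl(f(x)\bigr)
            + \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```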
Apr 21, 2019 9 tweets 2 min read
1/ I'm compiling a list of examples of "using physical phenomena to compute useful stuff". What are some interesting examples? I can think of a few:

2/ DNA computing www2.cs.duke.edu/courses/cps296…
Feb 25, 2019 11 tweets 2 min read
1/ I recently read through the "I Don't Like Notebooks" slides. news.ycombinator.com/item?id=178567… My thoughts:

2/ @joelgrus raises shortcomings of notebook-driven development: things like "Untitled24.ipynb", poor modularity, bad habits, etc.