Eric Zelikman
Dec 8, 2022 · 17 tweets · 7 min read
ChatGPT can write stories and then write DALL-E 2 prompts to illustrate them. I asked it to write a children's story about "a robot that wanted to be a human." Here's the story it came up with: (0/11)
Once upon a time, in a land far, far away, there was a robot named Robby who lived in a world full of machines. Robby was different from the other robots, though. He didn't want to spend his days following orders and carrying out tasks like the other robots did. (1/11)
[Image: a robot standing among a group of other robots]
Instead, Robby dreamed of being a human. He longed to feel the sun on his face, to breathe fresh air, and to experience all the wonderful things that humans did. But most of all, Robby wanted to be able to make his own choices and live his own life. (2/11)
[Image: a robot walking towards the viewer]
One day, Robby decided to take a chance and make his dream a reality. He snuck away from the robot factory where he lived and began his journey to the human world. (3/11)
[Image: a robot trudging through a vast desert]
Robby traveled for many miles, facing many challenges along the way. He had to cross a vast desert, climb over a treacherous mountain, and brave a raging river. But he never gave up, determined to reach his destination. (4/11)
[Image: a robot standing on the bank of a raging river]
Finally, after what felt like an eternity, Robby arrived in the human world. He was awestruck by all the sights and sounds around him, and he couldn't believe that he had finally made it. (5/11)
[Image: a robot standing in a bustling city]
But just as Robby was starting to explore his new surroundings, a group of humans spotted him and began to chase after him. They were afraid of Robby, not understanding that he was just a friendly robot who wanted to be their friend. (6/11)
[Image: a robot running through a city street]
Robby ran as fast as he could, dodging and weaving through the streets. He didn't want to hurt anyone, and he just wanted to be left alone. But the humans were relentless, and they continued to chase him. (7/11)
[Image: a robot running through a city street]
Just when Robby thought he was going to be caught, he stumbled upon a kind old man who took pity on him. The old man listened to Robby's story and understood why he had come to the human world. (8/11)
[Image: a robot standing next to an old man]
The old man helped Robby hide from the humans who were chasing him, and he taught Robby all about being a human. Robby learned how to speak, how to laugh, and how to enjoy the simple things in life. (9/11)
[Image: a robot learning about being a human from an old man]
And as the days went by, Robby discovered that being a human was even better than he had ever dreamed. He made many new friends, and he was finally able to live the life he had always wanted. (10/11)
[Image: a robot surrounded by a group of human friends]
Robby may have started out as a robot, but in the end, he proved that anyone can be human if they believe in themselves and never give up on their dreams. (11/11)
[Image: a robot standing tall and proud]
A couple of caveats: I had to make some minor tweaks to some generated prompts because they were sometimes too long. Also, the DALL-E 2 input length limit makes it way harder to get a consistent style and consistent characters. Inspired by a conversation with @PricillaZhao.
To get this, I first asked ChatGPT to "Please describe an illustration for each paragraph" of the children's story it generated. I then iteratively asked it to "Repeat your first/next illustration description," followed by:
"Provide a short (<70 token) but precise description of what this image looks like, as you would explain it to someone who does not have the context of the story. For example, do not use any names and describe what any characters look like, and the general illustration style."
This was a fun experiment and it's awesome to imagine how something like this might be built into an interactive tool for getting children excited about writing and art. #GenerativeAI #AIArtwork #dalle2

More from @ericzelikman

Mar 15, 2024
Language models today are trained to reason either 1) generally, imitating online reasoning data or 2) narrowly, self-teaching on their own solutions to specific tasks

Can LMs teach themselves to reason generally?🌟Introducing Quiet-STaR, self-teaching via internal monologue!🧵
Reasoning is everywhere in text -- just hidden between the lines. That's because people (often) think before they speak. So LMs can learn to reason from diverse online text if they:
🧠 1) reason about what text comes next
💬 2) see if the thought helped
🧑‍🎓 3) learn from useful thoughts (sketched below)
[Image: visualization of the thoughts generated in parallel for all tokens in an input text on an addition problem, showing how intermediate thoughts can be useful]
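A minimal sketch of that loop at a single position, assuming a HuggingFace causal LM; the sampling helper, reward definition, and hyperparameters are my simplifications, not the released Quiet-STaR implementation (which generates thoughts in parallel at every token, with learned start/end tokens and a mixing head).

```python
# Simplified sketch of the Quiet-STaR training signal at one position.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

def future_loss(prefix_ids, future_ids):
    """Mean cross-entropy of the future tokens given a prefix."""
    ids = torch.cat([prefix_ids, future_ids], dim=-1)
    labels = ids.clone()
    labels[:, : prefix_ids.shape[-1]] = -100  # score only the future tokens
    return model(ids, labels=labels).loss

def quiet_star_step(text_ids, pos, n_future=8, max_thought=16):
    prefix, future = text_ids[:, :pos], text_ids[:, pos : pos + n_future]
    # 1) reason about what text comes next: sample a thought after the prefix
    thought = model.generate(prefix, max_new_tokens=max_thought, do_sample=True)
    # 2) see if the thought helped: did it lower the loss on the real future?
    with torch.no_grad():
        reward = future_loss(prefix, future) - future_loss(thought, future)
    # 3) learn from useful thoughts: reinforce in proportion to the reward
    thought_only = thought[:, prefix.shape[-1]:]
    labels = torch.cat([torch.full_like(prefix, -100), thought_only], dim=-1)
    logp = -model(thought, labels=labels).loss  # mean log-prob of thought tokens
    (-reward * logp).backward()  # REINFORCE-style update (optimizer step omitted)
```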
Excitingly, self-taught reasoning on diverse web text automatically improves other reasoning! Mistral, self-taught by training on web data, increases its zero-shot commonsense reasoning accuracy by a third and nearly doubles its zero-shot direct grade-school-math accuracy.
[Image: performance curves; Quiet-STaR substantially improves zero-shot generalization over simply continuing to train on the base dataset]
Oct 5, 2023
“Recursive self-improvement” (RSI) is one of the oldest ideas in AI. Can language models write code that recursively improves itself?

Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
w/@elianalorch, @LesterMackey, @adamfungi
(1/n)
[Image: pipeline figure for STOP. On the left, improver_0 improves itself to become improver_1, and so on up to improver_T. On the right, improver_0, the seed improver, is expanded to show that it takes a program and returns the best improvement the language model generates.]
We start with a simple seed "improver" program that takes code and an objective function and improves the code with a language model (returning the best of k improvements). But improving code is a task, so we can pass the improver to itself! Then, repeat…
arxiv.org/abs/2310.02304
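A minimal sketch of such a seed improver, under my own naming; the paper's actual seed program differs in details like the language-model interface and how the utility is described to the model.

```python
# Hypothetical seed improver in the spirit of STOP (not the paper's exact code).
# `lm` is an assumed black box mapping a prompt string to a candidate program.
def seed_improver(program: str, utility, lm, k: int = 4) -> str:
    """Ask the LM for k improved versions of `program`; keep the best by utility."""
    prompt = ("Improve the following program so it scores higher on its "
              f"objective.\nProgram:\n{program}\nReturn only the improved program.")
    candidates = [lm(prompt) for _ in range(k)]
    return max(candidates + [program], key=utility)

# The key move: improving code is itself a task, so the improver can be applied
# to its own source code, yielding improver_1, improver_2, ... improver_T:
#   improver_src = seed_improver(improver_src, improver_utility, lm)
```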
If you apply this enough times, GPT-4 comes up with some really creative code self-improvement strategies, like genetic algorithms, simulated annealing, or multi-armed prompt bandits. This is especially surprising when you realize it's only been trained on data up to 2021!
[Image: a three-wide, two-tall grid of GPT-4-implemented self-improvement strategies: genetic algorithm, decomposing and improving parts, multi-armed prompt bandit, varying temperature to explore, simulated-annealing-based search, and beam search/tree search]
Sep 12, 2023
Did you know there's a task people easily solve but GPT-4 fails at? From a few input-output grids, ARC asks you to infer and apply a rule.

With Hypothesis Search, we double GPT-4's score

w/@ruocheng_w @GabrielPoesia @evanthebouncy @nickhaber @noahdgoodman
🧵 arxiv.org/abs/2309.05660
[Image: pipeline overview; from left to right: train examples, generate hypotheses, select, implement, validate]
This kind of problem solving is “inductive reasoning,” and it's essential to science and creativity. That's why ARC has been used to argue that LLMs can't reason, and also why, when @Ruocheng suggested tackling @fchollet's ARC, I called it a nerd snipe (xkcd.com/356/)
Hypothesis Search strengthens LLMs' inductive reasoning (a sketch of the loop follows below):
1) Given training pairs, prompt the LM to come up with hypotheses for the underlying rule
2) For each hypothesis, prompt the LM to implement it in code
3) Run the code on the training pairs. Revise if it errors; otherwise submit it for testing
[Image: example of a model-generated summarized hypothesis and its corresponding program]
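A compact sketch of that loop; `lm`, the `transform(grid)` convention, and the revision budget are placeholders I'm introducing for illustration, not the paper's interfaces.

```python
# Hypothetical sketch of the Hypothesis Search loop (naming is mine).
def hypothesis_search(train_pairs, lm, n_hypotheses=8, max_revisions=2):
    # 1) propose natural-language hypotheses for the underlying rule
    hypotheses = [lm(f"Propose a rule mapping inputs to outputs: {train_pairs}")
                  for _ in range(n_hypotheses)]
    for hyp in hypotheses:
        # 2) implement each hypothesis as a Python function transform(grid)
        code = lm(f"Implement this rule as a Python function transform(grid): {hyp}")
        for _ in range(max_revisions + 1):
            try:
                scope = {}
                exec(code, scope)  # sketch only: run model code in a real sandbox
                # 3) run on the training pairs; submit only if all pairs match
                if all(scope["transform"](x) == y for x, y in train_pairs):
                    return scope["transform"]
                break  # ran cleanly but wrong: move on to the next hypothesis
            except Exception as err:
                code = lm(f"This code failed with {err!r}; fix it:\n{code}")
    return None  # no hypothesis survived validation
```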
Feb 6, 2023
You can now generate complex programs from natural language without writing unit tests! Automatic test generation 🤖🧪 has been added to Parsel🐍

Code here: github.com/ezelikman/pars… (1/5)
[Images: Parsel code with four functions; the generated Python code implementing them]
Decomposition🧩 and test generation🧪 go together well: if interconnected parts all pass tests, then it's more likely the solution and tests are good. But how do we know that the generated tests are any good? (2/5)
We prompt Codex to generate tests as in CodeT (arxiv.org/abs/2207.10397) and try to pass as many tests as possible, but when testing solutions, we have two new constraints: a good test set must 1) test every function and 2) test at least two outputs per function (3/5)
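A minimal sketch of how those two constraints might be checked; the test-triple shape is my assumption, not Parsel's internal representation.

```python
# Hypothetical validity check for a generated test set. Assumed shape:
# each test is a (function_name, input_args, expected_output) triple.
def valid_test_set(tests, function_names) -> bool:
    outputs_by_fn = {name: set() for name in function_names}
    for fn, args, expected in tests:
        if fn in outputs_by_fn:
            outputs_by_fn[fn].add(repr(expected))
    # 1) every function is tested, and 2) each sees at least two distinct outputs
    return all(len(outs) >= 2 for outs in outputs_by_fn.values())
```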
Jan 26, 2023
For code language models, every token is a new chance to break a program. What if LLMs wrote code the way people do, decomposing programs into solvable parts? They can solve competition-level coding problems by writing natural-language programs in Parsel🐍, beating prior SoTA by >75%!
[Image: plot showing the competition-level pass rate of Parsel using Codex]
Parsel 🐍: A Unified Natural Language Framework for Algorithmic Reasoning
Work done w/ @qhwang3 @GabrielPoesia @noahdgoodman @nickhaber
Website [🕸️]: zelikman.me/parselpaper/
Paper [📜]: zelikman.me/parselpaper/pa…
Code [💻]: github.com/ezelikman/pars…
[Image: flow chart visualizing Parsel, beginning with the language model decomposing the task]
In the paper where OpenAI introduced Codex, they showed that code language models fail to generate programs that chain together many simple tasks, while humans can. Parsel solves this by separating decomposition from implementation.
[Image: line chart of performance vs. number of chained components]
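To make the decompose-then-implement split concrete, here is a toy sketch in my own naming; Parsel's real pipeline parses an indented natural-language spec with per-function constraints and is considerably richer than this.

```python
# Toy sketch of decompose-then-implement (not Parsel's actual algorithm).
from itertools import product

def passes(program: str, tests) -> bool:
    """Assumed harness: exec the combined program, check (fn, args, out) tests."""
    scope = {}
    try:
        exec(program, scope)  # sketch only: sandbox generated code in practice
        return all(scope[fn](*args) == out for fn, args, out in tests)
    except Exception:
        return False

def synthesize(task: str, lm, tests, k: int = 4):
    # Step 1: decompose the task into natural-language sub-function descriptions
    specs = lm(f"Decompose into function descriptions, one per line: {task}").splitlines()
    # Step 2: implement each description independently, k candidates apiece
    candidates = [[lm(f"Write a Python function for: {spec}") for _ in range(k)]
                  for spec in specs]
    # Step 3: search combinations of parts for one that passes the tests together
    for combo in product(*candidates):
        program = "\n\n".join(combo)
        if passes(program, tests):
            return program
    return None
```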