If you last checked in on AI image makers a month ago & thought "that is a fun toy, but far from useful…" well, in just the last week or so, two of the major AI systems updated.
You can now generate a solid image in one try. For example, “otter on a plane using wifi” 1st try:
This is what you got a month ago with the same prompt. (MidJourney v3 vs. v4)
This is a classic case of disruptive technology, in the original Clay Christensen sense 👇
A less capable technology is improving faster than the stable, dominant technology (human illustration), and it is starting to handle more use cases. Except here it is happening very quickly
Seriously, everyone whose job touches on writing, images, video, or music should realize that the pace of improvement here is very fast &, unlike other areas of AI such as robotics, there are no obvious barriers to improvement.
Also worth looking at the details in the admittedly goofy otter pictures: the lighting looks correct (even streaming through the windows), everything is placed correctly, including the drink, the composition is varied, etc.
And this is without any attempts to refine the prompts.
Some more, again all first attempts with no effort to revise:
🦦 Otters fighting a medieval duel
🦦Otter physicist lamenting the invention of the atomic bomb
🦦Otter inventing the airplane in 1905
🦦Otters playing chess in the fall
(These AIs came out just a few months ago)
AI image generation can now beat the Lovelace Test, a Turing Test for creativity: it challenges AI to equal humans at creative work under constraints.
Illustrating “an otter making pizza in Ancient Rome” in a novel, interesting way & as well as an average human is a clear pass!
And I picked otters randomly for fun
But since some comments point out that nonhuman scenes may be easier, here are some results from the prompt "doctor on a plane using wifi" - we are good at picking out flaws in illustrations of people, but these are impressive & improving fast.
People keep asking what system I was using: it is MidJourney (I mentioned this in the thread)
If you want to try it, you get 25 uses for free & a guide is below. Be sure to add --v 4 at the end of your prompt to use the latest version, which is the one I use throughout the thread.
Here👇 is a thread comparing MidJourney from a month or so ago with MidJourney now. The pace is fast!
If you are trying MidJourney, the way to use the new version is to add --v 4 to the end of your prompt (I have no association with it or any AI company)
Reminder: if you want to use the new MidJourney version 4, rather than the old (from a month ago!) version add “ --v 4” to the end of the prompt. The spaces are vital
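For example, a full prompt in the Discord bot would look something like this (an illustrative sketch reusing one of the otter prompts above, not an official example):
/imagine prompt: otter physicist lamenting the invention of the atomic bomb --v 4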
Interestingly, version 4 "just works," making it easier for everyone but the power users who learned to craft prompts
• • •
The significance of Grok 3, outside of X drama, is that it is the first full model release that we definitely know is at least an order of magnitude larger than GPT-4 class models in training compute, so it will help us understand whether the first scaling law (pre-training) holds up.
It is possible that Gemini 2.0 Pro is a RonnaFLOP* model, but we are only seeing the Pro version, not the full ultra.
* AI trained on 10^27 FLOPs of compute, an order of magnitude more than the GPT-4 level (I have been calling them Gen3 models because it is easier)
And I should also note that everyone now hides their FLOPs used for training (except for Meta) so things are not completely clear.
There is a lot of important stuff in this new paper by Anthropic that shows how people are actually using Claude. 1) The tasks that people are asking AI to do are some of the highest-value (& often intellectually challenging) 2) Adoption is uneven, but it is already high in many fields
This is just based on Claude usage, which is why the adoption-by-field breakdown is less of a big deal (Claude is popular in different fields than ChatGPT) than the breakdowns at the task level, because those represent what people are willing to let AI do for them.
Thoughts on this post: 1) It echoes what we have been hearing from multiple labs about the confidence of scaling up to AGI quickly 2) There is no clear vision of what that world looks like 3) The labs are placing the burden on policymakers to decide what to do with what they make
I wish more AI lab leaders would spell out a vision for the world, one that is clear about what they think life will actually be like for humans living in a world of AGI
Faster science & productivity, good - but what is the experience of a day in the life in the world they want?
To be clear, it is completely possible to tell a very positive vision of the future of humans and AI (heck, just steal from The Culture or The Long Way to a Small, Angry Planet or something), and I think that would actually be a really useful exercise, showing where the labs hope we all go
$500B committed towards AGI, still no articulated vision of what a world with AGI looks like for most people. Even the huge essay by the CEO of Anthropic doesn't paint a vivid picture
For those convinced they are making AGI soon - what does daily life look like 5-10 years later?
Let's leave aside the risk of catastrophe for now.
Assume we get an aligned AGI that supercharges science and we have a healthier, more advanced, safer world. What does that actually mean for most people, what does their life look like in the future? (Hint: UBI is not an answer)
When I say "UBI is not an answer" I mean UBI is a policy decision, it is not a description of what life would be like in a world of highly advanced AI.
And saying "the definition of a singularity means we can't say what comes next" is also just dodging the question of a vision.
New randomized, controlled trial of students using GPT-4 as a tutor in Nigeria. 6 weeks of after-school AI tutoring = 2 years of typical learning gains, outperforming 80% of other educational interventions.
And it helped all students, especially girls who were initially behind
No working paper yet, but the results and experiment are written up here. They used Microsoft Copilot and teachers provided guidance and initial prompts: blogs.worldbank.org/en/education/F…
To make clear the caveats for people who don't read the post: learning gains are measured in Equivalent Years of Schooling, this is a pilot study on narrow topics and they do not have long-term learning measures. And there is no full paper yet (but the team is credible)
Veo 2 prompt: "a distant shot zooms in to reveal a knight wearing a golden helmet, he begins to charge on his zebra, lowering his lance, charging towards a clockwork octopus" (this is one of the initial 4 videos it made)
"an woman with short black hair assembles an impossibly complicated device, close up on her face, she is sweating"
The consistency of small details is really impressive: the shaft of the screw turning at the same speed and direction, hair and sweat, tattoos...
"a man holding the leash of a golden retriever stares mournfully at a fireworks display over his small town of Tudor-style homes, the flashes punctuate the darkness."
All videos are from the first 4 generated for each prompt. I did learn that you can't ask for many cuts or scene changes.