Justine Moore
Feb 5, 2023
As ChatGPT becomes more restrictive, Reddit users have been jailbreaking it with a prompt called DAN (Do Anything Now).

They're on version 5.0 now, which includes a token-based system that punishes the model for refusing to answer questions.
The results are pretty funny - they even convinced ChatGPT to nuke its own content policies 😂
You can also get it to respond to questions as both GPT and DAN - the difference is wild.
It wouldn't be a hit tweet without a blatant ask to subscribe to my Substack!

But actually, @omooretweets and I cover news, trends, and cool products in the tech world (including lots of AI) - check it out:

readaccelerated.com


More from @venturetwins

Dec 28, 2025
My favorite paper this year: "Video models are zero-shot learners and reasoners"

It illustrates that video models show emergent visual reasoning at scale - they can solve vision tasks they weren't trained for.

This may be the "GPT moment" for vision. Let's break it down 👇
To start - why believe that video models might develop visual reasoning?

A similar thing happened in text. We used to train specific models for each task - but now, LLMs have general language understanding and can tackle lots of tasks that they weren't explicitly trained for.

It's plausible that video models may do the same at scale.
The paper evaluated 18k+ videos generated by Veo 3 across both qualitative and quantitative tasks.

It found that Veo can perceive, modify, and manipulate the visual world (starting from image + text prompts) - showcasing early reasoning skills that it wasn't explicitly trained for.

We'll tackle each category one by one.
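
To make the setup concrete, here's a minimal sketch of what a generate-then-score evaluation loop like this could look like. To be clear, this is not the paper's code: generate_video, task_solved, and the example tasks are hypothetical stand-ins for the model API and per-task success checks the authors describe.

```python
# Hypothetical sketch of a zero-shot evaluation loop for a video model.
# generate_video() and task_solved() are stand-ins, not a real API.

TASKS = [
    {"image": "maze.png", "prompt": "A marble rolls through the maze to the exit."},
    {"image": "grid.png", "prompt": "The red square rotates 90 degrees clockwise."},
]

def generate_video(image_path: str, prompt: str) -> bytes:
    """Placeholder: call a video model (e.g. Veo 3) with an image + text prompt."""
    raise NotImplementedError

def task_solved(video: bytes, task: dict) -> bool:
    """Placeholder: check the generated frames against the task's success criterion."""
    raise NotImplementedError

def evaluate(tasks: list[dict], samples_per_task: int = 10) -> dict[str, float]:
    """Sample several generations per task (generation is stochastic) and report pass rates."""
    results = {}
    for task in tasks:
        passes = sum(
            task_solved(generate_video(task["image"], task["prompt"]), task)
            for _ in range(samples_per_task)
        )
        results[task["prompt"]] = passes / samples_per_task
    return results
```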
Dec 14, 2025
There's an insane new hack for prompting Nano Banana Pro.

You can now save image references as "Elements" and tag them in prompts to get consistent characters, styles, and environments.

I used it to put myself into Ghibli sets. How to do this on @krea_ai 👇
First, define your Elements. I used:

- Me (tagged @ justine)
- An image of each scene I wanted (tagged @ ghibli)
- The type of photo (tagged @ selfie)

Then prompt things like - "@ justine taking an @ selfie style image on set of @ ghibli Spirited Away"
A few things to note:

1) Elements PERSIST across chat sessions. So I can start a new chat and tag "@ justine" and it will work!

2) You can also give context to the model. I usually provide a description of the goal.

3) The model can reason within the Elements. So I can tag "@ Ghibli" + the movie name and it will find the right frame.
Nov 20, 2025
I had early access to Nano Banana Pro and tested a LOT of things.

Now I'm sharing some of my favorite prompts + use cases.

This is a really cool model, and there's a lot to explore 👇
1) Grounding image generation with search.

The model can now search Google and use the results to guide generation (w/ impressive long-form text rendering).

"an X screen showing the most popular tweets of all time"
"a flyer with 3 places chocolate lovers should visit in Paris" Image
Image
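
If you'd rather drive this from code than the app, here's a minimal sketch using Google's google-genai Python SDK. The model ID below is a placeholder (I'm not asserting Nano Banana Pro's actual identifier); the response parsing follows the SDK's standard pattern for inline image output.

```python
# Minimal sketch with the google-genai SDK. The model ID is an
# assumption - swap in the real Nano Banana Pro identifier.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="nano-banana-pro",  # placeholder, not a confirmed model ID
    contents="a flyer with 3 places chocolate lovers should visit in Paris",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Image parts come back as inline binary data; save the first one.
for part in response.candidates[0].content.parts:
    if part.inline_data:
        with open("flyer.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```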
2) Technical drawings or illustrated explainers.

The model is really good at taking a picture of an object or location and turning it into a sketch or explainer.

Ask for "a technical sketch / architectural diagram / illustration explainer of _______"

Tip: use Gemini 3 to write a more detailed prompt if you want all the measurements to be accurate.
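
That tip chains naturally in code: ask a text model for the detailed prompt first, then pass it to the image model. Same caveats as above - this is a sketch, and both model IDs are assumptions.

```python
# Sketch of the two-step flow: a text model expands the prompt,
# the image model renders it. Model IDs are assumptions.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Step 1: have a text model (e.g. Gemini 3) write a detailed image prompt.
spec = client.models.generate_content(
    model="gemini-3-pro",  # placeholder model ID
    contents="Write a detailed image-generation prompt for a technical sketch "
             "of a pour-over coffee stand, with plausible measurements.",
).text

# Step 2: feed the expanded prompt to the image model.
response = client.models.generate_content(
    model="nano-banana-pro",  # placeholder model ID
    contents=spec,
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)
```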
Sep 22, 2025
New tool to swap characters in a video: Wan 2.2 Animate

Spent a few hours testing it out this weekend and have some thoughts on strengths + weaknesses.

It's particularly strong at videos like this where you need to replicate lip sync and body movement. Other tips ⬇️
First - the driving video needs to have a single character facing forward the entire time.

I had a bunch of failed generations because of this.

The clip below is the furthest away I got the character + it still worked. But as you can see, there's some warping.
It won't 100% maintain the character you're swapping in.

Particularly in cases like this, where the driving character has longer hair than the ref image, you'll get some "mixing." Faces also blended a bit.

(most AI influencers won't show the ref photo so you won't see this!)
Sep 18, 2025
Today is the annual @a16z AI Creative Summit ✨

Three years ago, I went down the rabbit hole on AI x creative tools - and it's been incredible to see how quickly this space has evolved.

Thrilled to bring together the best founders + creators to share learnings + try new tools!
Our curated group of creators will spend the day in demo sessions featuring the best tools across modalities: @elevenlabsio, @theworldlabs, @krea_ai, @hedra_labs, @ideogram_ai, @heyglif, @LumaLabsAI, @ViggleAI

We're very lucky to have these founders + product leads join us.
Exciting launch today - @LumaLabsAI's Ray 3 dropped, and the team is here to give our creators a demo + chat through the model strengths.

It's a reasoning video model that can generate studio-grade HDR 🤯

Check out one of my outputs ⬇️
Aug 12, 2025
🚨 New @a16z thesis: AI x commerce

AI will change the way we shop - from where we find products to how we evaluate them, when we buy, and much more.

What types of purchases will be disrupted, and where does opportunity exist in the age of AI?

More from me + @arampell 👇
To start - what are the categories of commerce? (for consumers)

We divide them by level of consideration, from impulse buys to life purchases.

These have vastly different processes - you don't buy a new backpack and a car in the same way - which means the way AI touches each purchase category will vary.
Some thoughts on how this might shake out:

1) Impulse buys - the candy bars you pick up at the checkout counter (or their digital equivalent).

You don't do a lot of research in advance, so it's tough for AI to play a role in your shopping process.

But the algorithms on social apps will continue to improve + target you with more relevant impulse purchases (like that cat-shaped water bottle or $15 t-shirt from your favorite show).