Jim Fan · Apr 3, 2023 · 12 tweets
Just got access to Adobe Firefly! How does the world's leading creative tool maker fare against MidJourney, a self-funded 11-person team?

Let's check it out. Left is Firefly, right is MidJourney V5. The full prompt is in the "ALT" button in the lower-left corner of each image.

Deadpool posing on a car. 1/🧵

Full prompt (ALT): "Deadpool wide angle pose on top of a car outside an apartment…"
MidJourney V5 image credited to @LinusEkenstam
Super Mario in a dim lit street with a big reflection in a puddle. Firefly's interpretation of "Super Mario" is ... exotic (?) 😅

Prompt and image credits to @LinusEkenstam @vitomotiv.

2/ Full prompt (ALT): "A photograph capturing Super Mario in a pose in a dim lit st…"
Same prompt as above but for Pikachu. Again, somehow Firefly does not fully get these famous characters. Maybe a training data copyright issue?

Prompt and MJ image credits to @LinusEkenstam @vitomotiv.

3/ Full prompt (ALT): "A photograph capturing Pikachu in a dim lit street and a big…"
Next, who is the better portrait photographer?

Photo of a large crowd of commuters in Tokyo, sharply focused faces, but it's the woman in red that commands your attention. Warm glow, elegance.

Prompt & MJ image credit: @nickfloats

4/ Full prompt (ALT): "Modern street style photo from above shot on Fujifilm captur…"
How about some sci-fi?

Abstract fractal circular mosaic city architecture.

Prompt & MJ image credit: @chetbff @BambuuArt

5/ Full prompt (ALT): "Abstract Fractal circular mosaic city architecture made of m…"
Now let's do some mobile app icon design. Does Firefly even know what an app icon is?

iOS app icon, Sci-fi planet landscape with skeuomorphic style.

Prompt & MJ image credit: @followmarcos

6/ Full prompt (ALT): "App Icon Design: iOS, Sci-fi planet landscape with skeuomorp…"
The "human finger" test is becoming the new visual Turing Test. It's the final moat that Diffusion needs to conquer to become truly sentient 🤣.

A stunning young Jamaican woman wearing white retrofuturistic sequin Gucci gown, standing in the desert.

Credit: @nickfloats

7/ Full prompt (ALT): "editorial style photo, medium-full shot, afga vista film sti…"
Finally, a landscape photo. It turns out to be an easy task at which both Firefly and MJ excel.

Red Ferrari F40 in Dandelions at the Lake Seealpsee.

Prompt & MJ image credit: @heyBarsee

8/ Full prompt (ALT): "Red Ferrari F40 in Dandelions at the Lake Seealpsee, shot wi…"
Note: these prompts are heavily optimized for MidJourney, so that may give it an unfair advantage. However, I did try a few variations but still couldn't get better results. I'm not a prompt ninja, so your mileage may vary.

Still, I'm grateful for Adobe's early beta access! /🧵
Note 2: Firefly is only trained on Adobe Stock and fully licensed images. The data curation is very conservative, which may cripple its performance.

I also included examples without copyrighted characters in the thread.
Note 3: Adobe research scientist @vdeschaintre has a good point: Firefly's approach may be a significant plus for companies that must ensure the IP/copyright status of the output image. They may be more than willing to sacrifice quality for legality, which makes MJ a less appealing option.
Thanks for all your feedback. I wrote a summary note to give Firefly's approach fair and proper credit.

More from @DrJimFan

Nov 1, 2024
I don’t know if we live in a Matrix, but I know for sure that robots will spend most of their lives in simulation. Let machines train machines. I’m excited to introduce DexMimicGen, a massive-scale synthetic data generator that enables a humanoid robot to learn complex skills from only a handful of human demonstrations. Yes, as few as 5!

DexMimicGen addresses the biggest pain point in robotics: where do we get data? Unlike with LLMs, where vast amounts of text are readily available, you cannot simply download motor control signals from the internet. So researchers teleoperate the robots to collect motion data via XR headsets. They have to repeat the same skill over and over and over again, because neural nets are data hungry. This is a very slow and uncomfortable process.

At NVIDIA, we believe the majority of high-quality tokens for robot foundation models will come from simulation.

What DexMimicGen does is to trade GPU compute time for human time. It takes one motion trajectory from a human and multiplies it into 1000s of new trajectories. A robot brain trained on this augmented dataset will generalize far better in the real world.

Think of DexMimicGen as a learning signal amplifier. It maps a small dataset to a large (de facto infinite) dataset, using physics simulation in the loop. In this way, we free humans from babysitting the bots all day.
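To make the "amplifier" idea concrete, here is a minimal Python sketch of the simulation-in-the-loop pattern described above. It is only an illustration: the helper names (segment_into_subtasks, perturb_scene, simulate_rollout) are hypothetical placeholders, not the actual DexMimicGen API.

def amplify_demos(human_demos, n_variants_per_demo=1000):
    # Hypothetical sketch: multiply a handful of teleoperated demos into
    # thousands of simulated trajectories, keeping only successful rollouts.
    synthetic = []
    for demo in human_demos:                          # e.g. as few as 5 human demos
        subtasks = segment_into_subtasks(demo)        # object-centric pieces of the demo (placeholder)
        for _ in range(n_variants_per_demo):
            scene = perturb_scene(demo.scene)         # randomize object poses / layout (placeholder)
            traj = simulate_rollout(subtasks, scene)  # re-target and execute in the physics sim (placeholder)
            if traj.success:                          # filter out failures, e.g. dropped objects
                synthetic.append(traj)
    return synthetic                                  # small human dataset in, large synthetic dataset out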

The future of robot data is generative.
The future of the entire robot learning pipeline will also be generative. 🧵
Here’s one example: imagine asking a human to repeat this task 1000s of times to gather enough data variations — they’d be bored out of their mind. Just ask a simulator to do the hard work!!

2/🧵
Real-world experiments on a humanoid robot at GEAR Lab, NVIDIA HQ.

3/🧵
Jul 30, 2024
Exciting updates on Project GR00T! We discover a systematic way to scale up robot data, tackling the biggest pain point in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let's break it down:

1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses human hand pose and retargets the motion to the robot hand, all in real time. From the human's point of view, they are immersed in another body like the Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen’s keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placement. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

3. Finally, we apply MimicGen, a technique to multiply the above data even more by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data, and filters out failed ones (e.g. those that drop the cup) to form a much larger dataset.

To sum up, given 1 human trajectory with Vision Pro
-> RoboCasa produces N (varying visuals)
-> MimicGen further augments to NxM (varying motions).

This is the way to trade compute for expensive human data by GPU-accelerated simulation. A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits.
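As a rough Python sketch of that 1 -> N -> NxM multiplication (the function names are illustrative placeholders, not the real Vision Pro, RoboCasa, or MimicGen APIs):

def groot_data_pipeline(n_demos=10, n_scenes=100, m_motions=50):
    # Hypothetical sketch of the three stages described above.
    demos = [teleop_with_vision_pro() for _ in range(n_demos)]       # 1. slow, expensive human teleop
    visual = [vary_scene_robocasa(d) for d in demos
              for _ in range(n_scenes)]                              # 2. N visual/layout variations per demo
    motions = [vary_motion_mimicgen(t) for t in visual
               for _ in range(m_motions)]                            # 3. M motion variations per trajectory
    return [t for t in motions if t.success]                         # keep successes: up to n_demos * N * M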

Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics! We are building tools to enable everyone in the ecosystem to scale up with us. Links in thread:
RoboCasa, our generative simulation framework, is fully open-source! Here you go: robocasa.ai

MimicGen, our generative action framework by @AjayMandlekar. The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands.

Feb 23, 2024
Career update: I am co-founding a new research group called "GEAR" at NVIDIA, with my long-time friend and collaborator Prof. @yukez. GEAR stands for Generalist Embodied Agent Research.

We believe in a future where every machine that moves will be autonomous, and robots and simulated agents will be as ubiquitous as iPhones. We are building the Foundation Agent — a generally capable AI that learns to act skillfully in many worlds, virtual and real.

2024 is the Year of Robotics, the Year of Gaming AI, and the Year of Simulation. We are setting out on a moon-landing mission, and getting there will spin off mountains of learnings and breakthroughs.

Join us on the journey: research.nvidia.com/labs/gear/
Here's a highlight thread on the exciting research that we spearheaded!

Eureka: GPT-4 writes reward functions to teach a 5-finger robot hand how to do pen spinning tricks better than I can. Trained with GPU-accelerated physics simulation at 1000x faster than real-time!
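For context, here is a rough sketch of how a Eureka-style search loop works, paraphrasing the idea above; llm, train_policy_in_sim, and summarize_training_stats are hypothetical placeholders rather than the released code.

def eureka_loop(task_description, llm, n_rounds=5, n_candidates=16):
    # Hypothetical sketch: the LLM writes candidate reward functions as code,
    # each is scored by training a policy in GPU-parallel simulation, and the
    # training statistics are fed back to the LLM for the next round.
    best_reward, best_score = None, float("-inf")
    feedback = ""
    for _ in range(n_rounds):
        candidates = [llm.write_reward_fn(task_description, feedback)
                      for _ in range(n_candidates)]
        results = [train_policy_in_sim(fn) for fn in candidates]   # massively parallel physics sim
        top = max(results, key=lambda r: r.task_score)
        if top.task_score > best_score:
            best_reward, best_score = top.reward_fn, top.task_score
        feedback = summarize_training_stats(top)                   # "reflection" signal for the next round
    return best_reward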

Voyager: the first LLM-powered agent that plays Minecraft proficiently. Voyager bootstraps its own capabilities as it explores the open-ended world continuously.

Jan 4, 2024
What did I tell you a few days ago? 2024 is the year of robotics. Mobile-ALOHA is open-source robot hardware that can do dexterous, bimanual tasks like cooking a meal (with human teleoperation). Very soon, hardware will no longer bottleneck us on the quest for human-level, generally capable robots. The brain will be.

This work was done by 3 researchers on an academic budget. What an incredible job! Stanford rocks! Congrats to @zipengfu @tonyzzhao @chelseabfinn

Academia is no longer the place for the biggest frontier LLMs, simply because of resource constraints. But robotics levels the playing field a bit between academia and industry, at least in the near term. More affordable hardware is the inevitable trend. Advice for aspiring PhD students: embrace robotics - less crowded, more impactful.

Website: mobile-aloha.github.io
Hardware assembly tutorial (oh yes we need more of these!): docs.google.com/document/d/1_3…
Codebase: github.com/MarkFzp/mobile…
Linking the great explanatory threads from the authors, @zipengfu 1/3

Dec 13, 2023
I confirmed with friends on the team that they did not speed up the video. Having such smooth motions in real time, especially in hand dexterity, will unlock LOTS of new capabilities down the road. Regardless of how well you train the model in the world of bits, slow and unreliable hardware will always be the fundamental bottleneck in the world of atoms.

The tactile sensing on fingers is the obvious right path forward. Now we can train truly multimodal robot transformers that take in text, video, audio, touch, proprioception (position, orientation, motion sensing) and some day, even smell and touch. The output is humanoid motor controls.

Can Optimus spin pens? Someone please try out our Eureka method and let me know? @Tesla_Optimus 👏
Btw, this is Eureka from my team at NVIDIA Research!

Typo in the original post: I meant "... and some day, even smell and taste". Can't wait for robot chefs in 3 yrs!
Nov 30, 2023
This is the coolest Diffusion work I've seen in a while! It generates Visual Anagrams, a type of optical illusion where an image looks like one thing, but changes appearance when transformed.

It works with any orthogonal transformation matrix, which luckily includes rotation, permutation (jigsaw puzzles), and color negation.

Intuitively, the method first inverts the noise estimates from multiple image transforms (with different text prompts), and then averages them. After taking a diffusion step with the averaged noise, the resulting image becomes an anagram that aligns with the texts in different views. It adds very little computation, using pre-trained Stable Diffusion.

Simple, elegant, and inexpensive technique for non-professionals to create some interesting art!
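A minimal sketch of that averaged denoising step, assuming a Stable-Diffusion-like noise predictor and invertible orthogonal view transforms; every name here is illustrative, not the authors' code.

import torch

def anagram_denoise_step(x_t, t, eps_model, prompts, views, scheduler):
    # One reverse-diffusion step for a visual anagram (sketch only).
    # Each view must be an orthogonal, invertible transform: rotation,
    # pixel permutation (jigsaw), or color negation.
    eps_estimates = []
    for prompt, view in zip(prompts, views):
        transformed = view.apply(x_t)                  # look at the noisy image in this view
        eps = eps_model(transformed, t, prompt)        # noise prediction for this view's prompt
        eps_estimates.append(view.invert(eps))         # map the estimate back to the base view
    eps_avg = torch.stack(eps_estimates).mean(dim=0)   # average the aligned noise estimates
    return scheduler.step(eps_avg, t, x_t)             # standard scheduler update with the averaged noise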

Paper: arxiv.org/abs/2311.17919
Website: dangeng.github.io/visual_anagram…
Code (it's open-source!): github.com/dangeng/visual…
Authors: Daniel Geng, Inbum Park, Andrew Owens.
More examples, jigsaw:
