Jim Fan
Apr 3, 2023
Just got access to Adobe Firefly! How does the world's leading creative tool maker fare against MidJourney, a self-funded 11-person team?

Let's check it out. Left is Firefly, right is MidJourney V5. Prompts are in the "ALT" button in the lower-left corner.

Deadpool posing on a car. 1/🧵

Prompt: "Deadpool wide angle pose on top of a car outside an apartment…" MidJourney V5 image credited to @LinusEkenstam.
Super Mario in a dimly lit street with a big reflection in a puddle. Firefly's interpretation of "Super Mario" is ... exotic (?) 😅

Prompt and image credits to @LinusEkenstam @vitomotiv.

2/ A photograph capturing Super Mario in a pose in a dim lit st…
Same prompt as above but for Pikachu. Again, somehow Firefly does not fully get these famous characters. Maybe a training data copyright issue?

Prompt and MJ image credits to @LinusEkenstam @vitomotiv.

3/ A photograph capturing Pikachu in a dim lit street and a big…
Next, who is the better portrait photographer?

Photo of a large crowd of commuters in Tokyo, sharply focused faces, but it's the woman in red that commands your attention. Warm glow, elegance.

Prompt & MJ image credit: @nickfloats

4/ Modern street style photo from above shot on Fujifilm captur…
How about some sci-fi?

Abstract fractal circular mosaic city architecture.

Prompt & MJ image credit: @chetbff @BambuuArt

5/ Abstract Fractal circular mosaic city architecture made of m…
Now let's do some mobile app icon design. Does Firefly even know what an app icon is?

iOS app icon, Sci-fi planet landscape with skeuomorphic style.

Prompt & MJ image credit: @followmarcos

6/ App Icon Design: iOS, Sci-fi planet landscape with skeuomorp…
The "human finger" test is becoming the new visual Turing Test. It's the final moat that Diffusion needs to conquer to become truly sentient 🤣.

A stunning young Jamaican woman wearing white retrofuturistic sequin Gucci gown, standing in the desert.

Credit: @nickfloats

7/ editorial style photo, medium-full shot, afga vista film sti…
Finally, a landscape photo. It turns out to be an easy task at which both Firefly and MJ excel.

Red Ferrari F40 in Dandelions at the Lake Seealpsee.

Prompt & MJ image credit: @heyBarsee

8/ Red Ferrari F40 in Dandelions at the Lake Seealpsee, shot wi…
Note: these prompts are heavily optimized for MidJourney, so that may give it an unfair advantage. However, I did try a few variations but still couldn't get better results. I'm not a prompt ninja, so your mileage may vary.

Still, I'm grateful for Adobe's early beta access! /🧵
Note 2: Firefly is only trained on Adobe Stock and fully licensed images. The data curation is very conservative, which may cripple its performance.

I also included examples without copyrighted characters in the thread.
Note 3: Adobe research scientist @vdeschaintre has a good point: the licensed-only training may be a significant plus for companies that must ensure the IP copyright of the output images. They may be more than willing to sacrifice quality for legality, which makes MJ a less appealing option.
Thanks for all your feedback. I wrote a summary note to give Firefly's approach fair and proper credit:

More from @DrJimFan

Feb 23
Career update: I am co-founding a new research group called "GEAR" at NVIDIA, with my long-time friend and collaborator Prof. @yukez. GEAR stands for Generalist Embodied Agent Research.

We believe in a future where every machine that moves will be autonomous, and robots and simulated agents will be as ubiquitous as iPhones. We are building the Foundation Agent — a generally capable AI that learns to act skillfully in many worlds, virtual and real.

2024 is the Year of Robotics, the Year of Gaming AI, and the Year of Simulation. We are setting out on a moon-landing mission, and getting there will spin off mountains of learnings and breakthroughs.

Join us on the journey: research.nvidia.com/labs/gear/
Here's a highlight thread on the exciting research that we spearheaded!

Eureka: GPT-4 writes reward functions to teach a 5-finger robot hand how to do pen spinning tricks better than I can. Trained with GPU-accelerated physics simulation, 1000x faster than real time!

Voyager: the first LLM-powered agent that plays Minecraft proficiently. Voyager bootstraps its own capabilities as it explores the open-ended world continuously.

Jan 4
What did I tell you a few days ago? 2024 is the year of robotics. Mobile-ALOHA is open-source robot hardware that can do dexterous, bimanual tasks like cooking a meal (with human teleoperation). Very soon, hardware will no longer be the bottleneck on the quest for human-level, generally capable robots. The brain will be.

This work was done by 3 researchers on an academic budget. What an incredible job! Stanford rocks! Congrats to @zipengfu @tonyzzhao @chelseabfinn

Academia is no longer the place for the biggest frontier LLMs, simply because of resource constraints. But robotics levels the playing field a bit between academia and industry, at least in the near term. More affordable hardware is the inevitable trend. Advice for aspiring PhD students: embrace robotics - less crowded, more impactful.

Website: mobile-aloha.github.io
Hardware assembly tutorial (oh yes we need more of these!): docs.google.com/document/d/1_3…
Codebase: github.com/MarkFzp/mobile…
Linking the great explanatory threads from the authors, @zipengfu 1/3

Dec 13, 2023
I confirmed with friends on the team that they did not speed up the video. Having such smooth motions in real time, especially in hand dexterity, will unlock LOTS of new capabilities down the road. Regardless of how well you train the model in the world of bits, slow and unreliable hardware will always be the fundamental bottleneck in the world of atoms.

Tactile sensing on the fingers is the obvious right path forward. Now we can train truly multimodal robot transformers that take in text, video, audio, touch, proprioception (position, orientation, motion sensing), and some day even smell and touch. The output is humanoid motor controls.

Can Optimus spin pens? Someone please try out our Eureka method and let me know! @Tesla_Optimus 👏
Btw, this is Eureka from my team at NVIDIA Research!

Typo in the original post: I meant "... and some day, even smell and taste". Can't wait for robot chefs in 3 yrs!
Nov 30, 2023
This is the coolest Diffusion work I've seen in a while! It generates Visual Anagrams, a type of optical illusion where an image looks like one thing, but changes appearance when transformed.

It works with any orthogonal transformation matrices, which luckily include rotation, permutation (jigsaw puzzles), and color negation.

Intuitively, the method first inverts the noise from multiple image transforms (with different text prompts), and then averages them. After taking a diffusion step with the averaged noise, the resulting image becomes an anagram that aligns with the texts in different views. It requires very little computation, using pre-trained Stable Diffusion.

Simple, elegant, and inexpensive technique for non-professionals to create some interesting art!
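Here's a rough Python sketch of how I read the averaging trick — my paraphrase, not the authors' released code. `predict_noise`, `ddim_step`, and the view functions are placeholders for a pretrained pixel-space diffusion model, a standard sampler update, and the orthogonal image transforms:

```python
import torch

def anagram_denoise_step(x_t, t, views, inverse_views, prompts, predict_noise, ddim_step):
    """One denoising step of the Visual Anagrams idea (a sketch, not the official code).
    `predict_noise(x, t, prompt)` stands in for any pretrained pixel-space diffusion
    model; `views` / `inverse_views` are orthogonal image transforms such as identity,
    180-degree rotation, or a fixed pixel permutation (jigsaw)."""
    eps_estimates = []
    for view, inv_view, prompt in zip(views, inverse_views, prompts):
        # Look at the same noisy image under a different view + prompt...
        eps = predict_noise(view(x_t), t, prompt)
        # ...then map the noise estimate back to the canonical orientation.
        eps_estimates.append(inv_view(eps))
    # Averaging is sensible because orthogonal views preserve the Gaussian noise statistics.
    eps_avg = torch.stack(eps_estimates).mean(dim=0)
    # Take an ordinary (e.g. DDIM) update with the combined estimate.
    return ddim_step(x_t, eps_avg, t)

# Example wiring for a 180-degree rotation anagram (hypothetical prompts):
# views = [lambda x: x, lambda x: torch.rot90(x, 2, dims=(-2, -1))]
# inverse_views = views  # a 180-degree rotation is its own inverse
# prompts = ["an oil painting of a rabbit", "an oil painting of a duck"]
```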

Paper: arxiv.org/abs/2311.17919
Website: dangeng.github.io/visual_anagram…
Code (it's open-source!): github.com/dangeng/visual…
Authors: Daniel Geng, Inbum Park, Andrew Owens.
More examples, jigsaw:
Nov 20, 2023
My team at NVIDIA is hiring. We 🩷 you all from OpenAI. Engineers, researchers, product team alike. Email me at linxif@nvidia.com. DM is open too. NVIDIA has warm GPUs for you on a cold winter night like this, fresh out of the oven. 🩷

I do research on AI agents. Gaming+AI, robotics, multimodal LLMs, open-ended simulations, etc. If you want an excuse to play games like Minecraft at work - I'm your guy.

I'm shocked by the ongoing development. I can only begin to grasp the depth of what you must be going through. Please, don't hesitate to ping me if there's anything I can do to help, or just say hi and share anything you'd like to talk about. I'm a good listener.
Sharing appetizers with my distinguished guests: here are my team's research highlights!

Voyager: the first LLM-powered agent that plays Minecraft proficiently. Voyager bootstraps its own capabilities as it explores the open-ended world continuously.

Eureka: GPT-4 writes reward functions to teach a 5-finger robot hand how to do pen spinning tricks better than I can. The robots are trained with GPU-accelerated physics simulation, 1000x faster than real time!

Oct 20, 2023
Can GPT-4 teach a robot hand to do pen spinning tricks better than you do?

I'm excited to announce Eureka, an open-ended agent that designs reward functions for robot dexterity at super-human level. It’s like Voyager in the space of a physics simulator API!

Eureka bridges the gap between high-level reasoning (coding) and low-level motor control. It is a "hybrid-gradient architecture": a black-box, inference-only LLM instructs a white-box, learnable neural network. The outer loop runs GPT-4 to refine the reward function (gradient-free), while the inner loop runs reinforcement learning to train a robot controller (gradient-based).
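A minimal sketch of that outer/inner loop, assuming hypothetical placeholders (`llm_write_reward` for GPT-4 prompting, `train_policy_with_rl` for GPU-parallel RL, `evaluate` and `summarize_training_stats` for task-success measurement and feedback) — not the actual Eureka codebase:

```python
def eureka_loop(env_source_code, task_description, iterations=5, samples_per_iter=16):
    best_reward_fn, best_score, feedback = None, float("-inf"), ""
    for _ in range(iterations):
        # Outer loop (gradient-free): the LLM proposes candidate reward functions as
        # executable code, conditioned on the environment source and past feedback.
        candidates = [
            llm_write_reward(env_source_code, task_description, feedback)
            for _ in range(samples_per_iter)
        ]
        # Inner loop (gradient-based): train a policy against each candidate reward.
        results = [(fn, train_policy_with_rl(fn)) for fn in candidates]
        scored = [(fn, policy, evaluate(policy)) for fn, policy in results]
        fn, policy, score = max(scored, key=lambda r: r[2])
        if score > best_score:
            best_reward_fn, best_score = fn, score
        # Reward reflection: summarize training statistics so the next LLM call
        # can make targeted edits rather than blind guesses.
        feedback = summarize_training_stats(policy, fn)
    return best_reward_fn
```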

We are able to scale up Eureka thanks to IsaacGym, a GPU-accelerated physics simulator that speeds up reality by 1000x. On a benchmark suite of 29 tasks across 10 robots, Eureka rewards outperform expert human-written ones on 83% of the tasks, with a 52% improvement margin on average. We were surprised that Eureka is able to learn pen spinning tricks, which are very difficult even for CGI artists to animate frame by frame!

Eureka also enables a new form of in-context RLHF, which is able to incorporate a human operator’s feedback in natural language to steer and align the reward functions. It can serve as a powerful co-pilot for robot engineers to design sophisticated motor behaviors.

As usual, we open-source everything! We welcome you all to check out our video gallery and try the codebase today:
Paper:
Code:

Deep dive with me: 🧵
In robot learning, LLMs are good at generating high-level plans and mid-level actions like pick and place (VIMA, RT-1, etc.), but fall short of complex, high-frequency motor controls.

The Eureka! moment for us (pun intended) is that reward-function coding is the key portal through which LLMs can venture into dexterous skills.

2/
Eureka achieves human-level reward design by evolving reward functions in-context. There are 3 key components:

1. Simulator environment code as context jumpstarts the initial "seed" reward function.
2. Massively parallel RL on GPUs enables rapid evaluation of lots of reward candidates.
3. Reward reflection produces targeted reward mutations in-context.

3/
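To make component 3 concrete, here is a hypothetical illustration of what a reward-reflection message could look like — not the released Eureka prompt. The `stats` would come from the massively parallel RL runs in component 2, and the optional `human_note` is the in-context RLHF mentioned above:

```python
def build_reflection_prompt(reward_code: str, stats: dict, success_rate: float,
                            human_note: str = "") -> str:
    """Assemble scalar training statistics for each reward term so the LLM can
    mutate the reward code in a targeted way (illustrative sketch only)."""
    lines = [
        "Here is the reward function you wrote:",
        reward_code,
        f"After RL training, the task success rate was {success_rate:.2f}.",
        "Per-component reward values over training (min / mean / max):",
    ]
    for name, (lo, mean, hi) in stats.items():
        lines.append(f"  {name}: {lo:.3f} / {mean:.3f} / {hi:.3f}")
    if human_note:
        lines.append(f"Human feedback: {human_note}")
    lines.append("Rewrite the reward function to improve the success rate.")
    return "\n".join(lines)

# Example (made-up numbers):
# prompt = build_reflection_prompt(code, {"fingertip_dist": (0.0, 0.4, 1.2)}, 0.31,
#                                  human_note="the pen should spin faster")
```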