Angry Tom
Consultant & AI educator | On a mission to build an empire with artificial intelligence
Sep 30 13 tweets 4 min read
This is crazy...

OpenAI just dropped Sora 2, the best AI video model in the world.

It handles physics and motion like no other model while also enabling realistic, controllable video with sound.

Here are 10 insane examples:

Sora 2 can do things that are exceptionally difficult, and in some instances outright impossible, for prior video generation models...
Sep 18 7 tweets 2 min read
AI is getting crazier...

Higgsfield just dropped Lipsync Studio, an all-in-one space for creating the most realistic lip-sync videos yet.

Unlimited Kling Speak in 720p, Kling Lipsync and InfiniteTalk until September 22.

6 insane examples:
Sep 9 7 tweets 2 min read
There is no way to tell anymore.

Nano Banana is NOW LIVE alongside FLUX Premium in LTX Studio.

Two of the best image models on the market right now.

Here's what's possible now: (with examples)

1. What’s the difference?

- FLUX Premium is optimized for high-fidelity image generation.
- Nano Banana is a state-of-the-art image editing model.
Aug 30 6 tweets 2 min read
omg... this is crazy!

Higgsfield just unlocked Start & End Frame in 1080p powered by MiniMax.

Set the first shot, set the last, and it builds everything in between.

Cinematic camera moves and VFX, stacked with presets.

5 examples:
Aug 29 10 tweets 4 min read
Vibe coding is cool until you need authentication, payments, or AI features.

Then it's a nightmare.

This new app changes it all - from idea to fully functional app in minutes, with everything built in.

Here's everything you need to know:

Meet @get_mocha - the world's first truly all-in-one AI app builder designed for anyone. No coding required, no technical background needed.

What used to take weeks and thousands of dollars now happens in minutes.

Let's dive in:
Aug 22 7 tweets 3 min read
This is crazy...

LTX just launched multi-reference in LTX Studio.

Blend objects, scenes, and characters from different images into a single frame with more control and precision.

Here's how it works: (plus examples)

Step 1

- Open LTX.studio
- Select "Generate images" from the menu, add your prompt and hit "Generate"
Aug 22 7 tweets 2 min read
This is wild...

The fastest, most advanced open-source model, WAN 2.2, is now available on Higgsfield.

30+ viral presets to go with it, plus total control.

6 examples:
Aug 21 7 tweets 2 min read
9 months ago, building an app required...

- 6 months of development
- Team of 5 devs
- $400K budget and luck

Now AI lets you create fully functional mobile apps in just minutes with a single prompt.

6 examples: (plus link to try)

2/ Budget tracker
Aug 18 12 tweets 3 min read
This is Yan, China's answer to Google Genie 3.

Given a text prompt or image, it can generate dynamic worlds that you can navigate in real time.

10 wild examples:

1. Image to Interactive Video

2. Text to Interactive Video
Aug 11 10 tweets 3 min read
AI is getting crazier.

Pika’s new AI model creates some of the most realistic lip sync videos yet.

Perfect lip-sync. Full-body motion. Any length you want.

Ready in 6 seconds or less, in HD...

6 insane examples:
Aug 10 11 tweets 3 min read
AI videos are taking over the internet.

One clip hit 203M views in just days, and it’s nothing like you’d expect.

From hidden cities to monsters caught on tape, TikTok’s never seen anything like this.

Here are 10 viral videos that will leave you speechless: 👇

1. Amazon - (203M views)

2. Meet Titan, the world's largest dog - (60M views)
Aug 6 13 tweets 5 min read
This is crazy...

Alibaba just dropped Wan 2.2, the world's first open-source MoE-architecture video model with cinematic control!

A major upgrade in cinematic quality, smoother movements, and prompt following.

10 mind-blowing examples:

1. Cinematic Vision Control

Achieve professional cinematic narratives through a deep command of shot language, offering fine-grained control over lighting, color, and composition for versatile styles with delicate detail.
Aug 5 12 tweets 5 min read
We're so cooked.

This is Genie 3, the most advanced world simulator ever created.

Given a text prompt, it can generate dynamic worlds that you can navigate in real time.

10 wild examples:

1. Modelling physical properties of the world

Experience natural phenomena like water and lighting, and complex environmental interactions.
Jul 16 10 tweets 3 min read
UGC creators, you're in big trouble!

Higgsfield just dropped UGC Builder. It's a powerful new tool that gives you complete authorship over motion, emotion, voice, and style.

There is no way to tell anymore...

8 mind-blowing examples: (please unmute)

2. Selfie
Jul 16 9 tweets 3 min read
This is wild...

The first-ever text-to-film AI agent is here.

It can automatically generate an entire film, from script and storyboard to consistent characters, video, voice, lip-sync, LoRA, and music.

Here's how it works: (step-by-step tutorial)

1. Novel - Getting started

Start from your own script or novel, or let SkyReels generate one for you. Out of ideas? Click to explore more templates.
Jul 11 7 tweets 2 min read
Higgsfield just integrated Google Veo 3.

One subscription for Higgsfield Products + Google Veo 3

Too good to be true… until now.

5 crazy examples:
Jul 10 14 tweets 4 min read
RIP Hollywood!

Moonvalley just dropped Marey, the world’s first AI video model built for professional production.

Step into the director’s chair with precision control, clean data, and cinematic output.

10 examples below + link to try: 👇

1. Prompt adherence
Jul 7 13 tweets 4 min read
Robots are taking over human jobs faster than you think.

Here are 12 real-life examples:

1. A traffic security robot that immediately moves illegally parked cars with ease.

2. A robot barber giving a perfect haircut.
Jul 1 9 tweets 4 min read
this is crazy...

Alibaba just announced OmniAvatar, a new audio-driven model that takes full-body, expressive human animation to a whole new level.

natural movement, controllable emotions, and ultra-accurate lip-sync.

10 examples:

OmniAvatar can generate lifelike speaking-avatar videos in which the characters' actions and expressions are natural and rich, with audio perfectly synchronized to their lip movements.
Jun 24 8 tweets 2 min read
Photoshop is dead.

Higgsfield just dropped Canvas, a state-of-the-art image editing model.

With just two clicks, you can swap any object in an image.

Logos, text, texture and scale stay exactly as they are...

6 examples:
Jun 22 12 tweets 4 min read
AI is getting crazier.

MeiGen’s new AI model, MultiTalk, creates some of the most realistic lip sync videos yet.

Nothing is real anymore...

10 insane examples:

2. Given an audio input, a reference image, and a prompt...