Sign up and get 7 minutes ($0.13) of free credits.
(this is not sponsored)
Step 3: Settings
Checkpoint: dreamshaper_6
Sampling: DPM++ 2M Karras
Scroll down, upload your QR code to ControlNet 0, and enable it.
Preprocessor: select "inpaint_global_harmonious".
Model: select the model whose name ends in "brightness".
Decrease Control Weight to 0.35.
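If you'd rather script these settings than click through the UI, the AUTOMATIC1111 web UI (started with --api) accepts the same ControlNet options as JSON. Here's a minimal sketch of this unit's config, assuming the sd-webui-controlnet API field names and a placeholder model filename (yours may differ; qr_code_base64 is your base64-encoded QR image, loaded in the full example after Step 5):

```python
# ControlNet unit 0: the "brightness" pass over the QR code.
# Field names follow the sd-webui-controlnet API; older builds use
# "input_image", newer ones also accept "image".
controlnet_unit_0 = {
    "enabled": True,
    "input_image": qr_code_base64,           # your QR code, base64-encoded
    "module": "inpaint_global_harmonious",   # Preprocessor
    "model": "control_v1p_sd15_brightness",  # assumed filename; use yours ending in "brightness"
    "weight": 0.35,                          # Control Weight
}
```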
Step 4: Settings Continued
Switch to ControlNet 1, upload your QR code, and enable it.
Preprocessor: select "inpaint_global_harmonious".
Model: select the model whose name ends in "tile".
Set Control Weight to 0.65, Starting Control Step to 0.35, and Ending Control Step to 0.75.
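Unit 1 is the same sketch with the tile model swapped in; the Starting/Ending Control Step sliders map to guidance_start and guidance_end in the API (same assumptions as above):

```python
# ControlNet unit 1: the "tile" pass, active only from 35% to 75% of sampling.
controlnet_unit_1 = {
    "enabled": True,
    "input_image": qr_code_base64,
    "module": "inpaint_global_harmonious",  # Preprocessor
    "model": "control_v11f1e_sd15_tile",    # assumed filename; use yours ending in "tile"
    "weight": 0.65,                         # Control Weight
    "guidance_start": 0.35,                 # Starting Control Step
    "guidance_end": 0.75,                   # Ending Control Step
}
```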
Step 5: Enter your prompt
I used:
Prompt: masterpiece, best quality, mecha, no humans, black armor, blue eyes, science fiction, fire, laser cannon beam, war, conflict, destroyed city background
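Putting it all together, here's a hedged end-to-end sketch that sends the checkpoint, sampler, prompt, and both ControlNet units from the sketches above to a locally running web UI via the stock /sdapi/v1/txt2img endpoint; the URL, step count, and checkpoint name are assumptions to match to your setup:

```python
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"  # assumes a local AUTOMATIC1111 instance started with --api

# Load and base64-encode the QR code used by both ControlNet units.
with open("qr_code.png", "rb") as f:
    qr_code_base64 = base64.b64encode(f.read()).decode("utf-8")

# controlnet_unit_0 and controlnet_unit_1 are the dicts from the
# Step 3 and Step 4 sketches above.
payload = {
    "prompt": (
        "masterpiece, best quality, mecha, no humans, black armor, blue eyes, "
        "science fiction, fire, laser cannon beam, war, conflict, "
        "destroyed city background"
    ),
    "sampler_name": "DPM++ 2M Karras",  # newer builds may split this into sampler + scheduler
    "steps": 20,                        # assumed; the tutorial doesn't specify a step count
    "override_settings": {"sd_model_checkpoint": "dreamshaper_6"},  # match the name shown in your UI
    "alwayson_scripts": {
        "controlnet": {"args": [controlnet_unit_0, controlnet_unit_1]},
    },
}

resp = requests.post(f"{WEBUI_URL}/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded images; decode and save the first one.
with open("qr_art.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```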
TODAY'S AI NEWS: China's Alibaba just open-sourced a new suite of AI video models.
Plus, more news from Google, Anthropic, OpenAI, Microsoft, DeepSeek, and Convergence.
Here's everything you need to know:
Alibaba's Tongyi Lab dropped Wan2.1, a suite of advanced AI models for video generation
— Beats SOTA models and generates at 2.5x speed
— Excels at complex motion, real-world physics, and Chinese & English text rendering
— Includes editing tools, video-to-audio, and a 1.3B-parameter version
Google launched a free version of Gemini Code Assist for individual developers
Offers AI-powered coding help with a 128K-token context window and 180,000 monthly code completions, roughly 90 times more than GitHub Copilot's free tier
TODAY'S AI NEWS: Google debuted an AI co-scientist system—and it's already making drug discoveries.
Plus, more from Microsoft, Arc Institute, Clone Robotics, Apple, Convergence, and Sakana AI.
Here's what you need to know:
Google just launched an AI co-scientist to accelerate scientific discoveries!
— Gemini 2.0-based multi-agent system that generates and refines research hypotheses
— 80%+ accuracy on benchmarks, beating both other AI models and human experts
— Identified new drug applications and gene transfer mechanisms in days
Clone Robotics gave a glimpse of Protoclone, the world's first bipedal, musculoskeletal android
The faceless yet anatomically accurate android has over 200 degrees of freedom, 1,000+ artificial myofibers, and 500 sensors
Microsoft just unveiled Muse, an AI model that can generate minutes of cohesive gameplay from a single second of frames and controller actions.
The implications for gaming are absolutely massive:
Muse is the first World and Human Action Model (WHAM), and the scale of training is mind-blowing:
— Trained on 1B+ gameplay images
— Used 7+ YEARS of continuous gameplay data
— Learned from real Xbox multiplayer matches
From a single second of gameplay + controller inputs, Muse can create multiple unique, playable sequences that follow actual game physics, mechanics, and rules.
The version shown in the research was trained on just a single game (Bleeding Edge).
For those wondering, my quick take on what's happening right now with DeepSeek's R1 and Janus:
1. GPU demand will not go down.
2. OpenAI is not done for, but open source and China are showing they're far closer than anticipated.
3. There's way too much misinformation being spread by mainstream media right now (almost seems deliberate?).
4. DeepSeek open-sourcing R1 is still a huge gift to developers and to overall AI progress.
I haven't seen this much confusion and uncertainty on my timeline for ages...