We’re announcing a major advance in the study of fluid dynamics with AI 💧 in a joint paper with researchers from @BrownUniversity, @nyuniversity and @Stanford.
Equations to describe fluid motion - like airflow lifting an airplane wing or the swirling vortex of a hurricane - can sometimes "break," predicting impossible, infinite values.
These "singularities" are a huge mystery in mathematical physics.
We used a new AI-powered method to discover new families of unstable “singularities” across three different fluid equations.
A clear and unexpected pattern emerged: as the solutions become more unstable, one of the key properties falls very close to a straight line.
This suggests a new, underlying structure to these equations that was previously invisible.
This breakthrough represents a new way of doing mathematical research - combining deep insights with cutting-edge AI.
We’re excited for this work to help usher in a new era where long-standing challenges are tackled with computer-assisted proofs. → goo.gle/46loOuZ
We’re helping to unlock the mysteries of the universe with AI. 🌌
Our novel Deep Loop Shaping method, published in @ScienceMagazine, could help astronomers observe more events like collisions and mergers of black holes in greater detail, and gather more data about rare space phenomena. 🧵
Astronomers already know a lot about the smallest and largest black holes. ⚫
But we have limited data on intermediate-mass black holes, and the observatories we use to measure their gravitational waves need improved control and expanded reach. ↓ goo.gle/47oalza
⚡This is where Deep Loop Shaping comes in.
Developed in collaboration with @LIGO (the Laser Interferometer Gravitational-Wave Observatory), @CalTech and the Gran Sasso Science Institute, it reduces noise and improves control in an observatory’s feedback system - helping stabilize the components used to measure gravitational waves.
Image generation with Gemini just got a bananas upgrade: Gemini 2.5 Flash is now the state-of-the-art image generation and editing model. 🤯
From photorealistic masterpieces to mind-bending fantasy worlds, you can now natively produce, edit and refine visuals with new levels of reasoning, control and creativity.
A quick dive into Gemini 2.5 Flash’s capabilities 🧵
🎯 Character consistency
Give the model reference images and it can produce new visuals that maintain a character, subject or object’s likeness across different poses, lighting, environments or styles - helping you create more compelling, narrative-driven work.
🔄 Design application
Looking to apply a specific artistic style, design, or texture? 2.5 Flash can now easily transfer this from one image to another while preserving the original subject's form and details.
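For developers, reference-based editing like this is exposed through the Gemini API. Below is a minimal sketch using the google-genai Python SDK; the model id, file names and prompt are placeholders for illustration rather than a definitive recipe.

```python
# Minimal sketch of reference-based image editing with the google-genai SDK.
# The model id and file paths are assumptions for illustration.
from google import genai
from PIL import Image

client = genai.Client()  # API key taken from the environment

reference = Image.open("character_reference.png")  # your reference image
prompt = (
    "Keep this character's likeness, but place them in a rainy, neon-lit "
    "street at night, viewed from a low angle."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed image-capable model id
    contents=[prompt, reference],
)

# Save any image parts returned alongside text.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited_character.png", "wb") as f:
            f.write(part.inline_data.data)
```

Passing the reference image alongside the prompt is what lets the model keep the subject's likeness while changing pose, lighting or style.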
Our new state-of-the-art AI model Aeneas transforms how historians connect the past. 📜
Ancient inscriptions often lack context – it's like solving a puzzle with 90% of the pieces lost to time. Aeneas helps researchers interpret and situate these inscriptions in their historical context. 🧵
By transforming each ancient text into a unique historical fingerprint, Aeneas can identify similarities across 176,000 Latin inscriptions.
In our study, historians found these ‘parallels’ to be helpful research starting points 9 out of 10 times - improving their confidence by 44%.
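Under the hood, this kind of "parallels" search is a nearest-neighbour lookup over embeddings. The sketch below illustrates the retrieval idea generically with cosine similarity over precomputed vectors; it is not the Aeneas model itself, and the random vectors simply stand in for real inscription fingerprints.

```python
# Generic nearest-neighbour retrieval over embedding "fingerprints".
# Illustrates the retrieval idea only; this is not the Aeneas model.
import numpy as np

def top_parallels(query_vec: np.ndarray, corpus_vecs: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k corpus entries most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:k]

# Toy usage: random vectors stand in for real inscription embeddings.
# (The real corpus covers roughly 176,000 Latin inscriptions.)
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1_000, 64))
query = rng.normal(size=64)
print(top_parallels(query, corpus, k=5))
```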
We tested Aeneas on the Res Gestae Divi Augusti – one of the most debated inscriptions.
Without prior knowledge, it successfully mapped out the leading scholarly theories on its dating, showing how AI can help model history in a quantitative way. 📊
We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖
It’s our first vision-language-action model designed to make robots faster, more efficient, and adaptable to new tasks and environments - without needing a constant internet connection. 🧵
What makes this new model unique?
🔵 It has the generality and dexterity of Gemini Robotics - but it can run locally on the device
🔵 It can handle a wide variety of complex, two-handed tasks out of the box
🔵 It can learn new skills with as few as 50-100 demonstrations
From humanoids to industrial bi-arm robots, the model supports multiple embodiments while following instructions from humans, even though it was pre-trained only on ALOHA. 💬
These tasks may seem easy for us but require fine motor skills, precise manipulation and more. ↓
Anyone can now use 2.5 Flash and Pro to build and scale production-ready AI applications. 🙌
We’re also launching 2.5 Flash-Lite in preview: the fastest model in the 2.5 family to respond to requests, with the lowest cost too. 🧵
2.5 Flash-Lite now supports:
🔹Thinking: improving performance and transparency through step-by-step reasoning
🔹Tool use: including Search and code execution - plus a 1 million token context window, like 2.5 Flash and Pro (a minimal API sketch follows below)
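Here is a minimal sketch of calling 2.5 Flash-Lite with thinking enabled and the Search tool attached, using the google-genai Python SDK. The model id and config field names follow the public SDK as I understand it; treat them as assumptions to verify against current documentation.

```python
# Minimal sketch: 2.5 Flash-Lite with a thinking budget and Search grounding,
# via the google-genai Python SDK. Field names are assumptions to verify.
from google import genai
from google.genai import types

client = genai.Client()  # API key taken from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",
    contents="Summarize this week's gravitational-wave news in three bullets.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024),  # step-by-step reasoning budget
        tools=[types.Tool(google_search=types.GoogleSearch())],      # grounding with Search
    ),
)

print(response.text)
```

Swapping the model id is all it takes to compare latency, cost and quality against 2.5 Flash or Pro on the same prompt.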
⚡ 2.5 Flash-Lite is our most cost-efficient model yet - with lower latency than 2.0 Flash-Lite and 2.0 Flash on a broad sample of prompts.
It also has all-around higher quality than 2.0 Flash-Lite on coding, math, science, reasoning and multimodal benchmarks.
Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery.
It’s able to:
🔘 Design faster matrix multiplication algorithms
🔘 Find new solutions to open math problems
🔘 Make data centers, chip design and AI training more efficient across @Google. 🧵
Our system uses:
🔵 LLMs: To synthesize information about problems as well as previous attempts to solve them - and to propose new versions of algorithms
🔵 Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
🔵 Evolution: Iteratively improving the best algorithms found, and re-combining ideas from different solutions to find even better ones.
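Putting those three pieces together, the overall loop can be pictured as below. This is a schematic sketch only, not the published system: propose_variant() stands in for a Gemini call that edits candidate programs, and evaluate() for an automated, problem-specific scoring harness; both names are hypothetical.

```python
# Schematic sketch of an LLM-driven evolutionary loop in the spirit of the
# components listed above. propose_variant() and evaluate() are placeholders.
import random

def evolve(seed_program: str,
           propose_variant,   # fn(parent_programs: list[str]) -> str
           evaluate,          # fn(program: str) -> float (higher is better)
           generations: int = 100,
           population_size: int = 20) -> str:
    population = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        # Recombine ideas: sample a few strong candidates to inspire the next variant.
        parents = [p for _, p in random.sample(population, k=min(3, len(population)))]
        child = propose_variant(parents)
        population.append((evaluate(child), child))
        # Keep only the best candidates found so far.
        population = sorted(population, key=lambda x: x[0], reverse=True)[:population_size]
    return population[0][1]  # best program discovered
```

The key ingredient is that evaluate() is fully automatic, which is why the approach targets problems where progress can be clearly and systematically measured.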
Over the past year, we’ve deployed algorithms discovered by AlphaEvolve across @Google’s computing ecosystem, including data centers, software and hardware.
It’s been able to:
🔧 Optimize data center scheduling
🔧 Assist in hardware design
🔧 Enhance AI training and inference