Congratulations to @demishassabis, John Jumper, and David Baker who will be awarded the Wiley Foundation 20th annual Wiley Prize in Biomedical Sciences on April 1: dpmd.ai/Wiley-Prize 🧵 1/7
Demis and John accept the award on behalf of the @DeepMind team who worked on #AlphaFold, which was recognised as a solution to the “protein folding problem” at CASP14 in Nov 2020: dpmd.ai/casp14_blog 2/7
From the start, we committed to giving broad access to our work and, in July 2021, we published our methods in @Nature along with the open-source code. 3/7
A week later, we launched the AlphaFold Protein Structure Database, in partnership with @emblebi - more than doubling the number of high-accuracy human protein structures available. Over 400,000 researchers have already used it: dpmd.ai/alphafolddb 4/7
In October 2021 we launched AlphaFold-Multimer, which properly accounts for multi-chain proteins (complexes): dpmd.ai/alphafold-mult… 5/7
And in January 2022 we added 27 new proteomes (190k+ proteins) to the database, 17 of which represent Neglected Tropical Diseases that continue to devastate the lives of more than 1 billion people globally: dpmd.ai/NTD 6/7
A huge congratulations to the whole team who made this breakthrough happen! Check out our AlphaFold timeline for further info: dpmd.ai/AFtimeline 7/7
We’re rolling out Veo 3.1, our updated video generation model, alongside improved creative controls - many of them now with audio - for filmmakers, storytellers, and developers. 🧵
🎥 Introducing Veo 3.1
It brings a deeper understanding of the narrative you want to tell, captures textures that look and feel even more real, and improves image-to-video capabilities.
🖼️ Ingredients to video
Give Veo multiple reference images featuring different people and objects, and watch how it integrates them into a fully-formed scene - complete with sound.
We’re announcing a major advance in the study of fluid dynamics with AI 💧 in a joint paper with researchers from @BrownUniversity, @nyuniversity and @Stanford.
Equations to describe fluid motion - like airflow lifting an airplane wing or the swirling vortex of a hurricane - can sometimes "break," predicting impossible, infinite values.
These "singularities" are a huge mystery in mathematical physics.
We used an AI-powered method to discover new families of unstable “singularities” across three different fluid equations.
A clear and unexpected pattern emerged: as the solutions become more unstable, one of the key properties falls very close to a straight line.
This suggests a new, underlying structure to these equations that was previously invisible.
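As an illustration of what “falls very close to a straight line” means in practice, here is a minimal sketch (ours, not the paper’s code) that fits a line to a key property as a function of each solution’s instability order; the variable names are hypothetical:

```python
# Illustrative sketch only: check how closely a measured property follows a
# straight line as a function of instability order across discovered solutions.
import numpy as np

def linear_trend(instability_order, key_property):
    """Both arguments are 1-D arrays with one entry per discovered solution."""
    order = np.asarray(instability_order, dtype=float)
    prop = np.asarray(key_property, dtype=float)
    slope, intercept = np.polyfit(order, prop, deg=1)   # least-squares line
    residual = np.max(np.abs(prop - (slope * order + intercept)))
    return slope, intercept, residual   # small residual => points hug the line
```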
We’re helping to unlock the mysteries of the universe with AI. 🌌
Our novel Deep Loop Shaping method, published in @ScienceMagazine, could help astronomers observe more events like collisions and mergers of black holes in greater detail, and gather more data about rare space phenomena. 🧵
Astronomers already know a lot about the smallest and largest black holes. ⚫
But we have limited data on intermediate-mass black holes, and the observatories we use to measure their gravitational waves need improved control and expanded reach. ↓ goo.gle/47oalza
⚡This is where Deep Loop Shaping comes in.
Developed in collaboration with @LIGO (the Laser Interferometer Gravitational-Wave Observatory), @CalTech and the Gran Sasso Science Institute, it reduces noise and improves control in an observatory’s feedback system - helping stabilize components used for measuring gravitational waves.
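For intuition about what shaping a feedback loop means, here is a generic control-theory illustration - not the Deep Loop Shaping method itself, which trains the controller with machine learning. The sensitivity function below shows how much disturbance survives at each frequency for a toy plant and controller (both hypothetical):

```python
# Toy illustration of loop shaping: for a plant G and controller C, the
# sensitivity |1 / (1 + G*C)| tells how much sensor disturbance survives
# at each frequency once the feedback loop is closed.
import numpy as np

freqs = np.logspace(0, 3, 200)          # 1 Hz to 1 kHz
s = 2j * np.pi * freqs                  # Laplace variable on the jw axis

G = 1.0 / (s**2 + 2.0 * s + 100.0)      # toy plant: a lightly damped mechanical mode
C = 5000.0 * (1.0 + 0.05 * s)           # toy proportional-derivative controller

sensitivity = np.abs(1.0 / (1.0 + G * C))   # < 1 where the loop suppresses disturbance
idx = np.argmin(np.abs(freqs - 10.0))
print(f"disturbance surviving at 10 Hz: {sensitivity[idx]:.3f}")
```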
Image generation with Gemini just got a bananas upgrade: Gemini 2.5 Flash is now the state-of-the-art image generation and editing model. 🤯
From photorealistic masterpieces to mind-bending fantasy worlds, you can now natively produce, edit and refine visuals with new levels of reasoning, control and creativity.
A quick dive into Gemini 2.5 Flash’s capabilities 🧵
🎯 Character consistency
Give the model reference images and it can produce new visuals that maintain a character, subject or object’s likeness across different poses, lighting, environments or styles - helping you create more compelling, narrative-driven work.
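A rough sketch of driving this through the google-genai Python SDK; the model identifier, file names and prompt below are assumptions for illustration, not details from the thread:

```python
# Rough sketch: generate a new image of the same character from a reference
# image via the google-genai SDK. The model name below is an assumption.
from io import BytesIO
from PIL import Image
from google import genai

client = genai.Client()                              # picks up the API key from the environment
reference = Image.open("character_reference.png")    # hypothetical reference image

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",          # assumed model identifier
    contents=[reference, "The same character, now walking through heavy rain at night"],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:                 # generated image bytes come back inline
        Image.open(BytesIO(part.inline_data.data)).save("character_rain.png")
```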
🔄 Design application
Looking to apply a specific artistic style, design, or texture? 2.5 Flash can now easily transfer this from one image to another while preserving the original subject's form and details.
Our new state-of-the-art AI model Aeneas transforms how historians connect the past. 📜
Ancient inscriptions often lack context – it's like solving a puzzle with 90% of the pieces lost to time. Aeneas helps researchers interpret and situate inscriptions in their historical context. 🧵
By transforming each ancient text into a unique historical fingerprint, Aeneas can identify similarities across 176,000 Latin inscriptions.
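A minimal sketch of the kind of nearest-neighbour lookup this implies - not the actual Aeneas pipeline - assuming each inscription has already been encoded into a fixed-size fingerprint vector, used to surface candidate ‘parallels’:

```python
# Illustrative sketch: find the closest matches to a query inscription by
# cosine similarity over precomputed embedding ("fingerprint") vectors.
import numpy as np

def top_parallels(query_vec, corpus_vecs, k=5):
    """Return indices of the k most similar inscriptions by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity against every stored fingerprint
    return np.argsort(-sims)[:k]      # most similar candidates first
```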
In our study, historians found these ‘parallels’ to be helpful research starting points 9 out of 10 times - improving their confidence by 44%.
We tested Aeneas on the Res Gestae Divi Augusti – one of the most debated inscriptions.
Without prior knowledge, it successfully mapped out the leading scholarly theories on its dating, showing how AI can help model history in a quantitative way. 📊
We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖
It’s our first vision-language-action model to help make robots faster, more efficient, and more adaptable to new tasks and environments - without needing a constant internet connection. 🧵
What makes this new model unique?
🔵 It has the generality and dexterity of Gemini Robotics - but it can run locally on the device
🔵 It can handle a wide variety of complex, two-handed tasks out of the box
🔵 It can learn new skills with as few as 50-100 demonstrations
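To make that last point concrete, here is a minimal sketch of what learning from a handful of demonstrations can look like - plain behaviour cloning in PyTorch, with hypothetical observation/action dimensions; this is not DeepMind's actual fine-tuning interface:

```python
# Illustrative sketch: fine-tune a small policy by imitating demonstrated
# actions, assuming each demo step is already encoded as fixed-size vectors.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 64, 14   # hypothetical sizes (e.g. encoded observation, bi-arm joint targets)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def fine_tune(demos, epochs=20):
    """demos: list of (obs, act) tensor pairs collected from ~50-100 demonstrations."""
    obs = torch.stack([o for o, _ in demos])
    act = torch.stack([a for _, a in demos])
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(obs), act)   # imitate the demonstrated actions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return policy
```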
From humanoids to industrial bi-arm robots, the model supports multiple embodiments - even though it was pre-trained only on ALOHA - and it follows instructions from humans. 💬
These tasks may seem easy for us but require fine motor skills, precise manipulation and more. ↓