Congratulations to @demishassabis, John Jumper, and David Baker who will be awarded the Wiley Foundation 20th annual Wiley Prize in Biomedical Sciences on April 1: dpmd.ai/Wiley-Prize 🧵 1/7
Demis and John accept the award on behalf of the @DeepMind team who worked on #AlphaFold, which was recognised as a solution to the “protein folding problem” at CASP14 in Nov 2020: dpmd.ai/casp14_blog 2/7
From the start, we committed to giving broad access to our work and, in July 2021, we published our methods in @Nature along with the open-source code. 3/7
A week later, we launched the AlphaFold Protein Structure Database, in partnership with @emblebi - more than doubling the number of high-accuracy human protein structures available. Over 400,000 researchers have already used it: dpmd.ai/alphafolddb 4/7
In October 2021 we launched AlphaFold-Multimer, which properly accounts for multi-chain proteins (complexes): dpmd.ai/alphafold-mult… 5/7
And in January 2022 we added 27 new proteomes (190k+ proteins) to the database, 17 of which represent Neglected Tropical Diseases that continue to devastate the lives of more than 1 billion people globally: dpmd.ai/NTD 6/7
A huge congratulations to the whole team who made this breakthrough happen! Check out our AlphaFold timeline for further info: dpmd.ai/AFtimeline 7/7
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. 🎥
We’re also releasing an improved version of our text-to-image model, Imagen 3 - available to use in ImageFX through @LabsDotGoogle. → goo.gle/veo-2-imagen-3
Veo 2 is able to:
▪️ Create videos at resolutions up to 4K
▪️ Understand camera controls in prompts, such as wide shot, POV and drone shots
▪️ Recreate real-world physics and realistic human expression more faithfully
In head-to-head comparisons of outputs by human raters, it was preferred over other top video generation models. → goo.gle/veo-2
We’ve also enhanced Imagen 3’s ability to:
▪️ Produce diverse art styles: realism, fantasy, portraiture and more
▪️ Follow prompts more faithfully to produce accurate images
▪️ Generate brighter, more compositionally balanced visuals
Today in @Nature, we’re presenting GenCast: our new AI weather model which gives us the probabilities of different weather conditions up to 15 days ahead with state-of-the-art accuracy. ☁️⚡
Weather affects almost everything - from our daily lives 🏠 to agriculture 🚜 to producing renewable energy 🔋 and more.
Forecasting traditionally uses physics-based models which can take hours to run on a huge supercomputer.
We want to do it in minutes - and better.
Our previous AI model provided a single best estimate of future weather.
But weather can't be predicted exactly. So GenCast takes a probabilistic approach to forecasting: it makes 50 or more predictions of how the weather may evolve, showing us how likely different scenarios are.
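The probabilistic idea above can be sketched in a few lines: draw many forecast samples and read off probabilities as frequencies across the ensemble. This is a toy illustration, not GenCast's actual model - `sample_forecast` is a hypothetical stand-in for one generative rollout.

```python
import random

def sample_forecast(day, seed):
    """Hypothetical stand-in for one generative forecast rollout.
    Returns a simulated rainfall total (mm) for the given day."""
    rng = random.Random(seed)
    return max(0.0, rng.gauss(5.0 + 0.3 * day, 4.0))

def rain_probability(day, threshold_mm=10.0, n_members=50):
    """Estimate P(rain > threshold) as the fraction of ensemble
    members that exceed the threshold."""
    members = [sample_forecast(day, seed) for seed in range(n_members)]
    exceed = sum(1 for rainfall in members if rainfall > threshold_mm)
    return exceed / len(members)

print(f"P(>10mm rain on day 14) = {rain_probability(day=14):.2f}")
```

With 50 or more members, the ensemble gives likelihoods for different scenarios rather than a single best guess.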
Introducing AlphaQubit: our AI-based system that can more accurately identify errors inside quantum computers. 🖥️⚡
This research is a joint venture with @GoogleQuantumAI, published today in @Nature → goo.gle/3ZflWMn
The possibilities in quantum computing are compelling. ♾️
Quantum computers can solve certain problems in a few hours that would take a classical computer billions of years - potentially leading to advances in areas from drug discovery to materials design.
But building a stable quantum system is a challenge.
Qubits are units of information that underpin quantum computing. These can be disrupted by microscopic defects in hardware, heat, vibration, and more.
Quantum error correction addresses this by grouping multiple noisy physical qubits together to create redundancy - something called a “logical qubit”. A decoder then uses consistency checks to protect the information stored in that logical qubit.
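The redundancy idea can be illustrated with a classical repetition code - a simplified analogy, not AlphaQubit's actual decoder (real quantum error correction measures syndromes without reading the data qubits directly):

```python
def encode(bit, n=3):
    """Encode one logical bit redundantly across n physical bits."""
    return [bit] * n

def apply_noise(codeword, flip_positions):
    """Flip bits at the given positions (simulated hardware errors)."""
    return [b ^ 1 if i in flip_positions else b
            for i, b in enumerate(codeword)]

def decode(codeword):
    """Majority vote: recover the logical bit despite a minority of flips."""
    return 1 if sum(codeword) * 2 > len(codeword) else 0

logical = 1
noisy = apply_noise(encode(logical), flip_positions={0})  # one error
assert decode(noisy) == logical  # the logical bit survives
```

The decoder's job is exactly this recovery step, but on far noisier, quantum hardware - which is where a learned decoder like AlphaQubit can outperform hand-crafted ones.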
In our experiments, our decoder AlphaQubit made the fewest errors.
Our latest generative technology is now powering MusicFX DJ in @LabsDotGoogle - and we’ve also updated Music AI Sandbox, a suite of experimental music tools which can streamline creation. 🎵
This will make it easier than ever to make music in real time with AI. ✨goo.gle/4eTg28Z
MusicFX DJ lets you input multiple prompts and include details on instruments, genres and vibes to create music. 🎛️
We’ve updated and improved the interface using feedback from @YouTube’s Music AI Incubator.
Two key innovations lie at the core of MusicFX DJ.
🔘 We adapted our models to perform real-time streaming by training them to generate the next 2 seconds of music, based on the previous 10 seconds.
🔘 A “style embedding” steered by the player: a mix of text prompt embeddings, weighted by the slider values
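A rough sketch of how these two pieces might fit together - hypothetical names (`mix_style_embedding`, `generate_stream`) standing in for the real model:

```python
def mix_style_embedding(prompt_embeddings, slider_values):
    """Blend several text prompt embeddings into one style embedding,
    weighted by the player's slider values (normalised to sum to 1)."""
    total = sum(slider_values)
    weights = [v / total for v in slider_values]
    dim = len(prompt_embeddings[0])
    return [sum(w * emb[i] for w, emb in zip(weights, prompt_embeddings))
            for i in range(dim)]

def generate_stream(generate_chunk, n_chunks, context_secs=10, chunk_secs=2):
    """Autoregressive streaming: each 2-second chunk is generated
    conditioned on (up to) the previous 10 seconds of audio.
    `generate_chunk` is a stand-in for the music model."""
    audio = []  # one element per second of audio, for simplicity
    for _ in range(n_chunks):
        context = audio[-context_secs:]
        audio.extend(generate_chunk(context, chunk_secs))
    return audio

# Demo with a dummy chunk generator that emits silence.
stream = generate_stream(lambda context, secs: [0.0] * secs, n_chunks=5)
```

Because each chunk only needs a short context window, generation can keep pace with playback, which is what makes the live DJ experience possible.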
Meet our AI-powered robot that’s ready to play table tennis. 🤖🏓
It’s the first agent to achieve amateur human-level performance in this sport. Here’s how it works. 🧵
Robotic table tennis has served as a benchmark for this type of research since the 1980s.
The robot has to be good at low-level skills, such as returning the ball, as well as high-level skills, like strategizing and long-term planning to achieve a goal.
To train the robot, we gathered a dataset of initial table tennis ball states - which included information about position, speed, and spin.
The system practiced using this library and learned different skills, like forehand topspin, backhand targeting, and returning serves.
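Conceptually, such a state library might look like the sketch below - the field names and value ranges are illustrative, not the actual schema from the research:

```python
import random
from dataclasses import dataclass

@dataclass
class BallState:
    """One initial ball state: position (m), velocity (m/s), spin (rev/s).
    Hypothetical fields for illustration only."""
    position: tuple
    velocity: tuple
    spin: float

def build_state_library(n, seed=0):
    """Build a library of n randomised initial ball states."""
    rng = random.Random(seed)
    return [BallState(
        position=(rng.uniform(-0.7, 0.7), rng.uniform(1.0, 2.0), rng.uniform(0.2, 0.6)),
        velocity=(rng.uniform(-1.0, 1.0), rng.uniform(-6.0, -3.0), rng.uniform(-1.0, 1.0)),
        spin=rng.uniform(-50.0, 50.0),
    ) for _ in range(n)]

def sample_practice_batch(library, k, seed=None):
    """Draw k initial states for one batch of practice episodes."""
    return random.Random(seed).sample(library, k)
```

Practicing against states sampled from such a library is how the system could encounter enough variety - different speeds, placements and spins - to learn skills like forehand topspin and returning serves.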