We’re back with more suggestions from our researchers for ways to expand your knowledge of AI.
Today’s #AtHomeWithAI recommendations are from research scientist Kimberly Stachenfeld (@neuro_kim) (1/7)
She recommends “The Scientist in the Crib” [longer listen] by @AlisonGopnik, Andrew Meltzoff, & Patricia K. Kuhl for those who are interested in what early learning tells us about the mind.
Interested in computational systems neuroscience? @neuro_kim recommends the lecture series from @MBLScience to learn more about circuits and system properties of the brain.
@neuro_kim says “Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems” [longer read] by Peter Dayan & L.F. Abbott is a must-read for anyone looking for an introduction to the topic.
Kimberly also recommends “The Appeal of Parallel Distributed Processing” [longer read] by James McClelland, the late David Rumelhart, & Geoffrey Hinton, described as “a classic for anyone who wants to understand the roots of DL”.
Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery.
It’s able to:
🔘 Design faster matrix multiplication algorithms
🔘 Find new solutions to open math problems
🔘 Make data centers, chip design and AI training more efficient across @Google. 🧵
Our system uses:
🔵 LLMs: To synthesize information about problems and previous attempts to solve them, and to propose new versions of algorithms.
🔵 Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
🔵 Evolution: Iteratively improving the best algorithms found, and re-combining ideas from different solutions to find even better ones.
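For intuition, here's a minimal sketch of that propose → evaluate → evolve loop. It is not AlphaEvolve's actual code: the candidates are just lists of numbers, and propose_candidate() is a random stand-in for the LLM that would normally rewrite the strongest programs found so far.

```python
import random

def propose_candidate(parents):
    """Stand-in for the LLM step: recombine two parents and apply a small mutation."""
    a, b = parents
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]                    # re-combine ideas from different solutions
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.1)             # propose a new variation
    return child

def evaluate(candidate):
    """Automated evaluation: any score that can be measured clearly and systematically."""
    return -sum(x * x for x in candidate)        # toy objective: push all values toward zero

def evolve(population, generations=200, keep=10):
    scored = [(evaluate(c), c) for c in population]
    for _ in range(generations):
        scored.sort(key=lambda sc: sc[0], reverse=True)
        scored = scored[:keep]                   # keep the best candidates found so far
        parents = [c for _, c in random.sample(scored, 2)]
        child = propose_candidate(parents)
        scored.append((evaluate(child), child))  # iteratively improve the population
    return max(scored, key=lambda sc: sc[0])

best_score, best = evolve([[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)])
print(best_score, best)
```

The key property is that only the automated score decides which candidates survive, so any problem with a systematically measurable metric can plug into the same loop.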
Over the past year, we’ve deployed algorithms discovered by AlphaEvolve across @Google’s computing ecosystem, including data centers, software and hardware.
It’s been able to:
🔧 Optimize data center scheduling
🔧 Assist in hardware design
🔧 Enhance AI training and inference
We’re helping robots self-improve with the power of LLMs. 🤖
Introducing the Summarize, Analyze, Synthesize (SAS) prompt, which analyzes how a robot performed a task based on its previous actions and then suggests ways for it to get better, demonstrated here through table tennis. 🏓
Large language models like Gemini have an inherent ability to solve problems, without needing to be retrained for specific jobs.
Robots can use these models to improve how they operate over time by interacting with the world and learning from those interactions. 🦾 goo.gle/4jVFsoE
With the SAS prompt, we can now use language models like Gemini to learn from a robot's history.
This allows the model to analyze parameter effects ⚡ and suggest ways to improve, much like a real-life table tennis coach. 💡 goo.gle/3GvWQ54
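As a rough sketch of how such a loop could be chained over a robot's interaction history: the record fields and the query_llm() helper below are hypothetical placeholders, not the published SAS prompt.

```python
# A hedged sketch of a Summarize -> Analyze -> Synthesize pass over a robot's history.
# The record fields and query_llm() are assumptions for illustration.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model such as Gemini."""
    raise NotImplementedError("Wire this up to your LLM API of choice.")

def sas_step(history: list[dict]) -> str:
    # Summarize: condense the raw history (parameters used, observed outcomes).
    summary = query_llm(
        "Summarize these table tennis attempts, noting which control parameters "
        f"were used and what happened:\n{history}"
    )
    # Analyze: ask the model how each parameter appears to affect performance.
    analysis = query_llm(
        f"Given this summary, analyze how each parameter affected the returns:\n{summary}"
    )
    # Synthesize: request concrete new parameters to try next, like a coach would.
    return query_llm(
        f"Based on this analysis, suggest updated parameters for the next attempt:\n{analysis}"
    )

history = [
    {"paddle_angle_deg": 30, "swing_speed": 0.8, "result": "ball hit the net"},
    {"paddle_angle_deg": 40, "swing_speed": 0.8, "result": "ball landed on the table"},
]
# suggestion = sas_step(history)  # uncomment once query_llm is implemented
```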
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. 🎥
We’re also releasing an improved version of our text-to-image model, Imagen 3 - available to use in ImageFX through @LabsDotGoogle. → goo.gle/veo-2-imagen-3
Veo 2 is able to:
▪️ Create videos at resolutions up to 4K
▪️ Understand camera controls in prompts, such as wide shot, POV and drone shots
▪️ Better recreate real-world physics and realistic human expression
In head-to-head comparisons judged by human raters, Veo 2's outputs were preferred over those of other top video generation models. → goo.gle/veo-2
We’ve also enhanced Imagen 3’s ability to:
▪️ Produce diverse art styles: realism, fantasy, portraiture and more
▪️ More faithfully turn prompts into accurate images
▪️ Generate brighter, more compositionally balanced visuals
Today in @Nature, we’re presenting GenCast: our new AI weather model which gives us the probabilities of different weather conditions up to 15 days ahead with state-of-the-art accuracy. ☁️⚡
Weather affects almost everything - from our daily lives 🏠 to agriculture 🚜 to producing renewable energy 🔋 and more.
Forecasting traditionally uses physics-based models, which can take hours to run on a huge supercomputer.
We want to do it in minutes - and better.
Our previous AI model provided a single, best estimate of future weather.
But weather can't be predicted exactly, so GenCast takes a probabilistic approach to forecasting. It makes 50 or more predictions of how the weather may evolve, showing us how likely different scenarios are.
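To make the probabilistic idea concrete, here's a toy sketch of how an ensemble of forecasts becomes a scenario probability. The variable, numbers, and threshold are made up for illustration; GenCast itself is a learned generative model that produces the ensemble members.

```python
import numpy as np

# Toy illustration of probabilistic forecasting: instead of one best estimate,
# an ensemble of forecasts is turned into the probability of a scenario.
rng = np.random.default_rng(0)

# Pretend these are 50 ensemble members' predicted wind speeds (m/s) at one
# location, 10 days ahead; a probabilistic model samples many such members.
ensemble = rng.normal(loc=12.0, scale=3.0, size=50)

single_best_estimate = ensemble.mean()        # roughly what a deterministic forecast gives you
prob_strong_wind = (ensemble > 15.0).mean()   # fraction of members above 15 m/s

print(f"Best estimate: {single_best_estimate:.1f} m/s")
print(f"Chance of >15 m/s winds: {prob_strong_wind:.0%}")
```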
Introducing AlphaQubit: our AI-based system that can more accurately identify errors inside quantum computers. 🖥️⚡
This research is a collaboration with @GoogleQuantumAI, published today in @Nature → goo.gle/3ZflWMn
The possibilities in quantum computing are compelling. ♾️
Quantum computers can solve certain problems in a few hours that would take a classical computer billions of years. This could help drive advances in areas from drug discovery to materials design.
But building a stable quantum system is a challenge.
Qubits are units of information that underpin quantum computing. These can be disrupted by microscopic defects in hardware, heat, vibration, and more.
Quantum error correction solves this by grouping multiple noisy qubits into something called a “logical qubit”, creating redundancy. A decoder then uses consistency checks to protect the information stored in it.
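As a toy illustration of what a decoder does, consider a classical 3-bit repetition code with majority-vote decoding. This is far simpler than the surface codes AlphaQubit decodes, and the noise model below is an assumption for illustration, but it shows how redundancy plus a consistency check cuts the error rate.

```python
import random

def encode(bit: int) -> list[int]:
    return [bit, bit, bit]                    # store one logical bit in three physical bits

def add_noise(bits: list[int], flip_prob: float = 0.05) -> list[int]:
    return [b ^ (random.random() < flip_prob) for b in bits]  # each bit may flip independently

def decode(bits: list[int]) -> int:
    return int(sum(bits) >= 2)                # majority vote acts as the consistency check

errors, trials = 0, 100_000
for _ in range(trials):
    logical = random.randint(0, 1)
    if decode(add_noise(encode(logical))) != logical:
        errors += 1
print(f"Logical error rate: {errors / trials:.4f} (vs 0.05 per physical bit)")
```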
In our experiments, our decoder AlphaQubit made the fewest errors.