We’re back with more suggestions from our researchers for ways to expand your knowledge of AI.
Today’s #AtHomeWithAI recommendations are from research scientist Kimberly Stachenfeld (@neuro_kim) (1/7)
She recommends “The Scientist in the Crib” [longer listen] by @AlisonGopnik, Andrew Meltzoff, & Patricia K. Kuhl for those who are interested in what early learning tells us about the mind.
Interested in computational systems neuroscience? @neuro_kim recommends the lecture series from @MBLScience to learn more about circuits and system properties of the brain.
@neuro_kim says Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems [longer read] by Peter Dayan & L.F. Abbott is a must-read for anyone looking for an introduction to the topic.
Kimberly recommends “The Appeal of Parallel Distributed Processing” [longer read] by James McClelland, the late David Rumelhart, & Geoffrey Hinton - described as “a classic for anyone who wants to understand the roots of DL”.
We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖
It’s our first vision-language-action model to help make robots faster, more efficient, and more adaptable to new tasks and environments - without needing a constant internet connection. 🧵
What makes this new model unique?
🔵 It has the generality and dexterity of Gemini Robotics - but it can run locally on the device (rough sketch of a local control loop below ↓)
🔵 It can handle a wide variety of complex, two-handed tasks out of the box
🔵 It can learn new skills with as few as 50-100 demonstrations
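To make “runs locally” concrete, here’s a rough, hypothetical sketch of a vision-language-action (VLA) control loop that never leaves the robot. Every class and method name below is an illustrative placeholder, not the actual Gemini Robotics SDK:

```python
# Hypothetical sketch of an on-device VLA control loop - all names here are
# illustrative placeholders, NOT the real Gemini Robotics SDK API.

import time


class FakeRobot:
    """Minimal stand-in so the sketch runs without hardware."""

    def __init__(self, steps: int = 5):
        self._steps = steps

    def task_done(self) -> bool:
        self._steps -= 1
        return self._steps < 0

    def get_camera_image(self):
        return None  # a real robot returns a camera frame here

    def apply_action(self, action):
        print("applying action:", action[:3], "...")


class OnDeviceVLAPolicy:
    """Stands in for a locally loaded VLA checkpoint - no network calls."""

    def predict_action(self, image, instruction: str):
        # A real VLA model maps (camera frame, text instruction) -> actions.
        return [0.0] * 14  # e.g. joint targets for a bi-arm robot


def control_loop(robot, policy, instruction: str, hz: float = 10.0):
    """Run the policy entirely on-device at a fixed control rate."""
    while not robot.task_done():
        action = policy.predict_action(robot.get_camera_image(), instruction)
        robot.apply_action(action)
        time.sleep(1.0 / hz)


control_loop(FakeRobot(), OnDeviceVLAPolicy(), "fold the shirt on the table")
```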
From humanoids to industrial bi-arm robots, the model adapts to multiple embodiments and follows instructions from humans - even though it was pre-trained only on ALOHA. 💬
These tasks may seem easy to us, but they require fine motor skills, precise manipulation and more. ↓
Anyone can now use 2.5 Flash and Pro to build and scale production-ready AI applications. 🙌
We’re also launching 2.5 Flash-Lite in preview: the fastest model in the 2.5 family to respond to requests, with the lowest cost too. 🧵
2.5 Flash-Lite now supports:
🔹Thinking: improving performance and transparency through step-by-step reasoning
🔹Tool use: including Search and code execution - plus a 1 million token context window, just like 2.5 Flash and Pro (example below ↓)
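Here’s a quick sketch of turning on thinking and Search grounding with the google-genai Python SDK. The model string is illustrative: at preview launch, 2.5 Flash-Lite carried a dated “-preview-” suffix, so check the current model list before running:

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",  # preview builds use a dated suffix
    contents="Summarize this week's top AI research news in three bullets.",
    config=types.GenerateContentConfig(
        # Thinking: allocate a budget of reasoning tokens before answering.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
        # Tool use: ground the answer with Google Search.
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```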
⚡ 2.5 Flash-Lite is our most cost-efficient model yet - with lower latency than 2.0 Flash-Lite and 2.0 Flash on a broad sample of prompts.
It also delivers all-around higher quality than 2.0 Flash-Lite on coding, math, science, reasoning and multimodal benchmarks.
Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery.
It’s able to:
🔘 Design faster matrix multiplication algorithms
🔘 Find new solutions to open math problems
🔘 Make data centers, chip design and AI training more efficient across @Google. 🧵
Our system uses:
🔵 LLMs: To synthesize information about problems as well as previous attempts to solve them - and to propose new versions of algorithms
🔵 Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
🔵 Evolution: Iteratively improving the best algorithms found, and recombining ideas from different solutions to find even better ones (minimal sketch below ↓)
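As a toy illustration of that propose → evaluate → evolve loop, here’s a minimal sketch. `llm_propose` stands in for a Gemini call; nothing here is AlphaEvolve’s actual implementation:

```python
# Toy sketch of the propose -> evaluate -> evolve loop described above.

import random

def evaluate(program: str) -> float:
    """Automated evaluator: scores a candidate. AlphaEvolve runs the code and
    measures it systematically; here it's a deliberately silly toy metric."""
    return -abs(len(program) - 40)

def llm_propose(parent_a: str, parent_b: str) -> str:
    """Stand-in for an LLM that reads previous attempts and proposes a new
    version - here, naive string recombination as a placeholder."""
    cut = random.randint(0, min(len(parent_a), len(parent_b)))
    return parent_a[:cut] + parent_b[cut:]

def evolve(seeds: list[str], generations: int = 20, population: int = 8) -> str:
    pool = list(seeds)
    for _ in range(generations):
        pool.sort(key=evaluate, reverse=True)  # keep the fittest candidates
        pool = pool[:population]
        a, b = random.sample(pool[: max(2, len(pool) // 2)], 2)
        pool.append(llm_propose(a, b))         # propose a new candidate
    return max(pool, key=evaluate)

print(evolve(["def f(x): return 2 * x", "def f(x): return x + x  # variant"]))
```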
Over the past year, we’ve deployed algorithms discovered by AlphaEvolve across @Google’s computing ecosystem, including data centers, software and hardware.
It’s been able to:
🔧 Optimize data center scheduling
🔧 Assist in hardware design
🔧 Enhance AI training and inference
We’re helping robots self-improve with the power of LLMs. 🤖
Introducing the Summarize, Analyze, Synthesize (SAS) prompt, which reviews how a robot performed a task based on its previous actions, then suggests ways for it to improve - demonstrated through table tennis. 🏓
Large language models like Gemini have an inherent ability to solve problems, without needing to be retrained for specific jobs.
Robots can use these models to improve how they operate over time, by interacting with the world, and learning from those interactions. 🦾 goo.gle/4jVFsoE
With the SAS prompt, we can now use language models like Gemini to learn from a robot's history.
This allows the model to analyze the effects of control parameters ⚡ and suggest ways to improve - much like a real-life table tennis coach (sketch below ↓). 💡 goo.gle/3GvWQ54
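The thread doesn’t reproduce the paper’s exact prompt, so here’s an illustrative sketch of what an SAS-style prompt over a robot’s trial history could look like. Every parameter name below is hypothetical:

```python
# Illustrative Summarize-Analyze-Synthesize (SAS) style prompt built from a
# robot's trial history. The real SAS prompt wording may differ, and all
# parameter names are hypothetical.

def build_sas_prompt(task: str, history: list[dict]) -> str:
    trials = "\n".join(
        f"- params={t['params']}, outcome={t['outcome']}" for t in history
    )
    return (
        f"You are coaching a robot learning to {task}.\n"
        f"Past trials (control parameters and outcomes):\n{trials}\n\n"
        "1. SUMMARIZE the robot's recent performance.\n"
        "2. ANALYZE how each parameter appears to affect the outcome.\n"
        "3. SYNTHESIZE new parameter values likely to improve the next trial."
    )

history = [
    {"params": {"paddle_angle": 12, "swing_speed": 0.8}, "outcome": "ball into net"},
    {"params": {"paddle_angle": 18, "swing_speed": 0.8}, "outcome": "ball long"},
    {"params": {"paddle_angle": 15, "swing_speed": 0.7}, "outcome": "ball on table"},
]
prompt = build_sas_prompt("return a table tennis serve", history)
print(prompt)  # send to Gemini, parse the suggested parameters, run the next trial
```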
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. 🎥
We’re also releasing an improved version of our text-to-image model, Imagen 3 - available to use in ImageFX through @LabsDotGoogle. → goo.gle/veo-2-imagen-3
Veo 2 is able to:
▪️ Create videos at resolutions up to 4K
▪️ Understand camera controls in prompts, such as wide shot, POV and drone shots
▪️ Recreate real-world physics and realistic human expression more faithfully (API sketch below ↓)
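For developers, video generation also landed in the Gemini API. A hedged sketch with the google-genai SDK - the model string, config fields and polling pattern reflect our reading of the current docs, so verify before relying on it:

```python
import time

from google import genai
from google.genai import types

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

# Camera language ("drone shot", "POV", "wide shot") goes straight into the prompt.
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # check the docs for the current version string
    prompt="Drone shot sweeping over a coastal cliff at golden hour, waves below",
    config=types.GenerateVideosConfig(aspect_ratio="16:9", number_of_videos=1),
)

# Video generation is long-running: poll the operation until it completes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_clip.mp4")
```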
In head-to-head comparisons judged by human raters, Veo 2’s outputs were preferred over those of other top video generation models. → goo.gle/veo-2
We’ve also enhanced Imagen 3’s ability to:
▪️ Produce diverse art styles: realism, fantasy, portraiture and more
▪️ More faithfully turn prompts into accurate images
▪️ Generate brighter, more compositionally balanced visuals
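Imagen 3 is available in ImageFX, and it can also be called programmatically. A hedged sketch with the google-genai SDK - the model version string changes over time, so check the docs:

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GOOGLE_API_KEY from the environment

result = client.models.generate_images(
    model="imagen-3.0-generate-002",  # version string may differ - check the docs
    prompt="Watercolor portrait of a lighthouse keeper, soft dawn palette",
    config=types.GenerateImagesConfig(number_of_images=1, aspect_ratio="3:4"),
)
result.generated_images[0].image.save("keeper.png")
```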