In the first lecture of the series, Research Scientist Hado introduces the course and explores the fascinating connection between reinforcement learning and artificial intelligence: dpmd.ai/RLseries1
In lecture two, Research Scientist Hado explains why it's important for learning agents to balance exploring their environment with exploiting the knowledge they have already acquired: dpmd.ai/RLseries2
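As a taste of the trade-off the lecture covers, here's a minimal epsilon-greedy sketch in Python. This is not code from the lecture; the Bernoulli bandit and all parameter values are illustrative assumptions.

```python
import numpy as np

def epsilon_greedy_bandit(arm_probs, steps=1000, epsilon=0.1, seed=0):
    """Toy multi-armed bandit: explore with prob. epsilon, else exploit."""
    rng = np.random.default_rng(seed)
    n_arms = len(arm_probs)
    counts = np.zeros(n_arms)   # pulls per arm
    values = np.zeros(n_arms)   # running mean reward per arm
    for _ in range(steps):
        # Explore a random arm with probability epsilon, otherwise
        # exploit the arm with the best current estimate.
        a = rng.integers(n_arms) if rng.random() < epsilon else values.argmax()
        reward = float(rng.random() < arm_probs[a])  # Bernoulli reward
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean
    return values

# e.g. epsilon_greedy_bandit([0.2, 0.5, 0.8]) -> estimates near the true probs
```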
In the third lecture, Research Scientist Diana shows us how to solve MDPs with dynamic programming to extract accurate predictions and good control policies: dpmd.ai/RLseries3
In lecture four, Diana covers dynamic programming algorithms as contraction mappings, looking at when and how they converge to the right solutions: dpmd.ai/RLseries4
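For a concrete feel for these two lectures, here's a minimal value-iteration sketch (illustrative only; it assumes a toy tabular MDP with known transitions P[s][a] = [(prob, next_state, reward), ...]). Because the Bellman optimality operator is a gamma-contraction in the max-norm, the loop converges to the unique optimal values:

```python
import numpy as np

def value_iteration(P, n_states, n_actions, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality operator until it converges."""
    V = np.zeros(n_states)
    while True:
        Q = np.array([
            [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
             for a in range(n_actions)]
            for s in range(n_states)
        ])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:  # sup-norm stopping criterion
            return V_new, Q.argmax(axis=1)   # optimal values, greedy policy
        V = V_new
```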
In part two of the model-free lecture, Hado explains how to use prediction algorithms for policy improvement, leading to algorithms - like Q-learning - that can learn good behaviour policies from sampled experience: dpmd.ai/RLseries6
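A minimal tabular Q-learning sketch in the same spirit (illustrative, not the lecture's code; it assumes a simplified environment where env.reset() returns a state index and env.step(a) returns (next_state, reward, done)):

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration over current value estimates.
            a = rng.integers(n_actions) if rng.random() < epsilon else Q[s].argmax()
            s2, r, done = env.step(a)
            # Off-policy TD target: bootstrap from the greedy next action.
            target = r + gamma * (0.0 if done else Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```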
In this lecture, Hado explains how to combine deep learning with reinforcement learning for deep reinforcement learning. He looks at the properties and difficulties that arise when combining function approximation with RL algorithms: dpmd.ai/RLseries7
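To make the combination concrete, here's a sketch of a single semi-gradient Q-learning update with a linear function approximator, a toy stand-in for a deep network (all names and values are illustrative assumptions):

```python
import numpy as np

def linear_q_update(w, phi_s, a, r, phi_s2, done, gamma=0.99, lr=0.01):
    """w: (n_actions, n_features) weights; phi_*: state feature vectors."""
    q_sa = w[a] @ phi_s
    target = r + (0.0 if done else gamma * np.max(w @ phi_s2))
    # Semi-gradient step: the bootstrapped target is treated as a constant,
    # which is exactly where many of the difficulties discussed arise.
    w[a] += lr * (target - q_sa) * phi_s
    return w
```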
In this lecture, Research Engineer Matteo explains how to learn and use models, including algorithms like Dyna and Monte-Carlo tree search (MCTS): dpmd.ai/RLseries8
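A minimal Dyna-Q sketch showing how a learned model lets an agent plan from simulated experience (illustrative only; it reuses the simplified env interface assumed in the earlier sketches):

```python
import numpy as np

def dyna_q(env, n_states, n_actions, episodes=100,
           alpha=0.1, gamma=0.95, epsilon=0.1, planning_steps=20):
    Q = np.zeros((n_states, n_actions))
    model = {}  # (s, a) -> (r, s2): last observed outcome
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = rng.integers(n_actions) if rng.random() < epsilon else Q[s].argmax()
            s2, r, done = env.step(a)
            target = r + gamma * (0.0 if done else Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])   # direct RL from real experience
            model[(s, a)] = (r, s2)                 # model learning
            for _ in range(planning_steps):         # planning from simulated steps
                (ps, pa), (pr, ps2) = list(model.items())[rng.integers(len(model))]
                Q[ps, pa] += alpha * (pr + gamma * Q[ps2].max() - Q[ps, pa])
            s = s2
    return Q
```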
We’re bringing powerful AI directly onto robots with Gemini Robotics On-Device. 🤖
It’s our first vision-language-action model, built to make robots faster, more efficient, and adaptable to new tasks and environments - without needing a constant internet connection. 🧵
What makes this new model unique?
🔵 It has the generality and dexterity of Gemini Robotics - but it can run locally on the device
🔵 It can handle a wide variety of complex, two-handed tasks out of the box
🔵 It can learn new skills with as few as 50-100 demonstrations
Although it was pre-trained only on ALOHA, the model supports multiple embodiments - from humanoids to industrial bi-arm robots - while following instructions from humans. 💬
These tasks may seem easy for us but require fine motor skills, precise manipulation and more. ↓
Anyone can now use 2.5 Flash and Pro to build and scale production-ready AI applications. 🙌
We’re also launching 2.5 Flash-Lite in preview: the fastest model in the 2.5 family to respond to requests, with the lowest cost too. 🧵
2.5 Flash-Lite now supports:
🔹Thinking: improving performance and transparency through step-by-step reasoning
🔹Tool-use: including Search, code execution and a 1 million token context window - just like 2.5 Flash and Pro
⚡ 2.5 Flash-Lite is our most cost-efficient model yet - with lower latency than 2.0 Flash-Lite and 2.0 Flash on a broad sample of prompts.
It also delivers all-around higher quality than 2.0 Flash-Lite on coding, math, science, reasoning and multimodal benchmarks.
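For example, a minimal call with thinking enabled might look like this using the google-genai Python SDK (the model id and thinking budget here are assumptions - check the current docs for exact values):

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",  # assumed model id; verify in the docs
    contents="Explain step by step why the sky is blue.",
    config=types.GenerateContentConfig(
        # Allocate a token budget for step-by-step reasoning.
        thinking_config=types.ThinkingConfig(thinking_budget=512),
    ),
)
print(response.text)
```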
Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery.
It’s able to:
🔘 Design faster matrix multiplication algorithms
🔘 Find new solutions to open math problems
🔘 Make data centers, chip design and AI training more efficient across @Google. 🧵
Our system uses:
🔵 LLMs: To synthesize information about problems as well as previous attempts to solve them - and to propose new versions of algorithms
🔵 Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
🔵 Evolution: Iteratively improving the best algorithms found, and re-combining ideas from different solutions to find even better ones.
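Schematically, that loop might look like the sketch below. llm_propose and evaluate are hypothetical stand-ins, and AlphaEvolve's real prompting, evaluation and population mechanics are far richer:

```python
def evolve(seed_program, llm_propose, evaluate, generations=100, population=20):
    """Evolve candidate programs: the LLM proposes, an automated evaluator scores."""
    pool = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        ranked = sorted(pool, key=lambda t: t[0], reverse=True)
        parents = [prog for _, prog in ranked[:5]]  # recombine ideas from top solutions
        child = llm_propose(parents)                # LLM synthesizes a new program
        pool.append((evaluate(child), child))       # clear, systematic measurement
        pool = sorted(pool, key=lambda t: t[0], reverse=True)[:population]
    return pool[0]  # best (score, program) pair found
```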
Over the past year, we’ve deployed algorithms discovered by AlphaEvolve across @Google’s computing ecosystem, including data centers, software and hardware.
It’s been able to:
🔧 Optimize data center scheduling
🔧 Assist in hardware design
🔧 Enhance AI training and inference
We’re helping robots self-improve with the power of LLMs. 🤖
Introducing the Summarize, Analyze, Synthesize (SAS) prompt, which analyzes how robots perform tasks based on their previous actions and then suggests ways for them to improve - demonstrated through table tennis. 🏓
Large language models like Gemini have an inherent ability to solve problems without being retrained for specific tasks.
Robots can use these models to improve how they operate over time, by interacting with the world, and learning from those interactions. 🦾 goo.gle/4jVFsoE
With the SAS prompt, we can now use language models like Gemini to learn from a robot's history.
This allows the model to analyze the effects of control parameters ⚡ and suggest ways to improve - much like a real-life table tennis coach. 💡 goo.gle/3GvWQ54
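As a rough illustration of the idea, a SAS-style prompt could be assembled from the robot's interaction history like this (a hypothetical sketch; the actual prompt used in the work differs):

```python
def build_sas_prompt(history):
    """history: list of dicts like {"params": ..., "outcome": ...} per attempt."""
    log = "\n".join(
        f"- params={h['params']}, outcome={h['outcome']}" for h in history
    )
    return (
        "You are coaching a table-tennis robot.\n"
        f"Past attempts:\n{log}\n\n"
        "1. Summarize the robot's recent performance.\n"
        "2. Analyze how each control parameter affected the outcomes.\n"
        "3. Synthesize a new set of parameters likely to improve the next attempt."
    )
```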
Today, we’re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. 🎥
We’re also releasing an improved version of our text-to-image model, Imagen 3 - available to use in ImageFX through @LabsDotGoogle. → goo.gle/veo-2-imagen-3
Veo 2 is able to:
▪️ Create videos at resolutions up to 4K
▪️ Understand camera controls in prompts, such as wide shot, POV and drone shots
▪️ Better recreate real-world physics and realistic human expression
In head-to-head comparisons judged by human raters, Veo 2's outputs were preferred over those of other top video generation models. → goo.gle/veo-2
We’ve also enhanced Imagen 3’s ability to:
▪️ Produce diverse art styles: realism, fantasy, portraiture and more
▪️ More faithfully turn prompts into accurate images
▪️ Generate brighter, more compositionally balanced visuals