That's a wrap for day 2 of the world's largest consumer tech event, CES 2025.
The top 10 tech and gadget reveals from day 2:
1. A stretchable Micro LED display that turns 2D into 3D by Samsung
2. A multitasking household robot that handles vacuuming, tidying, air purification, pet monitoring, and even food delivery while you sit on the couch, by SwitchBot
3. An immersive location-based entertainment concept that lets players use flashlights and guns in an LED environment, by Sony
Google just released Veo 2, a new state-of-the-art AI video model.
In testing, Veo beat OpenAI's Sora in BOTH quality and prompt adherence.
The video compilation below is 100% created by AI (more details in thread):
Veo can generate 8-second videos at up to 4K resolution (720p at launch).
The model also features:
— Better understanding of physics for more natural movement, lighting, etc.
— Enhanced clarity and sharpness of outputs
— Reduced hallucinated objects and details
The model also excels at a variety of cinematic styles, with better camera control for more creative storytelling.
I've been an early tester + had in-person demos for most of Google’s AI projects announced today.
I found several practical use cases that will benefit everyday people.
12 use cases of Project Astra/Deep Research Agents/Project Mariner (beyond the hype):
Project Astra: Google's AI agent that can 'see the world' using your phone camera
Use cases that stood out to me:
> Summarizing a book page in seconds and chatting with it for follow-ups on complex topics (professor-in-your-pocket)
> Identifying a rash: just seasonal hives or something more serious?
> Real-time translation of languages, sign language, and books (worked great for Japanese writing → English summary).
> Locating landscapes in photos and estimating their distance using the Google Maps integration.
> Remembering cookbook recipes and recommending wine pairings based on the recipe and budget.
> Summarizing thousands of Amazon/Airbnb reviews in seconds using mobile screen sharing, with highlights of any negative feedback.
Deep Research Agent: Google's new research assistant that creates full reports on any topic and links back to the relevant sources.
Use cases that stood out to me:
> Coming up with interview questions based on what people are curious about across the internet.
> Conducting market research on stocks (e.g., "Why did Google stock go up today?").
> Creating a full Christmas gift plan for my mom (based on current trends and her preferences highlighted in the prompt).
> Creating an analysis and report of my health/fitness and how I can improve based on my Whoop data.
AI NEWS: Meta just unexpectedly dropped Llama 3.3 — a 70B model that's ~25x cheaper than GPT-4o.
Plus: a new Gemini model from Google, OpenAI's reinforcement fine-tuning, free access to xAI's Grok, Copilot Vision, ElevenLabs GenFM, and more.
Here's what you need to know:
Meta just dropped Llama 3.3 — a 70B open model that offers similar performance to Llama 3.1 405B, but significantly faster and cheaper.
It's also ~25x cheaper than GPT-4o.
Text only for now, and available to download at llama.com/llama-downloads
Google launched a new model, gemini-exp-1206, on Gemini's first birthday (today!)
It tops the Chatbot Arena rankings in ALL domains.
It also looks like it's available for everyone in AI Studio for free!
And it's even more advanced than the February demo.
A group of early beta testers just dropped access to what looks to be Sora's 'turbo' variant on Hugging Face, citing concerns about OpenAI's early access program and artist compensation.
The leaked version generates 1080p 10-second clips and seems to be processing WAY faster than the previously reported 10-minute render times.
In September, The Information reported that a new version of Sora was being trained to address long generation times and better physics — and potentially built-in features like inpainting and image generation: