For those wondering my quick take on what's happening right now with R1 and Janus
1. GPU demand will not go down
2. OpenAI is not done for, but open source and China are showing they're far closer than anticipated
3. There's way too much misinfo being spread by mainstream media right now (almost seems on purpose?)
4. DeepSeek open-sourcing R1 is still a huge gift to developers and overall AI progress
I haven't seen this much confusion and uncertainty on my TL for ages...
That's a wrap for day 2 of the world's largest consumer tech event, CES 2025.
The top 10 tech and gadget reveals from day 2:
1. A stretchable Micro LED display that turns 2D into 3D by Samsung
2. A multitasking household robot that does everything from vacuuming and organizing to air purification, pet monitoring, and even delivering food to you on the couch, by SwitchBot
3. An immersive location-based entertainment concept that lets players use flashlights and guns in an LED environment, by Sony
Google just released Veo 2, a new state-of-the-art AI video model.
In testing, Veo beat OpenAI Sora in BOTH quality and prompt adherence.
The video compilation below is 100% created by AI (more details in thread):
Veo can generate 8-second videos at up to 4K resolution (720p at launch).
The model also features:
— Better understanding of physics for more natural movement, lighting, etc.
— Enhanced clarity and sharpness of outputs
— Reduced hallucinated objects and details
The model also excels at a variety of cinematic styles, with better camera control for more creative storytelling.
I've been an early tester + had in-person demos for most of Google’s AI projects announced today.
I found several practical use cases that will benefit everyday people.
12 use cases of Project Astra/Deep Research Agents/Project Mariner (beyond the hype):
Project Astra: Google's AI agent that can 'see the world' using your phone camera
Use cases that stood out to me:
> Summarizing a book page in seconds and chatting with it for follow-ups on complex topics (professor-in-your-pocket)
> Identifying a rash: just seasonal hives or something more serious?
> Real-time translation of languages, sign language, and books (worked great for Japanese writing → English summary).
> Locating landscapes in photos and estimating their distance using the Google Maps integration.
> Remembering cookbook recipes and recommending wine pairings based on the recipe and budget.
> Summarizing thousands of Amazon/Airbnb reviews in seconds using mobile screen sharing, with highlights of any negative feedback.
Deep Research Agent: Google's new research assistant that creates full reports on any topic and links back to the relevant sources.
Use cases that stood out to me:
> Coming up with interview questions based on what people are curious about across the internet.
> Conducting market research on stocks (e.g., "Why did Google stock go up today?").
> Creating a full Christmas gift plan for my mom (based on current trends and her preferences highlighted in the prompt).
> Creating an analysis and report of my health/fitness and how I can improve based on my Whoop data.