NVIDIA just removed one of the biggest friction points in Voice AI.
PersonaPlex-7B is a full-duplex conversational model.
Free and open source (MIT), with model weights on @huggingface 🤗
Links to repo and weights in 🧵↓
The traditional ASR → LLM → TTS pipeline forces rigid turn-taking.
It’s efficient, but it never feels natural.
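To make the bottleneck concrete, here's a toy sketch of the cascade (illustrative placeholder functions, not a real library): each stage blocks on the previous one, so the agent can't start speaking, or keep listening, until the whole chain finishes.

```python
# Toy sketch of the ASR -> LLM -> TTS cascade. All functions are
# hypothetical stand-ins; the point is the strict sequential hand-off.

def transcribe(audio: bytes) -> str:      # ASR stage
    return "user said hi"

def generate(text: str) -> str:           # LLM stage
    return "hello!"

def synthesize(text: str) -> bytes:       # TTS stage
    return b"waveform"

def turn(user_audio: bytes) -> bytes:
    # Rigid turn-taking: audio out only after the full reply is ready,
    # and nothing is heard while synthesis plays back.
    transcript = transcribe(user_audio)
    reply = generate(transcript)
    return synthesize(reply)
```

Every interruption or back-channel ("mm-hm") has to wait for a full trip through all three stages, which is why cascades feel laggy.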
PersonaPlex-7B changes that.
This @nvidia model can listen and speak at the same time.
It runs directly on continuous audio tokens with a dual-stream transformer, generating text and audio in parallel instead of passing control between components.
That unlocks:
→ instant back-channel responses
→ interruptions that feel human
→ real conversational rhythm
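A rough mental model of full-duplex decoding, sketched in Python (this is NOT the PersonaPlex API, just a minimal simulation of the idea): at every timestep the model ingests an incoming audio token and emits outgoing text and audio tokens, so listening and speaking overlap instead of alternating.

```python
# Minimal sketch of dual-stream, full-duplex decoding.
# Token values and the step function are hypothetical placeholders.

def full_duplex_step(incoming_audio_token: str, state: list) -> tuple:
    """One decoder step: consume user audio, emit agent (text, audio)."""
    state.append(incoming_audio_token)    # listen stream keeps advancing
    t = len(state)
    text_token = f"t{t}"                  # stand-in for the text head
    audio_token = f"a{t}"                 # stand-in for the audio head
    return text_token, audio_token

state = []
user_audio = ["u1", "u2", "u3"]           # continuous input stream
agent_out = [full_duplex_step(tok, state) for tok in user_audio]
# the agent produced output at every step -- no turn boundary anywhere
```

Because both streams advance every step, the model can back-channel or get interrupted mid-utterance without waiting for a turn to end.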
Persona control is fully zero-shot!
If you’re building low-latency assistants or support agents, this is a big step forward 🔥
MIT and Oxford released their $2,500 agentic AI curriculum at no cost.
15,000 people already paid for it.
Now it's free on GitHub.
It covers patterns, orchestration, memory, coordination, and deployment.
A strong roadmap to production-ready systems.
Repo in 🧵 ↓
10 parts:
Part 1. What agents are and how they differ from plain generative AI.
Part 2. The four agent types and when to use each.
Part 3. How tools work and how to build them.
Part 4. RAG vs agentic RAG and key patterns.
Part 5. What MCP is and why it matters.
Part 6. How agents plan with reasoning models.
Part 7. Memory systems and architecture choices.
Part 8. Multi-agent coordination and scaling.
Part 9. Real-world production case studies.
Part 10. Industry trends and what is coming next.