Among all the cool things #ChatGPT can do, it's remarkably good at handling and manipulating data in bulk, which can make many manual data wrangling, scraping, and lookup tasks obsolete.
Let me show you a few cool tricks. No coding skills required!
(A thread) 👇🧵
Let's start easy by heading to chat.openai.com/chat and pasting a list of 60 countries in the text field
Let's ask #ChatGPT to give us the main language, latitude, longitude, and country code for each of these countries
That was easy enough, right?
Now let's add more data to our output by asking #ChatGPT to provide the population of each of these countries
Uber cool! 😎
Let's ask ChatGPT to wrap these results in a table
Let's conclude this thread by asking #ChatGPT to create a @streamlit app with a CSV uploader and filter boxes to filter `longitude`, `latitude`, and `country code`.
Not only does #ChatGPT display the code, but it also provides clear explanations for each step! 👏
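For context, here's a rough sketch of the filtering logic such a generated app would need. It's plain Python (no Streamlit) so it runs anywhere; the column names come from the thread, but the sample coordinates and function name are illustrative, not ChatGPT's actual output. In a real Streamlit app, widgets such as `st.file_uploader` and `st.slider` would supply the CSV text and the range arguments.

```python
import csv
import io

def filter_countries(csv_text, lat_range, lon_range, codes=None):
    """Filter rows of a country CSV by latitude/longitude ranges
    and an optional set of country codes."""
    rows = csv.DictReader(io.StringIO(csv_text))
    out = []
    for row in rows:
        lat = float(row["latitude"])
        lon = float(row["longitude"])
        if not (lat_range[0] <= lat <= lat_range[1]):
            continue
        if not (lon_range[0] <= lon <= lon_range[1]):
            continue
        if codes is not None and row["country code"] not in codes:
            continue
        out.append(row)
    return out

# Toy data with approximate country centroids
sample = """country,latitude,longitude,country code
France,46.2,2.2,FR
Japan,36.2,138.3,JP
Brazil,-14.2,-51.9,BR
"""

# Keep only northern-hemisphere countries east of the prime meridian
print([r["country"] for r in filter_countries(sample, (0, 90), (0, 180))])
```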
This is just a quick overview of what you can do with #ChatGPT.
I'm only scratching the surface here.
For more cool things you can do with it, check out my other thread
1. Follow me @DataChaz to read more content like this.
2. Share it with an RT, so others can read it too! 🙌
Note that while #AI can handle tasks like sourcing and sorting data, and even some aspects of app development, it is not yet reliable enough to replace human verification.
Even with its impressive capabilities, AI still requires human oversight.
NVIDIA just removed one of the biggest friction points in Voice AI.
PersonaPlex-7B is an open-source, full-duplex conversational model.
Free, open source (MIT), with open model weights on @huggingface 🤗
Links to repo and weights in 🧵↓
The traditional ASR → LLM → TTS pipeline forces rigid turn-taking.
It’s efficient, but it never feels natural.
PersonaPlex-7B changes that.
This @nvidia model can listen and speak at the same time.
It runs directly on continuous audio tokens with a dual-stream transformer, generating text and audio in parallel instead of passing control between components.
That unlocks:
→ instant back-channel responses
→ interruptions that feel human
→ real conversational rhythm
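The difference between the two designs can be sketched with a toy simulation. This is not NVIDIA's code and says nothing about the model's internals; the function names and token streams are invented purely to illustrate why rigid turn-taking blocks back-channels while a dual-stream design allows them to overlap.

```python
import itertools

def turn_based(user_turns):
    """Traditional ASR -> LLM -> TTS: the agent only responds
    after the user's turn has fully ended."""
    log = []
    for turn in user_turns:
        log.extend(f"user:{w}" for w in turn)   # listen to the whole turn
        log.append("agent:reply")               # only then speak
    return log

def full_duplex(user_stream, agent_stream):
    """Full-duplex: user audio and agent tokens advance in the
    same time steps, so back-channels can overlap speech."""
    log = []
    for u, a in itertools.zip_longest(user_stream, agent_stream, fillvalue=None):
        if u is not None:
            log.append(f"user:{u}")
        if a is not None:
            log.append(f"agent:{a}")  # e.g. "mm-hm" while the user talks
    return log

print(turn_based([["hi", "there"]]))
print(full_duplex(["so", "I", "was", "thinking"], [None, "mm-hm", None, "go on"]))
```

In the turn-based log, the agent's reply always comes after the user's last word; in the full-duplex log, "mm-hm" lands mid-sentence, which is the conversational rhythm the thread describes.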
Persona control is fully zero-shot!
If you’re building low-latency assistants or support agents, this is a big step forward 🔥
MIT and Oxford released their $2,500 agentic AI curriculum at no cost.
15,000 people already paid for it.
Now it's on GitHub!
It covers patterns, orchestration, memory, coordination, and deployment.
A strong roadmap to production-ready systems.
Repo in 🧵 ↓
The 10 parts:
Part 1. What agents are and how they differ from plain generative AI.
Part 2. The four agent types and when to use each.
Part 3. How tools work and how to build them.
Part 4. RAG vs agentic RAG and key patterns.
Part 5. What MCP is and why it matters.
Part 6. How agents plan with reasoning models.
Part 7. Memory systems and architecture choices.
Part 8. Multi-agent coordination and scaling.
Part 9. Real world production case studies.
Part 10. Industry trends and what is coming next.