Tetsuo
Oct 11 · 6 tweets · 4 min read
In this small thread, I'll break down how you can create full-length movies or anime with Grok 4 Imagine.

This entire video was created with Grok 4 Imagine and a video editor.
1/n



The first thing you'll want to do is come up with a prompt for your 'characters' from a storyboard you have created.

I built a free-to-use app that uses AI to take a generic prompt or image and provide an optimized prompt that will work well in Grok Imagine: grokprompt.fun
2/n
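
Side note: if you want to keep your storyboard organized before pasting anything into Grok Imagine, a small script can template per-scene prompts. This is only an illustrative sketch; the fields and wording are assumptions, not something grokprompt.fun requires.

```python
# Illustrative only: template per-scene prompts from a simple storyboard.
# The fields and phrasing are assumptions, not a grokprompt.fun requirement.
from dataclasses import dataclass

@dataclass
class Scene:
    character: str   # keep the character description identical across scenes for consistency
    setting: str
    action: str
    camera: str

def build_prompt(scene: Scene) -> str:
    return (
        f"{scene.character}, {scene.action}, in {scene.setting}, "
        f"{scene.camera}, anime style, consistent character design"
    )

storyboard = [
    Scene("a silver-haired ronin in a red cloak", "a rain-soaked neon alley",
          "drawing his blade", "low-angle tracking shot"),
    Scene("a silver-haired ronin in a red cloak", "a moonlit rooftop",
          "leaping between buildings", "wide establishing shot"),
]

for i, scene in enumerate(storyboard, start=1):
    print(f"Scene {i}: {build_prompt(scene)}")
```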

Once you have your prompt, you'll want to put this black image into Grok Imagine and use the prompt in the custom section. This will create your first scene with an animation.
3/n
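
If you don't have the black starter image saved, you can generate one yourself. A minimal sketch assuming Pillow and a 16:9 frame; the resolution and filename are assumptions, so match whatever Grok Imagine accepts.

```python
# Generate a solid black 16:9 starter frame to upload into Grok Imagine.
# Resolution and filename are assumptions; adjust to whatever the tool accepts.
from PIL import Image

WIDTH, HEIGHT = 1280, 720  # assumed 16:9 size

black = Image.new("RGB", (WIDTH, HEIGHT), color=(0, 0, 0))
black.save("black_start_frame.png")
print("Wrote black_start_frame.png")
```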

Now, the next thing you'll want to do is head over to egod.dev. This is a webpage that @nfkmobile built. You can use it to get the last frame from your video. The image it provides is what you will want to use in Grok Imagine for the next scene of your video.
4/n
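
If you'd rather grab the last frame locally instead of using the webpage above, here's a rough sketch with OpenCV. The filenames are placeholders, and the sequential fallback covers clips whose frame-count metadata is unreliable.

```python
# Extract the last frame of a Grok Imagine clip so it can seed the next scene.
# Requires: pip install opencv-python. Filenames here are placeholders.
import cv2

def last_frame(video_path: str, out_path: str) -> None:
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open {video_path}")

    # Try jumping straight to the final frame via the reported frame count.
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frame = None
    if total > 0:
        cap.set(cv2.CAP_PROP_POS_FRAMES, total - 1)
        ok, frame = cap.read()
        if not ok:
            frame = None

    # Fallback: read sequentially and keep the last frame that decodes.
    if frame is None:
        cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
        while True:
            ok, f = cap.read()
            if not ok:
                break
            frame = f

    cap.release()
    if frame is None:
        raise RuntimeError("No frames decoded")
    cv2.imwrite(out_path, frame)

last_frame("scene_01.mp4", "scene_01_last_frame.png")
```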

Finally, you'll want to get a video editor like CapCut, which is very friendly to people with no editing skills. You can use this to edit your video, and there's also royalty-free music in the app that you can use with your art on X.
5/n
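
CapCut is the easy route, but if you're comfortable with the command line, the scene clips can also be stitched with ffmpeg's concat demuxer. This sketch assumes ffmpeg is installed and that every clip shares the same codec, resolution, and frame rate; the filenames are placeholders.

```python
# Stitch the per-scene clips into one film with ffmpeg's concat demuxer.
# Assumes ffmpeg is on PATH and all clips share codec/resolution/frame rate.
import subprocess

clips = ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]  # placeholder names

with open("clips.txt", "w") as f:
    for clip in clips:
        f.write(f"file '{clip}'\n")

subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", "clips.txt", "-c", "copy", "full_movie.mp4"],
    check=True,
)
```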

You can continue the previous steps and add as many scenes as you require for your film!

Download and use this clip on the eGOD webpage for practice.

Just Grok It 💫

More from @tetsuoai

Mar 2
Adobe Fuck you, you snakes. Who charges people for canceling a subscription?
@Adobe Do better.
Going to get my money's worth. 🤬
Feb 9
🧵0/3 Here's why we are building the AgenC open source AI agent framework entirely in C and how it will revolutionize edge computing and embedded AI.

👇This thread is worth reading.
🧵1/3 Market Impact & Adoption Potential

Shift in Edge Computing and IoT AI: An open-source C AI agent framework will be a game-changer for edge AI deployment. By enabling sophisticated AI models to run on inexpensive, low-power hardware, it will allow AI processing to be pushed out closer to sensors and end-users. This reduces reliance on cloud computation, lowers latency, and improves privacy (since raw data need not leave the device). Industries are already keen on on-device AI – the Edge AI market is booming, projected to grow to $270+ billion by 2032. A lightweight, efficient framework is exactly what's needed to unlock AI use-cases in this space, from smart home appliances to industrial IoT sensors. For example, imagine intelligent monitoring on a microcontroller that can detect anomalies in machinery in real-time, or tiny medical wearables that run neural networks locally. Today, these are often implemented with highly optimized C/C++ inferencing libraries (like TensorFlow Lite Micro, or vendor-specific libraries) because Python frameworks are too heavy. A dedicated C agent framework, especially since it's open-source, will become the standard for these edge scenarios. Analysts predict TinyML (tiny machine learning on microdevices) will explode in the coming years – device installs are expected to rise to over 11 billion by 2027. The AgenC framework will be poised to ride that wave, enabling AI on billions of devices that were previously too resource-constrained for anything beyond trivial logic.

Open-Source Innovation & Industry Collaboration: By being open-source, the AgenC C-based AI framework would benefit from collective innovation. Many organizations in performance-critical industries (automotive, robotics, aerospace, healthcare devices, etc.) have specialized needs that aren't fully met by one-size-fits-all frameworks. With an open project, they could contribute code for optimizations, new hardware backends, or domain-specific features. This collaborative development can dramatically accelerate the project's evolution. History shows that open-source projects often innovate faster and dominate their domains – Linux, for instance, became the ubiquitous OS through community contributions. In the AI domain, the open-source ethos is already seen as crucial for progress. Most people in the tech community believe that OSS fosters a collaborative environment and accelerates AI innovation. By lowering the barrier for anyone (companies, academics, hobbyists) to inspect and improve the code, the framework will quickly gain powerful features and optimizations that a single team can not develop alone. Open availability will also democratize AI deployment know-how. Small startups or research labs can use the framework to run state-of-the-art agents on cheap hardware, driving further creative applications. Essentially, an open-source C AI framework could become a community-driven standard for embedded AI, much like how OpenCV became a standard library for computer vision in C/C++. This broad participation would not only improve the framework rapidly but also increase trust and adoption in enterprise settings (since many eyes have vetted the code, and no single vendor "owns" it).

Advancing AI in Embedded & Constrained Environments: Perhaps the most exciting potential impact is how the framework could expand the frontiers of where AI can be deployed. Today's cutting-edge AI models mostly live in the cloud or on powerful edge devices (like GPUs in cars or phones). A robust C framework will bring advanced AI to far more constrained settings. Think microcontrollers running reinforcement learning for adaptive control, or tiny drones with onboard neural navigation. We're already seeing hints of this – researchers managed to deploy a deep reinforcement learning policy on a microcontroller-powered nano-drone by writing a custom C inference library, something that general frameworks couldn't handle. With a dedicated framework making this easier, we could see a new class of "smart" embedded agents. This could transform products and industries: smart sensors that don't just report data but analyze it on-site, medical implants that adjust therapy in real-time via AI, or spacecraft and autonomous robots that need ultra-reliable, real-time onboard decision making without bulky runtime environments. By optimizing for minimal memory and maximal efficiency, the C framework would empower developers to squeeze AI into devices and scenarios that were previously off-limits. And because it's open-source, educational institutions and hobbyists could also experiment freely, accelerating the spread of AI into every corner of the physical world.
🧵2/3 Technical Advantages of a C-Based AI Framework.

High Performance & Low-Level Efficiency: A framework written in C can achieve significantly faster execution and lower latency than Python-based frameworks. Compiled C/C++ code produces compact machine instructions with minimal overhead, whereas Python incurs runtime interpretation, GIL locking, and garbage collection costs. Studies on microcontroller workloads show that C/C++ implementations run many times faster than MicroPython (Python). In short, C lets you utilize CPU/GPU hardware more directly and efficiently, without the layers of indirection that Python frameworks rely on.

Real-Time Processing & Low Latency: For robotics, embedded control, and other real-time applications, C offers more predictable and deterministic timing. High-frequency control loops (e.g. 100 Hz or above) and latency-critical tasks can be met reliably with C/C++, whereas Python's interpreter and global lock can introduce jitter or delays. In autonomous vehicles and drones, for example, developers often favor C/C++ over Python specifically to meet strict latency and scheduling requirements. One research team found that TensorFlow Lite and other Python-oriented inference libraries had "too much overhead to run reliably" on a microcontroller-based robot; by switching to a custom lightweight C library, they achieved stable 100Hz inference performance for their AI policy.

Portability to Diverse Hardware (Edge & Microcontrollers): C is famously portable – it's been called a language "universally understood by almost every computer and microcontroller". An AI framework in pure C could be compiled for a vast range of architectures, from x86 servers down to tiny 8/16/32-bit microcontrollers, with minimal modifications. Python-centric frameworks require a POSIX-like OS and substantial resources, making them impractical on constrained devices. By using C, the framework could run bare-metal or on a simple RTOS, bringing AI capabilities to devices that can't run a Python interpreter. This approach aligns with the TinyML movement: there are an estimated 250+ billion microcontrollers in use (growing by ~30B per year), and on-device ML (TinyML) is emerging as the way to make these ubiquitous chips intelligent. A C-based solution can directly leverage this hardware ubiquity.

Security & Minimal Attack Surface: A framework written in C with minimal dependencies can be easier to secure. Without needing a large runtime (like a Python VM) or numerous external libraries, the overall codebase and attack surface can be kept small. Fewer software layers mean fewer potential vulnerabilities and points of entry for attackers. Using lean binaries or containers with only what's necessary shrinks the number of vulnerabilities… introduced through dependencies and considerably lowers the attack surface. In a C framework, there is no need to ship a full interpreter or manage Python package dependencies (which have been a source of supply-chain attacks in the past), reducing risk. While one must still practice secure coding (C has its own memory safety challenges), a purpose-built C agent framework can be audited and sandboxed more tightly than a complex web of Python modules, leading to security advantages in critical deployments.
Feb 4
🧵1/19 DeepSeek lied about its costs.

DeepSeek R1's training did not actually cost only $6 million; that figure was an incomplete and selective representation of the costs. They also have access to around 50,000 Hopper GPUs.

Here's proof.👇
🧵2/19 Officially, the company reported $5.5M in GPU compute for a single training run, but it left out the massive investments in hardware and development that were required to make that run successful.
🧵3/19 Analyses have since revealed that DeepSeek spent on the order of $1–2 billion in total to build R1. The Chinese startup claimed R1 was built in two months for under $6 million in training expenses (using Nvidia's H800 chips).
Jan 30
🧵0/9 Stanford Intro Reinforcement Learning, from generalization to advanced deep RL. Full videos, assignments, and a final project.

Prereqs:
- Programming knowledge.
- Derivatives & matrix vector operations.
- Probability and Stats.
- Foundations of ML.

Links in comments👇
🧵1/9
Full playlist: youtube.com/playlist?list=…

CS234: web.stanford.edu/class/cs234/in…

👇Projects in 2/9 ... 9/9
🧵2/9 Using Transfer Learning Between Games to Improve Deep Reinforcement Learning Performance.

web.stanford.edu/class/cs234/pa…
Jan 27
🚨 Important Notice for Crypto/AI Investors: 🚨

If you see projects claiming "DeepSeek AI Integration," there are serious red flags to consider.

Technical Reality:

DeepSeek-R1 lacks fundamental features needed for real applications.

- No function calling capability
- Can't interface with external systems
- Limited multi-turn interactions
- No API integration support

Integration Impossibility:

- Without function calling and API support, meaningful integration into other projects is impossible
- Even GPT-3.5 from 2022 has more practical utility for real applications

Market Context:

Projects claiming DeepSeek integration are likely:

- Misrepresenting technical capabilities
- Lacking technical understanding
- Using buzzwords without substance

Bottom Line:

DeepSeek-R1 is a research model showing interesting results on benchmarks, but it cannot be meaningfully "integrated" into applications due to fundamental technical limitations.

Any project claiming otherwise raises serious concerns about their technical credibility and transparency.

Remember:

Real AI integration requires specific technical capabilities that DeepSeek-R1 explicitly acknowledges it doesn't have!
Using R1's reasoning outputs as prompts = potentially viable. Claiming full "DeepSeek AI integration" = still misleading if the project needs actual function calling or API integration.
Additional Agent-Related Limitations

- Poor multi-turn interaction (required for agent dialogue)
- Limited JSON output (key for structured agent communications)
- Issues with complex role-playing (important for agent personas)

Real-World Impact:

- Can't integrate with popular agent frameworks
- Limited use in automation systems
- No ability to chain tools or APIs
- Restricted to pure language tasks
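
For readers unfamiliar with the term, "function calling" means the model can emit structured tool calls against a schema that the host application then executes. Here's a minimal sketch of the OpenAI-style tools format that agent frameworks typically expect; the get_weather function and its fields are invented purely for illustration.

```python
import json

# Illustrative OpenAI-style "tools" schema used for function calling.
# get_weather and its parameters are invented for this example.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

print(json.dumps(tools, indent=2))

# A model with function-calling support replies with a structured call such as:
#   {"tool_calls": [{"function": {"name": "get_weather",
#                                 "arguments": "{\"city\": \"Tokyo\"}"}}]}
# The host application runs the function and feeds the result back into the chat.
# Without this capability, an "integration" can only pass plain text back and forth.
```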
Oct 5, 2024
🧵1/3 Operating Systems: Three Easy Pieces is one of the best ways to learn OS concepts.

The book, along with online resources, homework, projects, and video lectures, is completely free.

I've also dropped some playlists to follow the book.

👇Book and resources in comments.
🧵2/3 Free online book, in chapter-by-chapter form: pages.cs.wisc.edu/~remzi/OSTEP/

Errata and Book News: pages.cs.wisc.edu/~remzi/OSTEP/c…

👇Lecture videos, homework, projects.
🧵3/3 Resources

Chapter problem sets: pages.cs.wisc.edu/~remzi/OSTEP/H…

OSTEP homework: github.com/remzi-arpacidu…

OSTEP Projects: github.com/remzi-arpacidu…

Video Lectures:
- Operating Systems UPLB: youtube.com/playlist?list=…

- CS 162 youtube.com/playlist?list=…

- OS: youtube.com/playlist?list=…
