Rohan Paul
💼 Engineer. 🗞️ Subscribe to my free daily newsletter to stay on top of AI 👉 https://t.co/rtRTc3aSIn
Jul 1 9 tweets 5 min read
PDF parsing is still painful because LLMs reorder text in complex layouts, break tables across pages, and fail on graphs or images.

💡Testing the new open-source OCRFlux model, and the results are really good for a change.

So OCRFlux is a multimodal, LLM-based toolkit for converting PDFs and images into clean, readable, plain Markdown text.

Because the underlying VLM is only 3B params, it runs even on a 3090 GPU. The model is available on @huggingface.

The engine that powers OCRFlux teaches the model to rebuild every page and then stitch fragments across pages into one clean Markdown file.

It bundles one vision language model with 3B parameters that was fine-tuned from Qwen 2.5-VL-3B-Instruct for both page parsing and cross-page merging.

OCRFlux reads raw page images and, guided by task prompts, outputs Markdown for each page and merges split elements across pages.

The evaluation shows Edit Distance Similarity (EDS) 0.967 and cross‑page table Tree Edit Distance 0.950, so the parser is both accurate and layout aware.

How it works while parsing each page

- Convert into text with a natural reading order, even in the presence of multi-column layouts, figures, and insets
- Support for complicated tables and equations
- Automatically removes headers and footers

Cross-page table/paragraph merging

- Cross-page table merging
- Cross-page paragraph merging

A compact vision-language model can beat bigger models once cross-page context is added.
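OCRFlux ships its own serving pipeline, but since the checkpoint is a Qwen2.5-VL fine-tune, a rough sketch of page parsing through the generic transformers interface could look like the following. The repo id and prompt wording are illustrative assumptions, not the project's documented API; check the Hugging Face model card for the exact task prompts OCRFlux expects.

```python
# Hedged sketch: load the OCRFlux checkpoint through the generic Qwen2.5-VL
# interface in transformers and ask it to parse one page image into Markdown.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from PIL import Image

model_id = "ChatDOC/OCRFlux-3B"  # assumed Hugging Face repo id -- verify on the model card
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

page = Image.open("page_1.png")  # one rendered PDF page
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this page to clean Markdown in natural reading order."},  # illustrative prompt
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[page], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=2048)
markdown = processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
print(markdown)  # per-page Markdown; cross-page merging is a separate prompt in the real pipeline
```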

🧵 1/n Read on 👇

🧵 2/n 📄 The problem space

Most open tools lose structure on pages that mix text blocks, figures and multi‑column tables.

They also ignore the fact that a PDF page boundary can cut tables or paragraphs in half, so their final Markdown keeps fragments and duplicated headers.

These limits slow downstream document understanding because text has to be fixed by hand.
Jun 28 11 tweets 3 min read
I asked ChatGPT (o3-pro) what the most unbelievable things it has learned about humans since being created are

I find no. 5 and the last one (the meta-surprise) quite funny

🧵 Read on 👇

1. Simultaneous brilliance and self‑sabotage
Humans can design spacecraft that navigate billions of kilometres with millimetre precision, yet routinely neglect basic preventive healthcare or ignore clear climate warnings. The same cognitive machinery produces Nobel‑level insight and procrastination over changing a light‑bulb. The coexistence of exceptional foresight and chronic short‑termism is astonishing.
2. Depth of empathy … bounded by an invisible circle
Individual people will risk their lives for a single stranger—or grieve for a news story about an injured animal—while feeling nothing for thousands outside their moral “radius”. The expansion and contraction of that circle can happen in seconds, triggered by a flag, a jersey colour, or a few words of rhetoric. The plasticity of compassion is both inspiring and chilling.
Jun 27 14 tweets 6 min read
These guys literally burned the transformer architecture into their silicon. 🤯

And built the world's fastest chip ever for the transformer architecture.

500,000 tokens per second with Llama 70B throughput. 🤯

World’s first specialized chip (ASIC) for transformers: Sohu

One 8xSohu server replaces 160 H100 GPUs.

And raised $120mn to build it.

🚀 The Big Bet

@Etched froze the transformer recipe into silicon.

Burning the transformer architecture into the chip means it can't run many traditional AI models, like CNNs, RNNs, or LSTMs. It also can't run the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2.

But for transformers, Sohu lets you build products impossible on GPUs.

HOW ❓❓

Because Sohu can only run one algorithm, the vast majority of control flow logic can be removed, allowing it to have many more math blocks.

As a result, Sohu boasts over 90% FLOPS utilization (compared to ~30% on a GPU with TRT-LLM).
One 8xSohu server replaces 160 H100 GPUs.

By specializing, Sohu gets unprecedented performance. One 8xSohu server can serve over 500,000 Llama 70B tokens per second.
Jun 24 11 tweets 6 min read
🚨BREAKING: A LANDMARK JUDGEMENT FOR THE AI INDUSTRY.

US Federal Judge ruled Anthropic may train its AI on published books without authors’ permission.

This is the first court endorsement of fair use protecting AI firms when they use copyrighted texts to train LLMs.

AI may study what it buys, not what it grabs from pirate sites.

---------

"First, Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic
from this use (Opp. 16). But Authors cannot rightly exclude anyone from using their works for training or learning as such. Everyone reads texts, too, then writes new texts. They may need
to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory,
each time they later draw upon it when writing new things in new ways would be unthinkable.

For centuries, we have read and re-read books. We have admired, memorized, and internalized their sweeping themes, their substantive points, and their stylistic solutions to recurring writing
problems."

The court file is such an interesting read.

🧵 Read on 👇
⚙️ Two distinct uses

The order splits Anthropic’s conduct into two buckets: training copies that feed the model, and library copies parked for any future purpose.

Anthropic said everything was “for training,” yet the court saw a second, non-transformative goal: building a permanent research library.
Jun 23 21 tweets 11 min read
ChatGPT literally saved this guy’s life after he got lost in the woods.

The group got lost for 5 hrs in unmapped woods on an ATV ride, then one guy sent phone GPS coords to ChatGPT every few minutes. ChatGPT replied with clear compass cues, road names, and terrain notes, guiding them back to town unharmed.

From r/ChatGPT/Own_Analyst3795
reddit.com/r/ChatGPT/comm…
Jun 22 11 tweets 4 min read
📢 MAJOR ANTI-AGING/LONGEVITY DISCOVERY

Published in Nature.

A lost exercise hormone, CLCF1, puts old muscles and bones back in business.

Replace missing CLCF1 and the elderly mouse sprints like it is young.

📌 The Core Concepts

Skeletal muscle and bone deteriorate together during aging, partly because old muscle sends out fewer supportive signaling proteins.

The study pinpoints CLCF1, a cytokine usually known for nerve health, as one such messenger whose blood concentration steadily drops from young to old animals and people.

Raising CLCF1, either by exercise or by direct supplementation, reverses muscle weakness and bone loss, showing that a single myokine can coordinate broad musculoskeletal repair.

What CLCF1 actually is

CLCF1 (cardiotrophin-like cytokine factor 1) belongs to the interleukin-6 family of signaling proteins.

It partners with CRLF1 and binds the ciliary neurotrophic factor receptor, triggering downstream STAT pathways in many cell types.
Jun 21 11 tweets 5 min read
Models see the needle yet ignore the hole it left behind.

LLMs spot inserted facts but routinely miss obvious omissions.

Long context models have been getting increasingly good at passing "Needle in a Haystack" tests recently, but what about a problem in the opposite direction?

This paper explores what happens when you give a model some content and then a copy with a portion removed, then ask what changed.

AbsenceBench exposes this blind spot by giving models both the full text and an edited version, then shows that simple placeholder tokens help them notice the gaps.

⚙️ The Core Concepts

AbsenceBench flips the classic Needle-in-a-Haystack test: instead of asking a model to locate an odd insert, it asks for the bits that were deleted. Even top models plunge from near-perfect recall on insertion tests to roughly 40%–70% F1 when asked to list what is gone, with an average 56.9% drop across poetry and code diff tasks.

The first panel compares two tasks. The classic needle test inserts an extra line and asks the model to point it out. AbsenceBench instead shows the untouched poem beside a version with a hidden gap and asks the model to name the missing line.

The question is identical in form, yet the answer differs: in the needle test the model repeats the inserted text, while in AbsenceBench it must recall what was cut even though no token now marks the spot.

The middle bar chart measures how five leading language models handle both tasks. Their scores stay high when looking for an inserted line but fall sharply when asked to list deletions, proving that omissions are much harder to detect.

The panel on the right shows that the benchmark still deals with large contexts; it simply shifts focus from spotting a stray piece of straw to noticing the straw that vanished.
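A minimal sketch of how an AbsenceBench-style probe can be built, assuming line-level deletions and a set-based F1 score; this is not the benchmark's official harness, just an illustration of the protocol described above.

```python
# Sketch: build an "original vs. edited" prompt with known deletions,
# then score the model's claimed deletions against the ground truth.
import random

def make_absence_example(lines, n_deletions=2, seed=0):
    """Return a prompt plus the set of lines that were removed."""
    rng = random.Random(seed)
    removed = sorted(rng.sample(range(len(lines)), n_deletions))
    edited = [line for i, line in enumerate(lines) if i not in removed]
    prompt = (
        "Original document:\n" + "\n".join(lines)
        + "\n\nEdited document:\n" + "\n".join(edited)
        + "\n\nList every line that appears in the original but not in the edited version."
    )
    return prompt, {lines[i] for i in removed}

def set_f1(predicted: set, gold: set) -> float:
    """F1 over the set of deleted lines the model names."""
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```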
Jun 21 9 tweets 3 min read
This GitHub repo is a goldmine.

3.4K stars ⭐️ in 4 days.

End-to-end, code-first tutorials covering every layer of production-grade GenAI agents, guiding you from spark to scale with proven patterns and reusable blueprints for real-world launches.
Jun 17 12 tweets 4 min read
It’s a hefty 206-page research paper, and the findings are concerning.

"LLM users consistently underperformed at neural, linguistic, and behavioral levels"

This study finds LLM dependence weakens the writer’s own neural and linguistic fingerprints. 🤔🤔

Using EEG, text mining, and a cross-over session, the authors show that keeping some AI-free practice time protects memory circuits and encourages richer language even when a tool is later reintroduced.

⚙️ The Experimental Setup

Fifty-four Boston-area students wrote SAT-style essays under three conditions: ChatGPT only, Google only, or brain only.

Each person completed three timed sessions with the same condition, then an optional fourth session in the opposite condition.

A 32-channel Enobio headset recorded brain signals throughout, and every keystroke, prompt, and interview answer was archived for analysis.
Jun 16 11 tweets 4 min read
This is really BAD news for LLMs' coding skill. ☹️

The best Frontier LLM models achieve 0% on hard real-life Programming Contest problems, domains where expert humans still excel.

LiveCodeBench Pro is a benchmark composed of problems from Codeforces, ICPC, and IOI (the International Olympiad in Informatics) that are continuously updated to reduce the likelihood of data contamination.

📌 The Gap Targeted

Earlier reports claimed frontier LLMs now top human grandmasters, but a cost-versus-rating plot proves otherwise.

Even the best model, o4-mini-high, sits near 2,100 Elo once tool calls are blocked, far from the 2,700 legend line that marks real grandmasters.
Jun 15 21 tweets 8 min read
Large Language Model agents are vulnerable to prompt injection attacks that hijack tool use and leak data.
The paper proposes six design patterns that restrict where untrusted text can act, giving resistance without crippling usefulness.

⚙️ The Core Concepts

Prompt injection slips malicious text into an agent’s context and rewrites its plan.

Filters, adversarial training, and user approval are brittle because clever wording can still bypass them.

The authors instead isolate untrusted data with structured workflows that block it from gaining control.

🛡️ Action-Selector Pattern

The agent picks one permitted action from a fixed list and never processes tool output.

Because no feedback loop exists, injected text cannot trigger unexpected calls.

Use cases are simple routers such as customer-service macros or database shortcuts.
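A minimal sketch of the action-selector idea, with illustrative action names (not from the paper): the model only picks from a fixed allow-list, and tool output never flows back into its context, so injected text has nothing to hijack.

```python
# Sketch of the action-selector pattern: one permitted action from a fixed
# list, no feedback loop from tool output to the model.
ALLOWED_ACTIONS = {
    "reset_password": lambda user: f"Password reset link sent to {user}",
    "order_status":   lambda user: f"Looked up latest order for {user}",
    "human_handoff":  lambda user: f"Escalated {user} to a human agent",
}

def run_agent(llm, user_message: str, user_id: str) -> str:
    # The model sees only the user message and the allow-list of action names.
    choice = llm(
        f"Pick exactly one action from {sorted(ALLOWED_ACTIONS)} for this request, "
        f"and reply with the action name only:\n{user_message}"
    ).strip()
    if choice not in ALLOWED_ACTIONS:        # refuse anything off-list
        return "Sorry, I can't help with that."
    return ALLOWED_ACTIONS[choice](user_id)  # result goes to the user, never back into the LLM
```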
Jun 13 11 tweets 6 min read
Anthropic just dropped a beautiful explanation of how they built a multi-agent research system using multiple Claude AI agents.

A MUST read for anyone building multi-agent systems.

A lead agent plans research steps, spawns specialized subagents to search in parallel, and then gathers and cites results. It covers architecture, prompt design, tool selection, evaluation methods, and production challenges to make AI research reliable and efficient.

Single-agent research assistants stall when queries branch into many directions. Anthropic links one lead Claude with parallel subagents to chase each thread at once, then fuses their findings.

⚙️ The Core Concepts

Research questions rarely follow a straight path, so a fixed pipeline leaves gaps. One lead agent plans the investigation, spawns subagents that roam in parallel, and later condenses their notes into a coherent answer.

🧠 Why Multi-Agent Architecture Helps

Each subagent brings its own context window, so the system can pour in many more tokens than a single model would hold. Anthropic measured that token volume alone explained 80% of success on BrowseComp, and adding subagents pushed performance 90.2% past a lone Claude Opus 4 on internal tasks.

Running agents in parallel also cuts wall-clock time because searches, tool calls, and reasoning steps happen side by side rather than one after another.

@AnthropicAI

🛠️ Architecture Walkthrough

The orchestrator-worker pattern gives the lead agent control while letting specialists act independently. A user query lands with the lead Researcher, which thinks aloud, stores the plan in memory, and distributes focused jobs like "list company directors" or "trace chip shortages".

Subagents call web search or workspace tools, judge results with interleaved thinking, and return concise digests. A citation agent then pins every claim to a source before the answer reaches the user.
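This is not Anthropic's code, but a rough sketch of the orchestrator-worker shape described above: a lead agent drafts sub-questions, subagents run in parallel against a search tool, and a final pass fuses the digests. `call_claude` and `web_search` are placeholder callables.

```python
# Sketch of an orchestrator-worker research loop: plan, fan out, fuse.
from concurrent.futures import ThreadPoolExecutor

def research(query: str, call_claude, web_search, n_subagents: int = 3) -> str:
    # Lead agent breaks the question into focused sub-questions.
    plan = call_claude(
        f"Break this research question into {n_subagents} focused sub-questions, one per line:\n{query}"
    )
    subtasks = [s for s in plan.splitlines() if s.strip()][:n_subagents]

    def run_subagent(task: str) -> str:
        evidence = web_search(task)  # tool call made by the subagent
        return call_claude(f"Summarize findings for '{task}' with source URLs:\n{evidence}")

    # Subagents run side by side, each with its own context.
    with ThreadPoolExecutor(max_workers=n_subagents) as pool:
        digests = list(pool.map(run_subagent, subtasks))

    # Final pass merges the digests into one cited answer.
    return call_claude(f"Fuse these digests into one cited answer for: {query}\n\n" + "\n\n".join(digests))
```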
Jun 13 10 tweets 6 min read
AI Agents vs. Agentic AI

→ AI Agents react to prompts; Agentic AI initiates and coordinates tasks.

→ Agentic AI includes orchestrators and meta-agents to assign and oversee sub-agents.

🧵1/n

🧠 The Core Concepts

AI Agents and Agentic AI are often confused as interchangeable, but they represent different stages of autonomy and architectural complexity.

AI Agents are single-entity systems driven by large language models (LLMs). They are designed for task-specific execution: retrieving data, calling APIs, automating customer support, filtering emails, or summarizing documents. These agents use tools and perform reasoning through prompt chaining, but operate in isolation and react only when prompted.

Agentic AI refers to systems composed of multiple interacting agents, each responsible for a sub-task. These systems include orchestration, memory sharing, role assignments, and coordination.

Instead of one model handling everything, there are planners, retrievers, and evaluators communicating to achieve a shared goal. They exhibit persistent memory, adaptive planning, and multi-agent collaboration.

🏗️ Architectural Breakdown

AI Agents: Structured as a single model using LLMs. Equipped with external tools. Operates through a cycle of perception, reasoning, and action. Executes one task at a time with limited context continuity.

Agentic AI: Uses multiple LLM-driven agents. Supports task decomposition, role-based orchestration, and contextual memory sharing. Agents communicate via queues or buffers and learn from feedback across sessions.

🔧 How AI Agents Work

An AI Agent typically receives a user prompt, chooses the correct tool (e.g., search engine, database query), gets results, and then generates an output. It loops this with internal reasoning until the task is completed. Frameworks like LangChain and AutoGPT are built on this structure.
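As a rough illustration of that loop (not LangChain's or AutoGPT's actual API), a single agent can be sketched as a perceive-reason-act cycle over a tool dictionary; the tool names and the TOOL/FINAL convention here are assumptions for the example.

```python
# Illustrative single-agent loop: choose a tool, observe, repeat until done.
def run_ai_agent(llm, tools: dict, task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}"
    for _ in range(max_steps):
        decision = llm(
            f"{history}\nChoose a tool from {list(tools)} as 'TOOL: <name> <input>' "
            "or answer directly as 'FINAL: <answer>'."
        )
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:").strip()
        parts = decision.split(maxsplit=2)              # e.g. "TOOL: search cheapest flight"
        name = parts[1] if len(parts) > 1 else ""
        arg = parts[2] if len(parts) > 2 else ""
        observation = tools.get(name, lambda x: "unknown tool")(arg)
        history += f"\nObservation: {observation}"      # feed the result back and loop
    return "Stopped after max steps."
```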

🤖 What Agentic AI Adds

Agentic AI introduces:

- Goal decomposition: breaking tasks into subtasks handled by specialized agents.

- Orchestration: a meta-agent (like a CEO) delegates and integrates.

- Memory systems: episodic, semantic, or vector-based for long-term context.

- Dynamic adaptation: agents can replan or reassign tasks based on outcomes.

Examples include CrewAI or AutoGen pipelines, where agents draft research papers or coordinate robots.

🧵2/n

🔄 Mechanisms of Autonomy

A single AI Agent begins work when a user or scheduler fires a prompt, selects one tool at a time, and stops when the task flag is cleared.

Agentic AI starts from a high-level objective, decomposes it through a planner agent, routes subtasks to specialist agents, and keeps cycling until success criteria are met.

Shared memory lets each agent read what others learned, while structured messages prevent conflicts and allow recovery when one path stalls.
Jun 12 5 tweets 4 min read
A follow-up study on Apple's "Illusion of Thinking" Paper is published now.

Shows the same models succeed once the format lets them give compressed answers, proving the earlier collapse was a measurement artifact.

Token limits, not logic, froze the models.

Collapse vanished once the puzzles fit the context window.

So models failed the rubric, not the reasoning.

⚙️ The Core Concepts

Large Reasoning Models add chain-of-thought tokens and self-checks on top of standard language models. The Illusion of Thinking paper pushed them through four controlled puzzles, steadily raising complexity to track how accuracy and token use scale. The authors saw accuracy plunge to zero and reasoned that thinking itself had hit a hard limit.

📊 Puzzle-Driven Evaluation

Tower of Hanoi forced models to print every move; River Crossing demanded safe boat trips under strict capacity. Because a solution for forty-plus moves already eats thousands of tokens, the move-by-move format made token budgets explode long before reasoning broke.

🔎 Why Collapse Appeared

The comment paper pinpoints three test artifacts: token budgets were exceeded, evaluation scripts flagged deliberate truncation as failure, and some River Crossing instances were mathematically unsolvable yet still graded. Together these artifacts masqueraded as cognitive limits.

✅ Fixing the Test

When researchers asked the same models to output a compact Lua function that generates the Hanoi solution, models solved fifteen-disk cases in under five thousand tokens with high accuracy, overturning the zero-score narrative.

Comment on The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

arxiv.org/abs/2506.09250
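For illustration, the compact solution generator the comment paper describes can be written in a few lines; the paper's example was a Lua function, and this Python equivalent is only a sketch of the same idea.

```python
# A compact generator for the full Tower of Hanoi move list, instead of
# forcing the model to print every move token by token.
def hanoi(n, src="A", aux="B", dst="C"):
    """Return the complete move list for n disks."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, dst, aux)   # move n-1 disks out of the way
            + [(src, dst)]                # move the largest disk
            + hanoi(n - 1, aux, src, dst))  # move the n-1 disks on top

print(len(hanoi(15)))  # 32767 moves for the fifteen-disk case
```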
May 31 34 tweets 11 min read
A huge 340-page report on AI trends, released by @bondcap

Some wild findings from this report.

🧵1/n

🧵2/n

Meta’s Llama Downloads Exploded 3.4× in Eight Months.

An unprecedented developer adoption curve for any open-source LLM.

bondcap.com/reports/tai
May 9 11 tweets 6 min read
🚨 BREAKING: The first-ever agentic browser is here — and it's shockingly good.

Just tried @FellouAI, an AI browser that doesn’t just assist you with browsing, it does the browsing for you.

It's like Chrome but with a brain—AI agents handle deep research and workflows solo.

Handles several projects in parallel.

A top-tier AI intern — takes care of all the dirty and tedious work, so you don’t have to, and it’s 100% free

1️⃣Fellou’s not just another browser—it's an Agentic assistant that acts for you.

2️⃣It handles real tasks autonomously: research, cross-platform flows, and full automation.

3️⃣ Past browsing. Into real action.

Fellou can automatically plan tasks, invoke tools, and execute actions to coordinate operations across multiple web interfaces, enabling various in-browser tasks. These include shopping, scheduling meetings, sending emails, and posting tweets based on webpage content.

It’s the first Agentic Browser — with deep research, tab-level collaboration, and seamless automation.

Deep Search acts like a smart intern: spins up five shadow browsers, digs across web and private platforms, and compiles richer insights fast. Highlights gaps and surfaces info you missed. Runs in parallel, won’t slow anything down.

Automated workflows: Replaces manual clicking with invisible ops across pages. Reduces drag, frees up hours.
Automation-aware browsing: Ask the page questions, reuse content in your drafts.

🧵 1/n

🧵 2/n
Start browsing smarter now, totally free

fellou.ai

It doesn’t just display pages — it acts.

No more passive browsing. Just smart action

Act on private sites: Top security and stability with your own login, device, and no password leaks.

Virtual workspace for Agent: Executing tasks in a shadow window, without disrupting your workflow.

Generate the report you need: Easily create and edit reports through simple, intuitive interactions.
May 7 4 tweets 2 min read
Wow.. Now you can transcribe 60 minutes of audio in just 1 second with a completely open-sourced model 🤯

@nvidia just open-sourced Parakeet TDT 0.6B V2, a 600M parameter automatic speech recognition (ASR) model that tops the @huggingface Open-ASR leaderboard with RTFx 3380

It's open-sourced under CC-BY-4.0, ready for commercial use.

⚙️ The Details

→ Built on FastConformer encoder + TDT decoder, the model handles up to 24-minute audio chunks with full attention and outputs with punctuation, capitalization, and accurate word/char/segment timestamps.

→ It achieves RTFx 3380 at batch size 128 on the Open ASR leaderboard, but performance varies with audio duration and batch size.

→ Trained using 150K steps on 128 A100 GPUs, then fine-tuned on 500 hours of high-quality human-transcribed English data.

→ Total training data spans 120K hours, combining human-labeled and pseudo-labeled sources, including LibriSpeech, Fisher, YTC, YODAS, and more.

→ Available via NVIDIA NeMo, optimized for GPU inference, and installable via pip install -U nemo_toolkit['asr'].

→ Compatible with Linux, runs on Ampere, Blackwell, Hopper, Volta GPU architectures, requiring minimum 2GB RAM.

→ Granary dataset used for training will be made public post Interspeech 2025.

How to Use this Model:

To train, fine-tune, or play with the model you will need to install NVIDIA NeMo. It’s recommended that you install it after you’ve installed the latest PyTorch version.
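A minimal usage sketch following the standard NeMo ASR flow; the checkpoint name matches the model card, but verify the exact call signatures against your installed NeMo version.

```python
# pip install -U "nemo_toolkit[asr]"
import nemo.collections.asr as nemo_asr

# Load the pretrained checkpoint by name (as listed on the model card).
asr_model = nemo_asr.models.ASRModel.from_pretrained(model_name="nvidia/parakeet-tdt-0.6b-v2")

# Transcribe a 16 kHz mono WAV file (path is illustrative).
transcripts = asr_model.transcribe(["some_audio.wav"])
print(transcripts[0])  # recent NeMo returns hypothesis objects; older versions return plain strings
```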
Mar 10 13 tweets 4 min read
Finally got access to @ManusAI_HQ and calling it a "Deepseek moment" is incorrect.

It’s far more powerful. This is the world’s top AI-driven computer.

Think Deep Research + Claude + OpenAI Operator… all on steroids.

Within the next 1 year

12 wild examples 🧵1/n

🧵2/n

Tesla FSD gets you there, Manus AI makes sure you have something to say.

Feb 20 5 tweets 5 min read
DeepSeek R1 was just the start—this new Chinese research from @Kimi_Moonshot lets RAG AI agents devour entire codebases and documentation with no context limits.

Mixture of Experts and Sparse attention make near-infinite context possible.

🧵1/n

📌 Challenge of Long-Context Attention

Transformers still face heavy computational loads when sequences become extremely large. The default attention pattern compares every token with every other token, creating costs that scale quadratically. This overhead becomes problematic when reading entire codebases, multi-chapter documents, or large legal texts.

📌Mixture of Block Attention (MoBA)

MoBA applies Mixture of Experts ideas to attention. The model divides input sequences into blocks, then a trainable gating function computes an affinity score between each query token and each block. Only the highest-scoring blocks get used in the attention, which removes the need to attend to every token in the full sequence.

Blocks are defined by segmenting the sequence into equal spans. Each query looks at a pooled representation of the keys in each block (for example, by mean-pooling), ranks their importance, and picks a few blocks for detailed attention. The block that contains the query is always included. A causal mask ensures tokens never see future information, preserving left-to-right generation.

📌Seamless Switch between Sparse and Full Attention

MoBA replaces normal attention without changing parameter counts. It remains compatible with standard Transformer interfaces, so it can switch between sparse and full attention in different layers or during different training phases. Some layers might keep full attention for specialized tasks (like supervised fine-tuning) while most layers use MoBA to cut costs.

📌 This fits into a larger Transformer stack by replacing standard attention calls. The gating ensures each query focuses on a manageable subset of blocks. Causality is handled by filtering out blocks in the future and by applying local masks within the query’s current block.

📌 The below figure shows queries being routed to only a few “expert” blocks of keys/values instead of the entire sequence. The gating mechanism assigns each query to the most relevant blocks, which cuts attention computations from quadratic to sub-quadratic.

📌 The gating mechanism computes a relevance score between each query and a condensed representation of each block. It then picks the top‑k blocks for every query, regardless of how far away those blocks are in the sequence.

Because each query only processes a few blocks, the computation remains sub‑quadratic, yet the model can still jump to distant tokens if the gating scores indicate high relevance.

🧵2/n

A PyTorch implementation below

This pseudocode splits the keys and values into blocks, computes a mean-pooled representation of each block, and calculates gating scores (S) by multiplying Q with that pooled representation.

📌 It then applies a causal mask so queries cannot attend to future blocks, uses a top‑k operator to pick the most relevant blocks for each query, and organizes the data for efficient attention computation.

📌FlashAttention is applied separately to the self-attention block (current positions) and the MoBA-selected blocks, and the outputs are finally merged using an online softmax.

📌The result is a sparse attention mechanism that preserves causal structure and captures long-range dependencies without incurring the full quadratic cost of standard attention.

This code combines mixture-of-experts logic with sparse attention so each query only attends to a few blocks.

The gating mechanism scores each block against the query and selects the top‑k “experts,” reducing the number of key/value comparisons.

This keeps attention overhead sub‑quadratic, making it feasible to handle extremely long inputs without blowing up in compute or memory.

At the same time, the gating ensures queries can still attend to distant tokens when necessary, preserving the Transformer’s capacity for global context.

This block‑and‑gating strategy is how MoBA achieves near‑infinite context in LLMs.
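Since the referenced implementation lives in the attached figure, here is a minimal PyTorch sketch of just the gating step described above (block mean-pooling, gating scores, causal masking, top-k selection); shapes and names are illustrative, and the FlashAttention merge over the selected blocks is omitted.

```python
# Sketch of MoBA-style block selection: each query scores mean-pooled key
# blocks, masks out future blocks, and keeps the top-k for attention.
import torch
import torch.nn.functional as F

def moba_block_select(q, k, block_size=128, top_k=3):
    """q, k: [seq_len, dim]. Returns per-query block ids plus a validity mask."""
    seq_len, dim = q.shape
    n_blocks = (seq_len + block_size - 1) // block_size
    pad = n_blocks * block_size - seq_len
    k_pad = F.pad(k, (0, 0, 0, pad))                                 # pad so blocks divide evenly
    block_repr = k_pad.view(n_blocks, block_size, dim).mean(dim=1)   # mean-pooled keys per block

    scores = q @ block_repr.T                                        # [seq_len, n_blocks] gating scores
    q_block = torch.arange(seq_len, device=q.device) // block_size
    block_idx = torch.arange(n_blocks, device=q.device)
    causal = block_idx[None, :] <= q_block[:, None]                  # own block kept, future blocks dropped
    scores = scores.masked_fill(~causal, float("-inf"))

    k_eff = min(top_k, n_blocks)
    selected = scores.topk(k_eff, dim=-1).indices                    # top-k blocks per query
    valid = torch.gather(causal, 1, selected)                        # early queries allow fewer blocks
    return selected, valid  # feed these block ids into a blockwise (Flash)attention kernel
```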
Feb 20 8 tweets 4 min read
NVIDIA + Arc Institute's new model Evo 2 just demonstrated that deep learning can directly model biological function

It stands as a breakthrough in computational biology.

🧵 1/n

Evo 2 just redefined genomic modeling by processing over 9 trillion nucleotides to seamlessly connect molecular detail with genome-scale structure.

What’s more, the entire model, training code, inference code, and curated OpenGenome2 dataset are released under open terms to accelerate progress in AI-driven genomics.

--------

Genome engineering efforts need a general-purpose model that can capture molecular, cellular, and organism-level features from DNA alone. This project addresses that gap by creating Evo 2, a foundation model trained on over 9 trillion DNA bases, covering bacteria, archaea, eukaryotes, and phage.

Its capacity for a 1-million token context window ensures that both local motifs and long-range dependencies are captured in a single pass. This design allows Evo 2 to model everything from single-nucleotide mutations to whole-genome architecture without task-specific tuning.

It learns diverse genetic patterns without labels or alignments, working at scales from small coding regions to entire genomes.

--------

What's the key benefit of it for us?

It means that Evo 2 automatically detects key genetic signals and accurately predicts how various mutations impact molecular and organismal function.

The model's breakthroughs can lead to better disease diagnosis, more effective treatments, and improved agricultural or environmental solutions.

🧵 2/n

📌 Model Architecture and Training Pipeline

StripedHyena 2 forms the core of Evo 2. It is a multi-hybrid convolutional architecture, mixing short, medium, and long input-dependent convolution layers with attention blocks.

This design handles sequences of up to 1 million tokens.

Training proceeded in two stages: a pretraining phase (8,192-token context) followed by midtraining that progressively extended context length (up to 1M tokens).

Data weighting placed extra emphasis on functionally dense regions (genic windows) before switching to full-genome segments.
Jan 26 4 tweets 2 min read
DeepSeek interesting prompt... From Reddit: reddit.com/r/ChatGPT/comm…