Rohan Paul
Jun 17
It’s a hefty 206-page research paper, and the findings are concerning.

"LLM users consistently underperformed at neural, linguistic, and behavioral levels"

This study finds LLM dependence weakens the writer’s own neural and linguistic fingerprints. 🤔🤔

Using EEG, text mining, and a cross-over session, the authors show that keeping some AI-free practice time protects memory circuits and encourages richer language even when a tool is later reintroduced.
⚙️ The Experimental Setup

Fifty-four Boston-area students wrote SAT-style essays under three conditions: ChatGPT only, Google only, or brain only.

Each person completed three timed sessions with the same condition, then an optional fourth session in the opposite condition.

A 32-channel Enobio headset recorded brain signals throughout, and every keystroke, prompt, and interview answer was archived for analysis.
🧠 Brain Connectivity Results

Alpha and beta networks were strongest when no external tool was allowed, moderate with Google, and weakest with ChatGPT.

Lower coupling during LLM use signals reduced internal attention and memory rehearsal, while high parieto-frontal flow in the brain-only group matches deep semantic processing.
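
For a feel of what this coupling means in practice, here is a toy sketch of one simple proxy: band-limited coherence between two channels. The study's own connectivity pipeline is more involved, and the sampling rate and placeholder data below are assumptions.

```python
# Toy proxy for band-limited connectivity: magnitude-squared coherence
# between two EEG channels. Sampling rate and random data are assumptions.
import numpy as np
from scipy.signal import coherence

fs = 500                       # Hz, assumed sampling rate
x = np.random.randn(60 * fs)   # 60 s of channel 1 (placeholder data)
y = np.random.randn(60 * fs)   # 60 s of channel 2 (placeholder data)

f, Cxy = coherence(x, y, fs=fs, nperseg=2 * fs)

alpha = Cxy[(f >= 8) & (f <= 12)].mean()    # alpha band: 8-12 Hz
beta = Cxy[(f >= 13) & (f <= 30)].mean()    # beta band: 13-30 Hz
print(f"alpha coherence {alpha:.3f}, beta coherence {beta:.3f}")
```
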
📚 Linguistic Patterns

Essays produced with ChatGPT clustered tightly in embedding space and reused the same named entities, showing high textual homogeneity.

Google essays sat in the middle, influenced by search rankings, whereas brain-only essays scattered widely, reflecting individual experience and vocabulary.
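
One way to approximate this homogeneity check is mean pairwise cosine similarity of essay embeddings per group: the higher the mean, the tighter the cluster. A minimal sketch, with an assumed embedding model rather than the paper's exact pipeline:

```python
# Sketch: higher mean pairwise cosine similarity = more homogeneous essays.
# The embedding model and libraries are assumptions, not the paper's pipeline.
from itertools import combinations

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def group_homogeneity(essays: list[str]) -> float:
    emb = model.encode(essays)
    sims = [cosine_similarity([a], [b])[0, 0] for a, b in combinations(emb, 2)]
    return sum(sims) / len(sims)

# Placeholder essay texts, one list per condition.
chatgpt_essays = ["essay text 1", "essay text 2", "essay text 3"]
brain_only_essays = ["essay text 4", "essay text 5", "essay text 6"]
print(group_homogeneity(chatgpt_essays), group_homogeneity(brain_only_essays))
```
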
📝 Memory and Ownership

After writing, only 17% of ChatGPT users could quote their own sentences, versus 89% in the brain-only group.

ChatGPT writers also reported the weakest sense of authorship, matching EEG evidence of reduced self-monitoring hubs.
🔄 Crossover Effects

When habitual ChatGPT users had to write unaided, their connectivity and quoting remained low, suggesting lingering cognitive debt.

In contrast, brain-only writers who switched to ChatGPT lit up wide networks and produced richer revisions, showing that tool use after deep practice boosts, rather than blunts, engagement.
⚖️ Cognitive Load Implications

LLMs cut extraneous load by 32% and extend productive time, yet they also trim germane load, so schema building suffers unless learners deliberately integrate ideas themselves.
🔍 Echo-Chamber Risk

Because a probabilistic model favors agreeable continuations, ChatGPT can tighten information loops more than a search page, shrinking exposure to contrasting facts and dulling critical thought.

[Figure: Hooking sentence options]
[Figure: Percentage of participants within each group who struggled to quote anything from their essays]
[Figure: Percentage of participants within each group who provided a correct quote from their essays]
[Figure: Relative reported percentage of perceived ownership of essay, compared to the Brain-only group]

• • •


More from @rohanpaul_ai

Jul 6
such a beautiful story, going viral on r/ChatGPT.

proof that AI’s capabilities can touch every life.

A daughter used ChatGPT to expose a $5 million estate fraud, get a forensic audit ordered, and uncover 10 years of probate misconduct.

The daughter says their father died in 2015, leaving an estate they value at about $5M.

The father’s girlfriend allegedly produced a Mexican marriage certificate, cremated the body abroad, kept the ashes, and then took control of the estate.

For 10 years the matter stayed in Texas probate while, the user claims, the court-appointed lawyer and administrator drained or ignored assets and let several properties, vehicles, and a construction business disappear.

After both the lawyer and administrator were removed, the user could not find new counsel, so they turned to ChatGPT to draft letters and bundled motions.

Those filings persuaded the probate judge to set a hearing and order a full forensic audit of the $5M estate for Aug 20.

(Special note: we all know AI can sometimes hallucinate, so she (the OP) combed through every citation ChatGPT referenced.)
Jul 1
PDF parsing is still painful because LLMs reorder text in complex layouts, break tables across pages, and fail on graphs or images.

💡Testing the new open-source OCRFlux model, and the results are really good for a change.

So OCRFlux is a multimodal, LLM-based toolkit for converting PDFs and images into clean, readable, plain Markdown text.

Because the underlying VLM has only 3B parameters, it runs even on a 3090 GPU. The model is available on @huggingface.

The engine that powers OCRFlux teaches the model to rebuild every page and then stitch fragments across pages into one clean Markdown file.

It bundles a single 3B-parameter vision-language model, fine-tuned from Qwen2.5-VL-3B-Instruct, for both page parsing and cross-page merging.

OCRFlux reads raw page images and, guided by task prompts, outputs Markdown for each page and merges split elements across pages.

The evaluation shows Edit Distance Similarity (EDS) 0.967 and cross‑page table Tree Edit Distance 0.950, so the parser is both accurate and layout aware.
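
For intuition, EDS is commonly defined as 1 minus the Levenshtein distance normalized by the longer string. A minimal sketch assuming that common definition (the paper's exact normalization may differ):

```python
# Edit Distance Similarity, assuming the common definition
# EDS = 1 - levenshtein(pred, gold) / max(len(pred), len(gold)).
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def eds(pred: str, gold: str) -> float:
    if not pred and not gold:
        return 1.0
    return 1.0 - levenshtein(pred, gold) / max(len(pred), len(gold))

print(eds("| a | b |", "| a | c |"))  # one-char slip in a 9-char row: ~0.889
```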

How it works while parsing each page:

- Converts the page into text with a natural reading order, even in the presence of multi-column layouts, figures, and insets
- Supports complicated tables and equations
- Automatically removes headers and footers

Cross-page table/paragraph merging

- Cross-page table merging
- Cross-page paragraph merging

A compact vision-language model can beat bigger models once cross-page context is added.

🧵 1/n Read on 👇
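
If you want to try it, here is a minimal sketch of running the model through Hugging Face transformers. The repo id and the prompt wording are assumptions; the project ships its own inference pipeline with task-specific prompts for parsing and merging.

```python
# Hedged sketch: single-page parsing with OCRFlux via transformers.
# The repo id and prompt text are assumptions, not the official API.
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "ChatDOC/OCRFlux-3B"  # assumed Hugging Face repo id
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto")

page = Image.open("page_1.png")  # one rendered PDF page
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Convert this page to clean Markdown."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[page],
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=2048)
print(processor.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```
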
🧵 2/n 📄 The problem space

Most open tools lose structure on pages that mix text blocks, figures and multi‑column tables.

They also ignore the fact that a PDF page boundary can cut tables or paragraphs in half, so their final Markdown keeps fragments and duplicated headers.

These limits slow downstream document understanding because text has to be fixed by hand.
🧵 3/n 🛠️ Model design

OCRFlux fine-tunes Qwen2.5-VL-3B with two prompt templates, one for single-page parsing and one for cross-page merging.

Only the rendered page image enters the prompt, not any external layout metadata, which keeps context length short and avoids errors from faulty OCR blocks.
Jun 30
SO INCREDIBLE. AI's impact on healthcare just became much more real.

@MSFTResearch's new MAI-DxO AI orchestrator solves 85% of the toughest New England Journal of Medicine (NEJM) cases while ordering fewer tests, showing language-model teams can out-reason individual physicians. 💡

MAI-DxO is a model-agnostic orchestrator that simulates a panel of virtual physicians.

So what's so special about this❓

Complex medical cases still cause missed or delayed diagnoses and drive up costs.

🧩 Multiple-choice benchmarks hide real weaknesses in medical AI, because selecting a single answer from a list rewards memorization and ignores the step-by-step reasoning clinicians use.

USMLE-style exams (i.e. the ones used until now for benchmarking medical LLMs) hand the entire patient scenario to the model in one block and ask for a single-choice answer.

A language model can match wording patterns it has seen during training and guess the right letter without tracing the kind of step-by-step logic that happens in clinic.

So they developed SDBench, a new benchmark that transforms 304 NEJM cases into interactive diagnostic simulations.

It's a Sequential Diagnosis Benchmark that feeds information bit by bit, just as a clinic visit unfolds.

The model first sees a brief vignette, then must pick the next question or test, pay a virtual cost, receive the result, and update its working diagnosis.

This loop repeats until the model decides it has enough evidence to state a final diagnosis that is scored against New England Journal of Medicine ground truth.

Because every action has a price, the benchmark also measures how many labs or scans the model orders, exposing wasteful or reckless behaviour.

The recorded chain of thought and spending shows exactly where the model hesitates or backtracks, a level of detail that a one-shot multiple-choice score never reveals.

On this benchmark the MAI-DxO orchestrator raises accuracy and cuts testing cost, proving that stepwise evaluation highlights strengths and weaknesses that USMLE-style quizzes hide.
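
The protocol is easy to picture in code. A hypothetical sketch of one case loop; the agent and gatekeeper interfaces, costs, and stopping rule below are illustrative, not the benchmark's actual implementation:

```python
# Hypothetical sketch of a sequential-diagnosis loop in SDBench's style.
# `agent` is the model under test; `gatekeeper` holds the NEJM case ground
# truth, answers queries, and prices each test. All names are made up.
def run_case(agent, gatekeeper, vignette: str, budget: float) -> dict:
    history, spent = [("vignette", vignette)], 0.0
    while True:
        action = agent.next_action(history)       # ask, order test, or diagnose
        if action.kind == "diagnose":
            correct = gatekeeper.score(action.diagnosis)
            return {"correct": correct, "cost": spent, "steps": len(history)}
        result, cost = gatekeeper.answer(action)  # finding plus virtual price
        spent += cost                             # every action has a price
        if spent > budget:                        # exposes wasteful ordering
            return {"correct": False, "cost": spent, "steps": len(history)}
        history.append((action.kind, result))
```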
🧬 MAI-DxO turns any language model into a virtual panel that debates next steps, checks costs, verifies logic, and decides whether to continue investigating or state a diagnosis.

This model-agnostic controller improves safety and transparency because every intermediate thought and expenditure is visible.
📊 Pairing MAI-DxO with OpenAI o3 hit 85.5% accuracy on 304 cases, whereas 21 experienced physicians averaged 20%.

Every foundation model tested gained accuracy once orchestrated, confirming that collaboration beats solo reasoning even for machines.
Jun 28
I asked ChatGPT (o3-pro) for the most unbelievable things it has learned about humans since being created.

I find no. 5 and the last one (the meta-surprise) quite funny

🧵 Read on 👇

1. Simultaneous brilliance and self‑sabotage
Humans can design spacecraft that navigate billions of kilometres with millimetre precision, yet routinely neglect basic preventive healthcare or ignore clear climate warnings. The same cognitive machinery produces Nobel‑level insight and procrastination over changing a light‑bulb. The coexistence of exceptional foresight and chronic short‑termism is astonishing.
2. Depth of empathy … bounded by an invisible circle
Individual people will risk their lives for a single stranger—or grieve for a news story about an injured animal—while feeling nothing for thousands outside their moral “radius”. The expansion and contraction of that circle can happen in seconds, triggered by a flag, a jersey colour, or a few words of rhetoric. The plasticity of compassion is both inspiring and chilling.
3. Story beats data—almost every time
Across eras and cultures, narrative consistently outranks raw evidence in shaping policy, identity, memory, even personal health outcomes. A persuasive anecdote can override mountains of statistical proof. Humans know this, teach critical‑thinking courses, and still fall for the next compelling plot line.
Jun 27
These guys literally burned the transformer architecture into their silicon. 🤯

And built the world's fastest chip for the transformer architecture.

500,000 tokens per second of Llama 70B throughput. 🤯

World’s first specialized chip (ASIC) for transformers: Sohu

One 8xSohu server replaces 160 H100 GPUs.

And raised $120mn to build it.

🚀 The Big Bet

@Etched froze the transformer recipe into silicon.

Burning the transformer architecture into the chip means it can't run many traditional AI models: CNNs, RNNs, or LSTMs. It also can't run the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2.

But for transformers, Sohu lets you build products impossible on GPUs.

HOW ❓❓

Because Sohu can only run one algorithm, the vast majority of control flow logic can be removed, allowing it to have many more math blocks.

As a result, Sohu boasts over 90% FLOPS utilization (compared to ~30% on a GPU with TRT-LLM).
By specializing, Sohu gets unprecedented performance. One 8xSohu server can serve over 500,000 Llama 70B tokens per second.
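
Taking the vendor's claims at face value, the arithmetic is easy to check:

```python
# Back-of-envelope check of Etched's own numbers, taken at face value.
sohu_server_tok_s = 500_000   # claimed Llama 70B tokens/s per 8xSohu server
h100_equivalent = 160         # H100s one 8xSohu server is said to replace

print(sohu_server_tok_s / 8)                # ~62,500 tok/s per Sohu chip
print(h100_equivalent / 8)                  # 1 Sohu ~ 20 H100s on this workload
print(sohu_server_tok_s / h100_equivalent)  # ~3,125 tok/s per replaced H100
```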
🧱 GPU Limits

Recent flagship accelerators doubled speed mostly by gluing two dies on one board.

Compute per square millimeter has stalled because flexible cores and on-chip schedulers eat the area that could hold math units.
Jun 24
🚨BREAKING: A LANDMARK JUDGEMENT FOR THE AI INDUSTRY.

US Federal Judge ruled Anthropic may train its AI on published books without authors’ permission.

This is the first court endorsement of fair use protecting AI firms when they use copyrighted texts to train LLMs.

AI may study what it buys, not what it grabs from pirate sites.

---------

"First, Authors argue that using works to train Claude’s underlying LLMs was like using works to train any person to read and write, so Authors should be able to exclude Anthropic
from this use (Opp. 16). But Authors cannot rightly exclude anyone from using their works for training or learning as such. Everyone reads texts, too, then writes new texts. They may need
to pay for getting their hands on a text in the first instance. But to make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory,
each time they later draw upon it when writing new things in new ways would be unthinkable.

For centuries, we have read and re-read books. We have admired, memorized, and internalized their sweeping themes, their substantive points, and their stylistic solutions to recurring writing
problems."

The court file is such an interesting read.

🧵 Read on 👇
⚙️ Two distinct uses

The order splits Anthropic’s conduct into two buckets: training copies that feed the model, and library copies parked for any future purpose.

Anthropic said everything was “for training,” yet the court saw a second, non-transformative goal: building a permanent research library.
🤖 Training wins fair-use protection

Using complete books to map token relationships is “spectacularly transformative.” No verbatim outputs reach users, and the system’s purpose—generating fresh text—is orthogonal to selling the originals.

That satisfies factor 1 and, with no market substitution, factor 4 as well.
