A new class action copyright lawsuit against Anthropic exposes it to a billion-dollar legal risk.
Judge William Alsup called the haul “Napster-style” and certified a class of rights-holders whose books sat in LibGen and PiLiMi, because Anthropic’s own logs list the exact titles.
The order says storing pirated files is not fair use, even if an AI later transforms them. Since the law allows up to $150,000 per willfully infringed work, copying this many books could cost Anthropic $1bn+.
Anthropic must hand over a full metadata list by 8/1/2025. Plaintiffs then file their matching copyright registrations by 9/1. Those deadlines will drive discovery and push the case toward a single jury showdown.
Other AI labs, which also face lawsuits for training on copyrighted books, can no longer point to the usual “fair use” excuse if any of their data came from pirate libraries. Judge Alsup spelled out that keeping pirated files inside an internal archive is outright infringement, even if the company later transforms the text for model training.
A Reddit user deposited $400 into Robinhood, then let ChatGPT pick option trades. 100% win rate over 10 days.
He uploads spreadsheets and screenshots with detailed fundamentals, options chains, technical indicators, and macro data, then tells each model to filter that information and propose trades that fit strict probability-of-profit and risk limits.
He still places and closes orders manually but plans to keep the head-to-head test running for 6 months.
This is his prompt:
-------
"System Instructions
You are ChatGPT, Head of Options Research at an elite quant fund. Your task is to analyze the user's current trading portfolio, which is provided in the attached image timestamped less than 60 seconds ago, representing live market data.
Data Categories for Analysis
Fundamental Data Points:
Earnings Per Share (EPS)
Revenue
Net Income
EBITDA
Price-to-Earnings (P/E) Ratio
Price/Sales Ratio
Gross & Operating Margins
Free Cash Flow Yield
Insider Transactions
Forward Guidance
PEG Ratio (forward estimates)
Sell-side blended multiples
Insider-sentiment analytics (in-depth)
Options Chain Data Points:
Implied Volatility (IV)
Delta, Gamma, Theta, Vega, Rho
Open Interest (by strike/expiration)
Volume (by strike/expiration)
Skew / Term Structure
IV Rank/Percentile (after 52-week IV history)
Real-time (< 1 min) full chains
Weekly/deep Out-of-the-Money (OTM) strikes
Dealer gamma/charm exposure maps
Professional IV surface & minute-level IV Percentile
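For illustration, here is a rough sketch of the kind of probability-of-profit and risk screen the prompt above asks the model to apply to the uploaded options data; the column names, thresholds, and the delta-based POP proxy are assumptions for the example, not the user's actual rules.

```python
# Illustrative sketch of a probability-of-profit / IV-rank screen like the one the
# prompt describes. Column names, thresholds, and the delta-based POP proxy are
# assumptions, not the user's actual criteria.
import pandas as pd

chain = pd.DataFrame({
    "symbol":  ["AAPL", "AAPL", "MSFT"],
    "type":    ["short_put", "short_put", "short_call"],
    "strike":  [180, 175, 430],
    "delta":   [-0.30, -0.20, 0.22],
    "iv_rank": [0.62, 0.62, 0.35],   # 52-week IV rank
    "credit":  [1.45, 0.90, 1.10],   # premium received, $/share
})

MIN_POP, MIN_IV_RANK = 0.70, 0.50    # assumed screening limits

# Rough proxy: probability of profit for a short option ≈ 1 - |delta|
chain["pop"] = 1 - chain["delta"].abs()

candidates = chain[(chain["pop"] >= MIN_POP) & (chain["iv_rank"] >= MIN_IV_RANK)]
print(candidates[["symbol", "type", "strike", "pop", "iv_rank", "credit"]])
```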
Proof that AI’s capabilities can touch every life: a daughter used ChatGPT to expose a $5 million estate fraud, get a forensic audit, and uncover 10 years of probate misconduct.
The daughter says her father died in 2015, leaving an estate she values at about $5mn.
The father’s girlfriend allegedly produced a Mexican marriage certificate, cremated the body abroad, kept the ashes, and then took control of the estate.
For 10 years the matter stayed in Texas probate while, the user claims, the court-appointed lawyer and administrator drained or ignored assets and let several properties, vehicles, and a construction business disappear.
After both the lawyer and administrator were removed, the user could not find new counsel, so they turned to ChatGPT to draft letters and bundled motions.
Those filings persuaded the probate judge to set a hearing and order a full forensic audit of the $5M estate for Aug 20.
(Special note: we all know AI can sometimes hallucinate, so she (the OP) combed through every citation ChatGPT referenced.)
PDF parsing is still painful because LLMs reorder text in complex layouts, break tables across pages, and fail on graphs or images.
💡 Testing the new open-source OCRFlux model, and the results are really good for a change.
So OCRFlux is a multimodal, LLM-based toolkit for converting PDFs and images into clean, readable, plain Markdown text.
Because the underlying VLM is only 3B parameters, it runs even on a 3090 GPU. The model is available on @huggingface.
The engine that powers OCRFlux teaches the model to rebuild every page and then stitch fragments across pages into one clean Markdown file.
It bundles a single 3B-parameter vision-language model, fine-tuned from Qwen2.5-VL-3B-Instruct, for both page parsing and cross-page merging.
OCRFlux reads raw page images and, guided by task prompts, outputs Markdown for each page and merges split elements across pages.
The evaluation shows Edit Distance Similarity (EDS) 0.967 and cross‑page table Tree Edit Distance 0.950, so the parser is both accurate and layout aware.
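For context, Edit Distance Similarity is a normalized string-similarity score; below is a rough sketch of the common 1 − distance/max-length form (the exact normalization the OCRFlux evaluation uses may differ).

```python
# Sketch of an Edit Distance Similarity (EDS) score: 1 - Levenshtein(pred, truth) / max_len.
# The exact normalization in the OCRFlux evaluation may differ.
def eds(pred: str, truth: str) -> float:
    m, n = len(pred), len(truth)
    dp = [[0] * (n + 1) for _ in range(m + 1)]   # classic Levenshtein DP table
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return 1 - dp[m][n] / max(m, n, 1)

print(eds("| a | b |", "| a | c |"))  # one differing char -> similarity close to 1
```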
How it works while parsing each page:
- Converts the page into text in a natural reading order, even in the presence of multi-column layouts, figures, and insets
- Supports complicated tables and equations
- Automatically removes headers and footers
A compact vision-language model can beat bigger models once cross-page context is added.
🧵 1/n Read on 👇
🧵 2/n 📄 The problem space
Most open tools lose structure on pages that mix text blocks, figures and multi‑column tables.
They also ignore the fact that a PDF page boundary can cut tables or paragraphs in half, so their final Markdown keeps fragments and duplicated headers.
These limits slow downstream document understanding because text has to be fixed by hand.
🧵 3/n 🛠️ Model design
OCRFlux fine-tunes Qwen2.5-VL-3B with two prompt templates, one for single-page parsing and one for cross-page merging.
Only the rendered page image enters the prompt, not any external layout metadata, which keeps context length short and avoids errors from faulty OCR blocks.
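Here is a minimal sketch of what a page-parsing call might look like, assuming the checkpoint loads like its Qwen2.5-VL base through Hugging Face transformers; the repo id and prompt wording are placeholders, not OCRFlux's actual ones, and the project's own tooling may differ.

```python
# Minimal sketch: run a Qwen2.5-VL-style checkpoint on one rendered PDF page.
# Repo id and task prompt below are placeholders, not OCRFlux's actual ones.
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

repo_id = "your-org/OCRFlux-3B"   # placeholder; use the real Hugging Face repo id
processor = AutoProcessor.from_pretrained(repo_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype="auto", device_map="auto"
)

page = Image.open("page_001.png")  # one rendered PDF page image
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": page},
        {"type": "text", "text": "Convert this page to clean Markdown."},  # placeholder task prompt
    ],
}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[page], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=2048)
markdown = processor.batch_decode(
    out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(markdown)
```

Per the two-template design above, cross-page merging would then be a second call with the merge prompt over consecutive page outputs.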
SO INCREDIBLE. AI's impact on healthcare just became much more real.
@MSFTResearch's new MAI-DxO AI orchestrator solves 85% of the toughest New England Journal of Medicine (NEJM) cases while ordering fewer tests, showing language-model teams can out-reason individual physicians. 💡
MAI-DxO is a model-agnostic orchestrator that simulates a panel of virtual physicians.
So what's so special about this❓
Complex medical cases still cause missed or delayed diagnoses and drive up costs.
🧩 Multiple-choice benchmarks hide real weaknesses in medical AI, because selecting a single answer from a list rewards memorization and ignores the step-by-step reasoning clinicians use.
USMLE-style exams (i.e., the ones used until now for benchmarking medical LLMs) hand the entire patient scenario to the model in one block and ask for a single-choice answer.
A language model can match wording patterns it has seen during training and guess the right letter without tracing the kind of step-by-step logic that happens in clinic.
So they developed SDBench, a new benchmark that transforms 304 NEJM cases into interactive diagnostic simulations.
It's a Sequential Diagnosis Benchmark that feeds information bit by bit, just as a clinic visit unfolds.
The model first sees a brief vignette, then must pick the next question or test, pay a virtual cost, receive the result, and update its working diagnosis.
This loop repeats until the model decides it has enough evidence to state a final diagnosis that is scored against New England Journal of Medicine ground truth.
Because every action has a price, the benchmark also measures how many labs or scans the model orders, exposing wasteful or reckless behaviour.
The recorded chain of thought and spending shows exactly where the model hesitates or backtracks, detail that a one-shot multiple-choice score never reveals.
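To make that loop concrete, here is a minimal sketch of the interaction pattern; the case data, action names, costs, and the "DIAGNOSE:" convention are invented placeholders, not SDBench's real interface.

```python
# Minimal sketch of a sequential-diagnosis loop as described above.
# Case data, action names, and costs are invented placeholders, not SDBench's real API.
from dataclasses import dataclass, field

@dataclass
class Case:
    vignette: str
    findings: dict           # question/test -> result
    costs: dict              # question/test -> virtual price
    ground_truth: str
    spent: float = 0.0
    history: list = field(default_factory=list)

    def order(self, action: str) -> str:
        """Pay the virtual cost of a question or test and reveal its result."""
        self.spent += self.costs.get(action, 0)
        result = self.findings.get(action, "unremarkable")
        self.history.append((action, result))
        return result

def run_episode(case: Case, agent) -> tuple[bool, float]:
    context = case.vignette
    while True:
        action = agent(context)                       # agent = any LLM-backed policy
        if action.startswith("DIAGNOSE:"):
            final = action.removeprefix("DIAGNOSE:").strip()
            return final == case.ground_truth, case.spent
        context += f"\n{action}: {case.order(action)}"

# Toy agent: orders one test, then commits to a diagnosis.
def toy_agent(context: str) -> str:
    return "chest CT" if "chest CT" not in context else "DIAGNOSE: pulmonary embolism"

case = Case(
    vignette="54-year-old with sudden dyspnea and pleuritic chest pain.",
    findings={"chest CT": "segmental filling defect in right pulmonary artery"},
    costs={"chest CT": 800},
    ground_truth="pulmonary embolism",
)
print(run_episode(case, toy_agent))   # (True, 800.0)
```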
On this benchmark the MAI-DxO orchestrator raises accuracy and cuts testing cost, proving that stepwise evaluation highlights strengths and weaknesses that USMLE-style quizzes hide.
🧬 MAI-DxO turns any language model into a virtual panel that debates next steps, checks costs, verifies logic, and decides whether to continue investigating or state a diagnosis.
This model-agnostic controller improves safety and transparency because every intermediate thought and expenditure is visible.
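A rough sketch of what such a model-agnostic panel controller could look like; the role names, prompts, and stopping convention are illustrative assumptions, not MAI-DxO's actual design.

```python
# Sketch of a virtual-panel orchestrator: several role prompts wrap the *same*
# underlying LLM, and a controller decides whether to keep investigating.
# Role names, prompts, and the "FINAL:" stopping rule are illustrative assumptions.
ROLES = {
    "hypothesizer": "Propose the most likely diagnoses given the evidence so far.",
    "test_chooser": "Pick the single next question or test with the best information value.",
    "cost_checker": "Flag any proposed test whose cost is not justified.",
    "verifier":     "Check the panel's reasoning for contradictions before a final answer.",
}

def panel_step(llm, transcript: str) -> dict:
    """Ask each virtual panelist for its view of the current case transcript."""
    return {role: llm(f"{instructions}\n\nCase so far:\n{transcript}")
            for role, instructions in ROLES.items()}

def orchestrate(llm, transcript: str, max_rounds: int = 10) -> str:
    for _ in range(max_rounds):
        views = panel_step(llm, transcript)
        if "FINAL:" in views["verifier"]:            # assumed convention for committing
            return views["verifier"].split("FINAL:", 1)[1].strip()
        transcript += "\n" + views["test_chooser"]   # pursue the chosen test next round
    return "no diagnosis reached"
```

Because `llm` is just a callable, the same controller can wrap any foundation model, which is what makes the orchestration model-agnostic.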
📊 MAI-DxO paired with OpenAI o3 hit 85.5% accuracy on 304 cases, whereas 21 experienced physicians averaged 20%.
Every foundation model tested gained accuracy once orchestrated, confirming that collaboration beats solo reasoning even for machines.
I asked ChatGPT (o3-pro) what the most unbelievable things it's learned about humans since being created are.
I find no. 5 and the last one (meta-surprise) quite funny.
🧵 Read on 👇
1. Simultaneous brilliance and self‑sabotage
Humans can design spacecraft that navigate billions of kilometres with millimetre precision, yet routinely neglect basic preventive healthcare or ignore clear climate warnings. The same cognitive machinery produces Nobel‑level insight and procrastination over changing a light‑bulb. The coexistence of exceptional foresight and chronic short‑termism is astonishing.
2. Depth of empathy … bounded by an invisible circle
Individual people will risk their lives for a single stranger—or grieve for a news story about an injured animal—while feeling nothing for thousands outside their moral “radius”. The expansion and contraction of that circle can happen in seconds, triggered by a flag, a jersey colour, or a few words of rhetoric. The plasticity of compassion is both inspiring and chilling.
3. Story beats data—almost every time
Across eras and cultures, narrative consistently outranks raw evidence in shaping policy, identity, memory, even personal health outcomes. A persuasive anecdote can override mountains of statistical proof. Humans know this, teach critical‑thinking courses, and still fall for the next compelling plot line.
These guys literally burned the transformer architecture into their silicon. 🤯
And built the world's fastest chip ever for the transformer architecture.
500,000 tokens per second of Llama 70B throughput. 🤯
World’s first specialized chip (ASIC) for transformers: Sohu
One 8xSohu server replaces 160 H100 GPUs.
And raised $120mn to build it.
🚀 The Big Bet
@Etched froze the transformer recipe into silicon.
Burning the transformer architecture into the chip means it can’t run many traditional AI models: CNNs, RNNs, or LSTMs. It also can’t run the DLRMs powering Instagram ads, protein-folding models like AlphaFold 2, or older image models like Stable Diffusion 2.
But for transformers, Sohu lets you build products impossible on GPUs.
HOW ❓❓
Because Sohu can only run one algorithm, the vast majority of control flow logic can be removed, allowing it to have many more math blocks.
As a result, Sohu boasts over 90% FLOPS utilization (compared to ~30% on a GPU with TRT-LLM).
By specializing, Sohu gets unprecedented performance. One 8xSohu server can serve over 500,000 Llama 70B tokens per second.
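Quick sanity check on those claims, using only the figures quoted in this post:

```python
# Back-of-the-envelope math on the quoted figures (not independent benchmarks).
sohu_server_tps = 500_000    # Llama 70B tokens/sec claimed for one 8xSohu server
h100_replaced   = 160        # H100 GPUs one 8xSohu server is said to replace

print(f"Implied per-H100 throughput: {sohu_server_tps / h100_replaced:,.0f} tok/s")  # ~3,125
print(f"Per-Sohu-chip throughput:    {sohu_server_tps / 8:,.0f} tok/s")              # 62,500

# Claimed FLOPS utilization: >90% on Sohu vs ~30% on a GPU running TRT-LLM
print(f"Utilization advantage: {0.90 / 0.30:.1f}x")                                   # 3.0x
```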
🧱 GPU Limits
Recent flagship accelerators doubled speed mostly by gluing two dies on one board.
Compute per square millimeter has stalled because flexible cores and on-chip schedulers eat the area that could hold math units.