Josh Wolfe
May 2, 2022 · 15 tweets · 5 min read
1/ FASCINATING new theory on DREAMING from observing a deep neural net

but 1st––some of the most cutting-edge + interesting research on how 🧠🧠 work is inspired by observation of modern 💻💻 + algorithms

every era has a theory of the brain running parallel to that era's tech
2/ Go back to Descartes, who thought the 🧠 worked like hydraulic pumps ⛽️––the newly available tech of his era
3/ Freud looked to the tech of his time to describe the mechanics of the brain––the steam engine
4/ More recent analogies have been to the brain as a computer––which notably inspired lots of AI research, specifically the early work on neural nets which lost and regained favor over the decades
5/ Then we have had the analogy of the brain as an internet––with islands of functional groups interconnected
6/ All models are wrong––some of them are useful.

Insights from trying to understand our internal human SENSES, PERCEPTION, SPEECH, VISION, HEARING, MEMORY have all led to embodied technologies

which in turn lead to new theories...
7/ We already know we SEE what we BELIEVE

Illusions are excellent at humbling us.
Even if we know they are illusions.
8/ (almost there...stick with me;)
Now our study + design of neural nets is leading to a 'consilience of inductions'––

many different researchers converging on common explanations that point to the same conclusion
9/ Like "memory––prediction" framework and the computational layer between them

that ingests reality, makes models + predictions of patterns it later expects to see, then updates models based on 'reality' (just as robots/machine vision do)
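
A minimal sketch of that loop in Python (hypothetical: the data stream, the running-estimate "model," and the learning rate are illustrative assumptions, not from the thread or the research it points to):

import random

# A toy memory-prediction loop: hold a model of the world (here just a
# running estimate), predict the incoming signal, then update on the error.
estimate = 0.0        # the stored model ("memory")
learning_rate = 0.1   # how strongly prediction errors revise the model

for _ in range(1000):
    observation = random.gauss(5.0, 1.0)          # "reality" arriving as stimulus
    prediction_error = observation - estimate     # surprise vs. what was expected
    estimate += learning_rate * prediction_error  # revise the model toward reality

print(round(estimate, 2))  # settles near the stream's true mean of 5.0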
10/ now––Erik Hoel has a COOL hypothesis

what if the REASON we DREAM was similar to
the REASON programmers add noise to deep neural nets:

to keep training on experience from getting too narrow
+ to generalize, allowing anticipation of weird new stuff––which would be evolutionarily adaptive...
11/ Hoel calls it the Overfitting Brain Hypothesis

the problem of OVERFITTING in machine learning is best visualized as a model hugging its training points so tightly it misses the underlying trend
12/ One way researchers solve the "overfitting" problem for Deep Neural Nets is by introducing "noise injections" in the form of corrupted inputs

Why? So the algorithms don't treat everything as narrowly SPECIFIC and precise––but instead can better GENERALIZE
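
A minimal sketch of noise injection, assuming PyTorch (the toy data, tiny network, and noise scale sigma are illustrative choices, not from the thread or Hoel's paper):

import torch
import torch.nn as nn

# Toy regression task: a network this size could memorize all 256 points
# ("overfit"); corrupting each pass with Gaussian noise pushes it to learn
# the underlying pattern instead.
torch.manual_seed(0)
x = torch.randn(256, 10)
y = x.sum(dim=1, keepdim=True)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

sigma = 0.1  # noise scale: how badly each training input gets corrupted
for epoch in range(200):
    noisy_x = x + sigma * torch.randn_like(x)  # the "noise injection"
    loss = loss_fn(model(noisy_x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Dropout, which randomly zeroes activations during training, plays the same regularizing role inside the network.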
13/ Now IF our brain processes + stores information from the stimuli it receives all day long––and learns from experiences in a narrow way––THEN it too can "overfit" a model

& be less fit to handle wider variations from it
(like the real world)...

So the PROVOCATIVE theory...
14/ ...Is that the evolutionary PURPOSE of DREAMING is to purposely corrupt data (memory or predictions) by injecting noise into the system

And prevent learning from collapsing into rote, routine memorization

Basically––
natural hallucination improves generalization
🤯
15/ Link to full paper PDF here––a quick and VERY provocative read from @erikphoel cell.com/action/showPdf…


More from @wolfejosh

Aug 21
1/ Apple researchers just dropped a NEW foundation model built on BEHAVIORAL data from wearables.

Forget raw sensors—think steps, heart rate variability, gait

Trained on 2.5B hours from 162K people

Predicts age, sex, pregnancy, sleep like a crystal ball.
2/ Their new foundation model (WBM) is trained not on noisy low-level sensor data––but on derived behavioral metrics from wearables

Outperforms raw-sensor models in predicting sleep, injury, & infection.

When combined, they CRUSH it—predicting pregnancy w/ >90% accuracy.
3/ Contrarian take––Transformers may be overkill for everything

head-to-head on wearable data, Mamba-2 beat Transformers for health prediction. Simpler tokenization also won.

Lesson: respect the data's unique physics (irregular, noisy) + ignore the loud signals to hear the whispers––the edge is in the architecture
Aug 21
1/ New Lux quarterly LP letter––Q2 2025

theme: “Friction Frontier”

-Over 50% of small VCs will involuntarily exit as ~5 VCs are planning to voluntarily exit (IPO)

-Lux team has invested over half a billion dollars across 82 companies (new + existing) in the past few months…
2/
-Zuck’s “Poachapalooza 2025”

-L&A (License + Acquihire) is the new M&A

-the AI capex surge (5% of GDP) ~ the fiber-optic + router binge of the 2000 tech boom (5.2%) and a short step below the sheet-rock bonanza of the 2005 housing bubble (6.7%).

-Open source vs Closed…
3/ great irony in the frenzy for ARTIFICIAL intelligence––what’s acquired isn’t hardware but HUMAN intelligence

-frontier of agentic AI not defined by model size––but by what we can verify

-a 99%-reliable agent degrades to 60.5% after 50 sequential decision tasks––each error cascading like cracks in glass
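
That degradation is just compounding error. A quick sanity check in Python, assuming each step succeeds independently at 99%:

# 99% per-step reliability compounds badly over a long chain of decisions
p_step, n_steps = 0.99, 50
print(p_step ** n_steps)  # ≈ 0.605, i.e. ~60.5% odds of finishing all 50 steps cleanly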
Aug 18
1/ Beyond the hype + headlines

some eye-opening AI stats to consider

Today Mag 7 = 35% of US stock mkt value

NVDA = 19% of that––and 42% of NVDA revenue comes from just 5 Big Tech co's buying GPUs:

Meta, Amazon, Microsoft, Google, Tesla
2/ At a recent lunch with the LP who put Lux in business, I talked about the shift from consensus cloud inference (GPUs + datacenters + power) to on-device inference (memory + batteries)...
3/ This valued LP talked about CAPEX:

Meta, AMZN, MSFT, GOOG + TSLA spending $560B by EOY 2025

The total AI revenue from that spend? ~$35B

That's a not-great ~6% return ($35B / $560B ≈ 6.25%)
Jun 7
Apple just GaryMarcus'd LLM reasoning ability
2/ Apple tested today's "reasoning" AIs like Claude + DeepSeek which look smart—but when complexity rises, they collapse.

Not fail gracefully. Collapse completely.
3/ They found LLMs don't scale reasoning like humans do.

They think MORE up to a point…

Then they GIVE UP early, even when they have plenty of compute left.
May 19
1/ Three big Lux things tonight

-@anduriltech founder Palmer Luckey on @60Minutes
“The Future of Warfare”

-@eGenesisBio on @CNN with Sanjay Gupta
“Animal Pharm”

-new Lux Q1 2025 Quarterly Letter
2/
This is one of my favorite letters we’ve ever written

The theme is PARTNERSHIPS

-between man and machine
-between present and future selves, decisions, companies
-between us and the founders of Lux companies
-between Lux partners ourselves
3/ word processor > spellcheck > grammar-check > LLM “style-check” to tell if a paragraph was birthed by a human mind or a statistical echo. We detect plagiarism but also something subtler––call it prompterism: prose coaxed from silicon’s latent space rather than summoned from lived experience.
May 9
BIG NEWS from Lux🚨…

American scientists––and those we attract to America🇺🇸––are what have helped make America not just great but absolutely + relatively EXCEPTIONAL.

We CANNOT cede scientific supremacy to China.
We don’t need talk, we need action…
2/ The 🇨🇳CCP “2035 Science & Technology Vision” states unapologetically that “original innovation is the sharpest blade.” They couple that declaration w/ vast subsidies, talent visas + procurement guarantees––which have helped them take the lead in 37 of 44 critical and emerging technologies!
3/ Upending American science seemed an impossible order.

Regularly copied and always envied, but never rivaled.

Constructed by visionaries like Vannevar Bush in the ashes of WWII + propelled by brilliant legislation like the Bayh-Dole Act––we built the impossible: an open scientific enterprise that could simultaneously probe the furthest frontiers while translating the most original insights uncovered into the most prosperous companies in the world.
