What's the most important thing that has ever happened?

A thread about transitions.
Suppose you're telling the history of the entire universe, from the Big Bang until today, and you can choose to highlight only ONE event. One major before/after moment. What would it be?
The founder of Big History, @davidgchristian, identifies 7 major transitions, each giving rise to new forms of complexity:

1. Birth of stars.
2. Birth of heavier elements.
3. Planets.
4. Life.
5. Humans.
6. Agriculture.
7. Industry.

But which of these is the MOST important?
I'm sure there are many perspectives one could reasonably take here. But for me, the answer is clear.

The most interesting, most important thing in the history of the universe is life.
I like @ESYudkowsky's take on this: Before life was the Age of Boredom. Space dust, stars, planets, solar systems: it's just a bunch of blind forces grinding out the same patterns over and over and over.

lesswrong.com/posts/spKYZgoh… ← very good post
But once you get chemicals arranged in self-replicating bundles, you've entered an entirely new regime: evolution by natural selection. Suddenly there's a process of cumulative change. Suddenly there's LEARNING.
This isn't geocentrism. Any life, anywhere in the universe, gets us out of the Age of Boredom. It's just, as far as we know (for now), life happened to arise in only one place, here on Earth.
So, here's our narrative:

First, the Big Bang. Then 10 billion years of boredom (give or take). Then life! An explosion of learning! And 4 billion years later, here we are.
Now, what if we want to refine this picture? What if we allowed ourselves TWO events in our history of the universe? What's the second most important thing?
I know, there are many perspectives. But again, I think it's clear: humans are the next most important thing. And once again it comes down to learning.

Before humans, all cumulative learning took place in genomes. But humans introduced an entirely new modality: culture.
Yes, brains can learn, and brains existed before humans. But brains die with their hosts, and all their hard-won knowledge has to be re-learned from scratch by the next generation.
Some animals pass knowledge from parents to children. But the carrying capacity of this "culture" is small and fixed. Humans were the first species with the ability to ACCUMULATE knowledge in culture, in an open-ended way.
So, again, our story:

Big Bang. 10 billion years of boredom. Then life! And 4 billion years of natural selection. Then brains+culture! And a ~million years of cultural evolution later, with all of our farms and factories and computing devices, here we are.
If this view of history has merit, it suggests that the next truly transformative step will involve a new modality for cumulative learning.

Is this AI? Machine learning? The answer is "definitely maybe." But we're certainly not there yet.
AI systems of today are like the brains of early animals. They're capable of some cool feats of learning, but there's nothing cumulative about them. They're built, they learn a few things, and then they dead-end. Where feedback loops exist, they all pass through human culture.
The big thing to watch for, then, is AI systems capable of self-sustaining knowledge accumulation. An open-ended machine culture. It's a ways off — but it's coming, and it's very important.
