Jason Crawford
Sep 25, 2020 · 7 tweets · 3 min read
Industrial civilization needs an owner's manual.
People are suggesting books on economics and philosophy, which is good. But I literally meant the basics. Where does energy come from, and why? How do we make steel? Cement? Textiles? What is needed to grow enough food to feed the planet? How do vaccines work? Computers? Etc…
People who can't answer these questions even at a very basic level still express very strong opinions on things like nuclear power, gas cars, plastic bags, etc.

We need *industrial literacy.* And people should start to feel at least *slightly* embarrassed if they don't have it.
Some books that have been suggested in the replies:

The Knowledge: How to Rebuild Civilization in the Aftermath of a Cataclysm, by @lewis_dartnell

amazon.com/Knowledge-Rebu…
How Innovation Works: And Why It Flourishes in Freedom, by @mattwridley

amazon.com/How-Innovation…
How to Invent Everything: A Survival Guide for the Stranded Time Traveler, by @ryanqnorth

amazon.com/How-Invent-Eve…
Infrastructure: A Guide to the Industrial Landscape, by @bit_player

amazon.com/Infrastructure…

More from @jasoncrawford

Feb 2
Academia cares whether an idea is new. It doesn't really have to work.

Industry only cares whether an idea works. It doesn't matter if it's new.

This creates a gap. Actually, a few gaps:
1. It creates a culture gap

Academics look at industry people trying to get an idea to work, and complain, “they aren't doing anything new!”

2. It creates a gap in the path from idea to reality, aka the Valley of Death

Academics are done once a concept is demonstrated. Industry doesn't want to fund an idea before it is working/viable.

In between is the idea that is no longer new but does not yet work.
Dec 18, 2023
If “low-hanging fruit” or “ideas getting harder to find” were the main factor in the rate of technological progress, then the fastest progress would have been in the Stone Age.

Ideas were *very easy to find* in the Stone Age! There was *so much* low-hanging fruit!
Instead, the pattern we see is the opposite: progress accelerates over time. (Note that the chart in the original tweet is *already on a log scale*.)

Clearly, there is some positive factor that more than makes up for ideas getting harder to find / low-hanging fruit getting picked.
“Ideas getting harder to find” is ambiguous, so let me clarify.

In the econ literature it refers to a specific phenomenon, which is that it takes exponentially increasing R&D investment to sustain exponential growth. This is basically all the low-hanging fruit getting picked.
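To make that econ-literature claim concrete, here is a minimal toy sketch in Python. It is my own illustration in the rough spirit of semi-endogenous growth models, not anything from the thread or a specific paper; the function name, parameter values, and the exponent `beta` are all assumptions made up for the example. The point it shows: if research productivity falls as the stock of ideas grows, then holding the growth rate constant requires exponentially increasing research effort.

```python
# Toy sketch (assumed model, not from the thread): ideas grow as
#   dA/A = productivity * S / A**beta
# so holding growth fixed forces research effort S to rise exponentially
# as the idea stock A grows (i.e., as the low-hanging fruit gets picked).

def researchers_needed(years=100, growth=0.02, beta=0.5, productivity=1.0):
    A = 1.0          # stock of ideas / state of technology
    effort = []
    for _ in range(years):
        S = growth * A**beta / productivity  # effort needed to hit `growth` this year
        effort.append(S)
        A *= 1 + growth                      # stock grows at the constant target rate
    return effort

effort = researchers_needed()
print(f"effort in year 1:   {effort[0]:.3f}")
print(f"effort in year 100: {effort[-1]:.3f}")  # ~2.7x year 1, rising ~1% per year
```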
Jul 5, 2023
Suppose you give an AI an innocuous-seeming goal, like playing chess, fetching coffee, or calculating digits of π. What could go wrong?

Well, there is an argument that even “safe” goals for AI could be very dangerous.

I'm going to give the argument—and then push back on it.
This thread is adapted from an essay here, in case you prefer that format: rootsofprogress.org/power-seeking-…
So the argument goes like this. For any goal:

• The AI can do better at the goal if it can upgrade itself
• It will fail at the goal if it is shut down or destroyed (“you can’t get the coffee if you’re dead”)
• Less obviously, it will fail if anyone ever *modifies* its goals
Jun 21, 2023
There is an AI doom argument that goes, in essence:

1. Sufficiently advanced AI will be smarter than us
2. Anything smarter than us, we cannot control
3. Having something in the world that we cannot control would be bad

∴ Sufficiently advanced AI would be bad. QED
One counter is to deny (1), e.g.: AI will never be that smart; intelligence is multi-dimensional and it doesn't make sense to compare them; super-human intelligence is so far in the future that we shouldn't worry about it; etc.

This is becoming less popular recently as AI advances.
Another counter is to deny (2): we can build superintelligent systems, but have them be our tools or servants.

This is probably most popular among techno-optimists.
Jun 21, 2023
Levels of safety for a technology

1. So dangerous that no one can use it safely
2. Safe if used carefully, dangerous otherwise
3. Safe if used normally, dangerous in malicious hands
4. So safe that even bad actors cannot cause harm

Important to know which you are talking about.
Arguably:

Level 1 should be banned
Level 2 requires licensing/insurance schemes
Level 3 requires security against bad actors
Level 4 is ideal!
(All of this is a bit oversimplified but hopefully useful)
Jun 19, 2023
“Optimal Policies Tend to Seek Power” supposedly gives a theoretical basis for power-seeking behavior from AI.

But it seems to just analyze a toy model and show that if you head towards a larger part of the state space, you are more likely to optimize a random reward function?
The intro claims that “power-seeking tendencies arise not from anthropomorphism, but from certain graphical symmetries present in many MDPs [Markov decision processes]”

But what is actually demonstrated seems much more trivial than that. What am I missing?
I watched the NeurIPS talk, twice: neurips.cc/virtual/2021/p…

And looked through the paper (although I didn't closely examine the formal definitions and theorems): arxiv.org/pdf/1912.01683…
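For what it's worth, here is a minimal toy simulation of the reading above. It is my own sketch, not the paper's formal construction; the two-branch setup and the function `optimal_branch` are assumptions invented for illustration. An agent at a start state can enter branch A, which reaches one terminal state, or branch B, which reaches three. Under a reward function drawn at random over terminal states, the branch covering the larger part of the state space is optimal more often.

```python
import random

# Toy sketch (assumed setup, not from the paper): branch A reaches 1 terminal
# state, branch B reaches 3. Terminal rewards are i.i.d. uniform on [0, 1].
# The optimal policy enters B whenever B's best reachable reward beats A's,
# which happens for about 3/4 of random reward functions.

def optimal_branch() -> str:
    reward_a = random.random()                         # A's single reachable reward
    reward_b = max(random.random() for _ in range(3))  # best of B's three reachable rewards
    return "B" if reward_b > reward_a else "A"

trials = 100_000
share_b = sum(optimal_branch() == "B" for _ in range(trials)) / trials
print(f"branch B is optimal for {share_b:.1%} of sampled reward functions")  # ≈ 75%
```

Whether the paper's theorems establish much more than this kind of counting fact is, as I read it, exactly the question the tweet is raising.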