1/1 Tweet thread from @NECSI's (New England Complex Systems Institute) annual conference, #ICCS2018.

@stephen_wolfram recalls the founding of modern complexity science in the '80s, when his physics toolkit couldn't explain certain fluid dynamics.
1/2 @stephen_wolfram luckily had been coding, which gave him a mindset: define a set of primitives, then propagate them to learn about the world.

This same mindset could be used with complex systems: take primitives, propagate them, and see what happens.

#ICCS2018
1/3 @stephen_wolfram started exploring certain propagation systems. He found some of them "seemed random" and could not be simplified. Systems like the digits of pi and the prime numbers.

Instead of searching for a pattern, see that the MAIN meta-pattern is randomness.

#ICCS2018
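
(To make "take primitives and propagate them" concrete: below is a minimal Python sketch of an elementary cellular automaton. Rule 30 is Wolfram's classic example of a simple rule producing apparent randomness; the toy code itself is my illustration, not his.)

```python
# Elementary cellular automaton: a row of 0/1 cells, where each cell's
# next value depends only on itself and its two neighbors.
# The "primitives" are just an 8-entry rule table encoded in one byte.

def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single "on" cell
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)

# Rule 30's center column passes statistical randomness tests, and no
# known shortcut predicts step t without running all t steps.
```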
1/4 @stephen_wolfram found that certain things were computationally *irreducible*, which means you can't just "do math" to predict how a system will propagate.

This meant that complexity modeling was not JUST convenient. It was *necessary* as well.

#ICCS2018
1/5 @stephen_wolfram has an interesting definition of technology: "Taking things that exist and applying them for human purposes."

You can think of humans as searching the infinite computational universe for things that are usable.

#ICCS2018
1/6 @stephen_wolfram

You can take this idea of "searching the computational space for tools" and apply it to "finding helpful programs". So, instead of "engineering" a program by defining it step-by-step, you just "search the computational space" for a helpful one.

#ICCS2018
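
(A toy version of that search, reusing the cellular-automaton sketch from above. The "goal" predicate here is a made-up example of mine, not one of Wolfram's:)

```python
# Searching the computational space for a helpful program: instead of
# engineering one, enumerate tiny programs (all 256 elementary CA rules)
# and keep the ones that happen to satisfy a goal we care about.

def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def final_row(rule, width=64, steps=64):
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

# Made-up goal: rules whose final row ends up roughly half on, half off.
hits = [r for r in range(256) if abs(sum(final_row(r)) - 32) <= 4]
print(f"{len(hits)} of 256 rules pass:", hits)
```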
1/7 @stephen_wolfram

He calls this "mining the computational universe".

You can imagine neural nets as an example of this process.

#ICCS2018
1/8 @stephen_wolfram

These are all examples of a *massive* shift in "how science is done". For 400 years, mathematical equations dominated. But in the last ~20 years, we've been moving towards *programs* (rather than math) as science's "how".

#ICCS2018
1/9 @stephen_wolfram

@Wolfram_Alpha is looking to be a universal computation layer for computers. A crucial piece to this is "computational contracts" (which are like smart contracts, but not trustless).

#ICCS2018
1/10 @stephen_wolfram

@Wolfram_Alpha being a universal computation layer means:
1. We need to express contracts in code. (More precise than English or legalese.)
2. We need to connect that code to reality. Oracles do this. Wolfram Alpha is the best oracle (right now).

#ICCS2018
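
(To make "contracts in code" concrete, here's a hypothetical sketch. Entirely my own illustration: the function names and the oracle interface are invented, not Wolfram's API. A weather-insurance clause becomes an executable function, with an oracle supplying the real-world fact:)

```python
# Hypothetical computational contract: "if it rains more than 10mm
# on the delivery date, the courier owes a 20% refund."
# get_rainfall_mm stands in for a trusted oracle like Wolfram|Alpha;
# the name and signature are invented for illustration.

def get_rainfall_mm(city, date):
    # In a real system this would query an oracle service.
    return 14.2  # stubbed observation

def refund_due(price, city, date, threshold_mm=10.0, rate=0.20):
    # The clause is exact: no arguing over what "heavy rain" means.
    return price * rate if get_rainfall_mm(city, date) > threshold_mm else 0.0

print(refund_due(50.0, "Boston", "2018-07-22"))  # -> 10.0
```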
1/11 @stephen_wolfram

Wanting to express human ideas in code is a crucial piece of AI alignment. We'll need to express the "constitutions for AIs" in a language they (the AIs) can understand.

#ICCS2018
1/12 @stephen_wolfram

There's a feedback loop between the world and the *language* we use to abstract it.

e.g. There were no tables. There was no language for them. Then there were table-like things. Then we abstracted that into the word "table". Then we built more.

#ICCS2018
1/13 @stephen_wolfram thinks "computational essays" are the future of journalism.

(Strong agree! See @puddingviz and @ncasenmare as two leading examples)

#ICCS2018
1/14 @stephen_wolfram

As we begin to think about the future of AI and humans, a couple things are clear:

1. We can automate getting stuff done, but not deciding what we *want* done. i.e. There's no mathematical theory of "ultimate purpose".

#ICCS2018
1/15 @stephen_wolfram

2. It's not special. Some awesome AI simulation box is not that different from a rock. Both have lots of computation happening. The main difference is simply that we've connected the box to human purpose and history.

#ICCS2018
2/1 @cesifoti from @medialab's @collectivemit gives an overview of current updates to Economic Complexity (how info/value flow in networks). Quite a good talk, imo. Some interesting results:

#ICCS2018
2/2 @cesifoti

1. More complex industries (as measured by patents) are located in larger cities. e.g. Computer patents have a superlinear exponent (1.57) w.r.t. population, while piping patents have essentially a linear exponent (1.1) w.r.t. population.

#ICCS2018
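
(If you want to check that kind of claim on your own data, the standard move is a log-log fit: patents ≈ c · population^beta, with beta the exponent he quotes. Sketch below uses synthetic numbers; the 1.57 target is from the talk, the data is fake:)

```python
# Urban scaling: fit patents ~ c * population**beta by linear
# regression in log-log space. Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(0)
population = np.logspace(4, 7, 50)                  # 10k to 10M people
patents = 1e-4 * population**1.57 * rng.lognormal(0, 0.3, 50)

beta, log_c = np.polyfit(np.log(population), np.log(patents), 1)
print(f"estimated exponent beta = {beta:.2f}")      # ~1.57 -> superlinear
```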
2/3 @cesifoti

This is the strange thing about the internet. It allows *information* to flow freely, but not *knowledge*. That still exists in geo-network hubs (like SF).

Information disaggregates, knowledge aggregates.

#ICCS2018
2/4 @cesifoti

2. Complex knowledge is "harder" to diffuse. They measured this through different GitHub languages.

Easy languages (like plain text, HTML, Python) needed few collaborators to make progress. Harder languages needed more collaborators to make progress.

#ICCS2018
2/5 @cesifoti

I'd expect Cesar's work to eventually overlap with blockchain-based value flows. (It essentially adds another layer/dimension of data for him to correlate on.) Lots of learning to be had there!

#ICCS2018
3/1 @nntaleb begins his talk with this overview:

"What happens when you look at risk *dynamically* not *statically*?"

#ICCS2018
3/2 @nntaleb

"I was lucky to be unaware of the decision science literature."

With fat tail distributions, you can't look at the average:
- "Never cross a river that is *on average* 4 feet deep"
- "If you have a blind horse, you want it to be slow"

#ICCS2018
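
(The river line is easy to make quantitative: two depth distributions with the same 4-foot average can carry wildly different drowning risk. Minimal sketch; the numbers are mine, just to illustrate the point:)

```python
# Same mean, different tails: why "on average 4 feet deep" is useless.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Thin-tailed river: depth ~ Normal(mean=4, sd=0.5), clipped at 0.
thin = np.clip(rng.normal(4.0, 0.5, n), 0, None)

# Fat-tailed river: Pareto depths rescaled to the same 4-foot mean.
fat = rng.pareto(1.5, n) + 1.0   # classical Pareto(alpha=1.5), mean = 3
fat *= 4.0 / fat.mean()          # rescale so the sample mean is 4 feet

for name, d in [("thin", thin), ("fat", fat)]:
    print(name, f"mean={d.mean():.2f}", f"P(depth > 8ft)={np.mean(d > 8):.4f}")
# thin: essentially zero chance of 8+ feet; fat: a very real one.
```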
3/3 @nntaleb

You need to analyze *from* the tail itself.

So, although 100k+ Americans die every year from cigarettes, alcohol, and obesity, we should *still* be worried about Ebola because it propagates virally (tail risk).

#ICCS2018
3/4 @nntaleb

Though some folks claim: "Hey, you're not a biologist! You can't talk about this." Nope. Anything that has tail risk turns into a problem for statisticians.

If fat tail --> statistician.
If short tail (gaussian) --> specialist (biologist).

#ICCS2018
4/1 @geochurch starts at the macro level: "We'll be talking about x-risk in the context of exponential biotech."

#ICCS2018
4/2 @geochurch

"To give you an idea of how fast these exponentials are going: In 2017, we had only used CRISPR to knock out 2 genes. Then we wanted to do 62.

It was 'embarrassingly easy'."

(They did this to enable experiments that had been blocked by bio x-risk.)

#ICCS2018
4/3 @geochurch

The exponentially decreasing cost of genome sequencing over time:

1990-2014: $3B (with some decrease near the end)
2015: $6,000 paid BY the patient (@VeritasGenetics)
2018: $500 paid TO the patient, with blockchain (@NebulaGenomics)

#ICCS2018
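
(Back-of-envelope on how steep that curve is. My arithmetic, with assumed reference years, using the talk's cost endpoints:)

```python
import math

# Rough halving time of sequencing cost, from the talk's endpoints.
# Dates are approximate: the $3B figure spans the Human Genome
# Project era, so 2001 is an assumed reference year.
cost_start, year_start = 3e9, 2001
cost_end, year_end = 500.0, 2018

halvings = math.log2(cost_start / cost_end)   # ~22.5 cost halvings
print(f"{halvings:.1f} halvings -> one every "
      f"{(year_end - year_start) / halvings:.1f} years")
# Far faster than Moore's law's ~2-year doubling.
```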
4/4 Audience question about the societal/environmental impact of his new virus eliminator:

@geochurch's answer: We took a risk when we eradicated smallpox. We have almost eradicated polio. With this, we could eliminate them all at once. We should be very cautious.

#ICCS2018
4/5 @geochurch

"We're basically doing genetics at internet speed. No, we're *already* doing it at internet speed."

#ICCS2018