Santa Fe Institute
Aug 3, 2022 · 15 tweets · 21 min read
🧵 "A #biologist's perspective of #process and #pattern in #innovation"
by SFI External Professor @HochTwit

Starting in just a few minutes on our YouTube channel.

Follow this thread for select slides and quotations...
youtube.com/user/santafein…
"I'm not going to pretend there's a unified theory of #innovation, and I'm going to explain why."

We begin with a tale of #Minitel: the original, now-extinct French "Web"...and then back further to #ARPANet...and then to theoretical precursors.


"There is no first-principles definition for #innovation."

"How could a company start with selling books online when people want to see a book in person and look through it? Nonetheless, @amazon survived..."

Before Amazon: Books.com on Telnet, later bought by B&N
"We usually credit the transformatory impacts of #innovation to Austrian economist Joseph Schumpeter and his idea of #CreativeDestruction, that entire sectors were turned over and products from the past erased by products currently developed."

Earlier, Marx & Engels:
Three paths to #innovation:

• "Can we really construct something de novo from mutations?"

• "A screwdriver in principle has no #function, but we buy it for a specific function...this is adaptation."

• And then #Exaptation of existing features.

Re: convergence on #Flight:
"Parts of the organism are co-evolving to permit the invasion to this new niche [from terrestrial life to #flight]."

On #exaptation, #recombination, #coevolution, #birds, and #planes, emerging incrementally from performance innovations:
1) It takes 260 suppliers to make the parts for a @Boeing 787. Each of those suppliers requires myriad other suppliers.

2, 3) On #coevolution and #invasion of traits modifying #FitnessLandscapes (see Kauffman's #AdjacentPossible). New traits are typically difficult to predict.
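(Not from the talk's slides: a minimal toy sketch of Kauffman's NK model, a standard toy formalization of a rugged #FitnessLandscape. Each of N binary traits gets a random fitness contribution that depends on its own state and K neighboring traits, and an adaptive walk explores the one-mutation "adjacent possible". Names and parameters here are illustrative only, in Python:)

import random

def make_nk_fitness(N=8, K=2, seed=1):
    """Random NK landscape: trait i's contribution depends on itself and K neighbors."""
    rng = random.Random(seed)
    tables = [dict() for _ in range(N)]   # lazily filled fitness lookup tables
    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = tuple(genome[(i + j) % N] for j in range(K + 1))
            total += tables[i].setdefault(key, rng.random())
        return total / N
    return fitness

fitness = make_nk_fitness()
walk_rng = random.Random(2)
genome = [walk_rng.randint(0, 1) for _ in range(8)]

# Adaptive walk: try one-trait mutations (steps into the "adjacent possible")
# and let a trait invade only if it raises fitness on this coupled landscape.
for _ in range(200):
    i = walk_rng.randrange(8)
    neighbor = genome[:]
    neighbor[i] = 1 - neighbor[i]
    if fitness(neighbor) > fitness(genome):
        genome = neighbor
print(genome, round(fitness(genome), 3))

Because every trait's contribution is coupled to K others, whether a one-step neighbor is fitter is hard to foresee, which is one sense in which new traits are difficult to predict.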
On @RELenski's Long-Term Evolution Experiment (#LTEE) — looking back thousands of #Ecoli generations, researchers found precursor "scaffolding" mutations that permitted later major metabolic innovations but were themselves not responsible for them.

(#Contingency is key...)
On the horizontal transfer and #recombination of traits in #biology and #technology:

"Because cell phones become a necessary part of what we are, for most of us, and we're willing to pay the price, it's difficult to think of most of these novelties as innovations."
Sometimes major innovations never see the light of day because they're perceived as non-competitive.

"The idea is there, the patent is there, it's in the public domain, and numerous researchers have tried to revive it. But there has been no marketed device based on this."
1) "The cell phone did away with the bottom three. Internet has done away with the top three."
#CreativeDestruction

2) On the diffusion of #innovation via #EarlyAdopters:
"Can we call an innovation something that invades only 10% of the market? 50%?"

@HochTwit speaking now:
"Perhaps there's a very #LongTail to the fixation of cell phone cameras, and a period of co-existence of [them with #DigitalCameras]."

(Are true transformatory innovations becoming rarer and rarer, or is creative destruction almost never perfect and complete?)
#Evolution + #Tech
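(Slides not reproduced here. As a rough aid to the market-share and fixation questions above, adoption of an innovation is often summarized with a logistic S-curve: slow among early adopters, fastest near 50% of the market, then a long tail toward fixation. A minimal sketch with illustrative parameters only:)

import math

def adoption_share(t, r=0.8, t_mid=10.0):
    """Fraction of the market that has adopted by time t (logistic S-curve)."""
    return 1.0 / (1.0 + math.exp(-r * (t - t_mid)))

for t in range(0, 21, 4):
    print(f"t={t:2d}  share={adoption_share(t):6.1%}")

Whether we call something an innovation at 10% market invasion, at 50%, or only at fixation is then a choice of threshold on a curve like this.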
1) Why have US patent applications slowed asymptotically over the last decade?

More efficient harvesting of existing patents?

(Red line shows what would have happened had the 2008-2009 Great Recession not occurred.)

2) #MooresLaw running up against hurdles, then #Recombination
"Are we really running out of ideas?"

"What will be the next innovation? We see these macro transformations over the last 100 years. We do not know if #QuantumComputing will ever see the light of day."

Notable differences between bio & tech include goal orientation, theory...
"I think one of the ingredients we'll need for a first-principles theory of #innovation, first of all, is this notion of 'surprisal.'"

"To what extent are there leaps available, or are we exhausting what is potentially out there?"

Read @RSocPublishing B:
royalsocietypublishing.org/doi/full/10.10…
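(Not on the slides, for reference: "surprisal" in the information-theoretic sense is just the negative log-probability of an outcome, so rarer events carry more surprisal. A minimal sketch:)

import math

def surprisal_bits(p):
    """Shannon surprisal of an outcome with probability p, in bits."""
    return -math.log2(p)

print(surprisal_bits(0.5))    # 1.0 bit: no more surprising than a fair coin flip
print(surprisal_bits(0.01))   # ~6.64 bits: a rare, highly "surprising" outcome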


