1/n The private biotech industry perhaps has at least 10-20 years more accumulated knowledge than the academic world. Most people do not realize this. It is why nation-states like China and Russia cannot create vaccines comparable to those in the West. AI will end up the same way.
2/n This large disparity in knowledge and know-how is a consequence of the evolutionary nature of biology. The same nature exists in deep learning AI. Evolution's creativity is a consequence of frozen accidents, and these accidents cannot be derived from first principles.
3/n Genentech, an early pioneer in genetic engineering, has specially engineered organisms (e.g., mice) that the rest of the world cannot access. Many biotech companies have a secret sauce that can only be discovered through experimentation.
4/n The same will happen with deep learning AI. We have reached the stage of development where firms like OpenAI keep their cards very close to their chest. There are scant details on the methods used in ChatGPT or GPT-4. We can only speculate as outside observers.
5/n Most biological material is available to everyone, so it's possible for biotech firms to create genetic material with similar function via convergent evolution. In AI, access to the largest models is monopolized by a few players.
6/n But surprisingly, open-source models such as #stablediffusion have created their own cottage industry of innovation that has even surpassed closed-source systems. This is because a private company is unlikely to be clairvoyant about the entire long tail of use cases.
7/n Furthermore, there is a new debate in which private companies like OpenAI argue that open-sourcing AI is unwise because it's too potent. theverge.com/2023/3/15/2364…
8/n The most potent biotechnology and the most potent cognitive technology will be in the hands of private corporations. But how is this any different from the present reality that the most potent decentralized coordination is in the hands of private companies (see: money)?
9/n There you have it, folks: the most powerful technologies will always be in the hands of private companies. It has been like this for a very long time (see: central banks). Do not expect that to change in biotechnology or in AI.
10/n There is often this argument that everyone in AI knows what other AI groups know. This *was* true before ChatGPT. Knowledge blindness is more likely for technologies of an evolutionary nature. There is a wall of ignorance as a consequence of the Halting problem.
11/n That wall of ignorance becomes a defensive moat when companies finally discover its potency. You see, prior to 2022, deep learning AI was not potent enough. Then a phase transition happened, and all of a sudden we are in an entirely new regime.
12/n There are "hyperobjects" in this world that contain know-how and knowledge that are *do not* have mobility. They are affixed in the development and evolution of an "organism" (i.e., a living thing, an AI, a language, a corporation, a community, a city, a nation-state,… twitter.com/i/web/status/1…