Artificial General Intelligence is the quest to automate human cognition. AGI is not superintelligence.
Computers and handheld calculators can perform calculations that are beyond the capabilities of any human. Yet we don't call these things intelligent, despite their 'superhuman' computational capability.
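To make that concrete, here is a trivial Python sketch (my own illustration, not from the thread) of 'superhuman' computation that no one would mistake for intelligence:

```python
# Python evaluates a 2,568-digit number instantly -- far beyond unaided
# human arithmetic -- yet nobody calls the interpreter intelligent.
import math

n = math.factorial(1000)   # 1000!
print(len(str(n)))         # 2568 digits
```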
Can AGI lead to superintelligence? The current consensus is yes.
Is human cognition a kind of superintelligence? No. We can't do what computers do.
Can the invention of AGI offset the risk of a superintelligence? I think it can.
Can superintelligence happen without AGI? It's possible.
Superintelligence is more alien than human. Think of how AlphaZero's play is depicted: not as better human play, but as something alien. It's more than human.

More from @IntuitMachine

16 Jun
It is often said that money buys freedom. But it is rarely mentioned that money also buys stability and reduces uncertainty. This other utility is what people seek more than freedom.
A majority of us will sacrifice our freedom and our youth for the stability and certainty of a steady paycheck. They say money buys happiness...
but in reality, it buys predictability. As humans, we value competence, and you cannot feel competent in a world you cannot predict.
14 Jun
An unexplored scenario for humanity is that as technology becomes more advanced, we become more adept at detecting aliens. But the aliens have no interest in our affairs, so we know of their existence but they never attempt to interact.
The prime directive envisioned in Star Trek may in fact exist and we just happen to be that backward civilization. It's just like how we treat tribes in the Amazon. We allow them to thrive in complete isolation.
Any advanced civilization will likely have a history where members of their race chose to live in isolation. So it should be no surprise that they would leave our civilization alone to grow up on its own.
14 Jun
So I'm reading this blog post from Walid Saba where he gives 5 examples that are difficult for NLP systems. medium.com/ontologik/sema…
(1) Sara likes to play bridge
(2) Sara has a Greek statue in every corner of her apartment
(3) Sara loves to eat pizza with her kids (with pineapple)
(4) Sara enjoyed the movie
(5) The White House criticized the recent remarks made by Beijing
I ran these against GPT-3 to validate its understanding of them. GPT-3 appeared to 'know' that there was more than one statue in (2), and it wasn't able to resolve what (4) might mean. However, it disambiguated the others correctly.
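A minimal sketch of such a probe, assuming the GPT-3-era OpenAI Completion API; the engine name, prompt framing, and follow-up question here are my own illustrative choices, not the exact setup used:

```python
# Hypothetical probe of GPT-3's reading of Saba's examples.
# Engine name and prompt wording are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

examples = [
    "Sara likes to play bridge.",
    "Sara has a Greek statue in every corner of her apartment.",
    "Sara loves to eat pizza with her kids.",
    "Sara enjoyed the movie.",
    "The White House criticized the recent remarks made by Beijing.",
]

for sentence in examples:
    prompt = f"Sentence: {sentence}\nQuestion: What does this sentence imply?\nAnswer:"
    response = openai.Completion.create(
        engine="davinci",   # GPT-3 base engine circa 2021
        prompt=prompt,
        max_tokens=60,
        temperature=0.0,    # near-deterministic output, easier to compare
    )
    print(sentence, "->", response.choices[0].text.strip())
```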
14 Jun
The only thing that remains constant is change. If you think about it, what is constant is relative to time. Furthermore, what is constant is relative to what is moving. For physicists, something that is constant (i.e., invariant) describes a symmetry.
A symmetry is furthermore defined as a *change* in a reference frame. In other words, you can't define a constant unless there is something that changes.
What is constant is what is perceived not to change. That is the very definition of an abstraction. We are able to perceive the world because we abstract it into things that don't change. Categories are also things that don't change.
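In the physicist's notation, that idea reads as a standard invariance condition (a textbook formulation, not something stated in the thread):

```latex
% f is constant (invariant) under a symmetry group G when every
% transformation T_g changes the reference frame but leaves f alone:
\[
  f(T_g(x)) = f(x) \qquad \text{for all } g \in G .
\]
% The "constant" f is only meaningful relative to the changes T_g:
% no change, no invariant.
```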
14 Jun
I agree that a single deep learning network can only interpolate. But a GAN or a self-play network can obviously extrapolate. The question, though, is where the ingenuity is coming from. medium.com/intuitionmachi…
I'm actually surprised that @ecsquendor, who makes good videos summarizing the state of understanding in deep learning, is fawning over @fchollet's ideas. I'm perplexed by what Chollet calls extrapolation:
There's also this depiction of different kinds of generalization. I find it very odd and outright wrong.
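To see what "a network can only interpolate" means in the simplest case, here is a toy sketch (my own, not Chollet's or the video's): a curve fit on a narrow range tracks the data inside it and goes wildly wrong outside it.

```python
# Toy illustration: a polynomial fit interpolates well inside its
# training range but extrapolates badly outside it.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 50)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.standard_normal(50)

# Fit a degree-9 polynomial on [0, 1].
coeffs = np.polyfit(x_train, y_train, deg=9)

print("inside  [0,1]:", np.polyval(coeffs, 0.5))  # close to sin(pi) = 0
print("outside [0,1]:", np.polyval(coeffs, 3.0))  # wildly off
```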
13 Jun
The "Reward is Enough" paper offers a piss-poor explanation as to how AGI is achieved. I'm actually more surprised that the @DeepMind wrote such a poorly constructed philosophical paper. sciencedirect.com/science/articl…
Keneth Stanley @kenneth0stanley actually has a much better construction which he presents in this paper. dl.acm.org/doi/abs/10.114…
The major flaw of the 'Reward is Enough' paper is that the authors don't know how to disentangle the self-referential nature of the problem. So it's written in a way that sounds like a bunch of poorly hidden tautological statements.
