Panel: Will we hit an AI Singularity? #FQXi2019
Michael Vassar: What is the singularity? Earlier meaning: there would be feedback as AI makes ever more sophisticated AI, and we could reach a point where AI takes over. Expected between 2005 and 2030: a switch from human intelligence as the driver of history to AI as the driver.
Vassar: Other definitions of the singularity are in terms of Moore's Law. But these seem less relevant now, as Moore's Law appears to be ending.
Briggs: People are asking if AI will take over the Earth. I don't know the answer to that. But how do we get there from here? I think that's really important because it comes down to questions of what humans are for, what are our ultimate values, why are we here?
Briggs: With AI we need upward causation and downward causation. Think of AI as a natural resource. If you have a natural resource and you don't have good governance, that doesn't lead to prosperity, but to tears and bloodshed.
Briggs: If AI is mined without good governance from the start, we won't get prosperity. That's the downward causation.
Briggs: Upward causation: there needs to be a change in the values people need in order to deal with a world in which more decision-making is being taken over by machines.
Surya Ganguli: What things really impact the quality of human life? Attention is the most valuable resource we have to give to our loved ones. All these AI are designed to take our attention. What do we do about the future of work?
(Briggs = Andrew Briggs)
Ganguli: If someone is hit by a self-driving car, who is to blame? What ethics do you program into an AI to work out what decisions to make in a #trolleyproblem scenario? Racist and sexist biases get programmed into AI through training data. We need to rethink everything we are doing.
Vassar: Big picture issue. We don't know how to build artificial general intelligence (AGI), but it's also true that if we do build it, we won't know how to control it.
Moderator Susan Schneider: Will we reach a point where we lose control? How will humans retain parity?
Olaf Witkowski: Not just an issue that we can't control AI, but that AI could start to control us. If they can take over Twitter streams via Twitter bots, that's how they can control us.
OW: These things are lost if you fear AI as a technology.
Briggs: Worth making a distinction between technical decision-making and moral decision-making. We can expect machines to get better and better at technical decision-making. Like in my lab. We are happy for them to do that.
Briggs: We can also think of moral decision-making, for ourselves and for society. Our experience of humans with moral decision-making is that there is both a rational component and an emotional component. When you remove the emotion, you don't live as well.
Schneider: Problem with deep learning is the black box issue. Ganguli has mentioned that bias can be trained into systems.
OW: Big problem. It won't be solved in a day. But when I talk to Andrew Briggs I don't see the inner workings of his mind. I try and empathise with him.
Briggs: This comes into sharp focus in legal decision-making. The machine doesn't have to be perfect, but it should at least be less biased than a human. E.g. in the UK, there is inconsistency in sentencing by judges depending on who the judge is and the geographical area.
Vassar: A theme I am discovering is that we seem to have to choose between "be afraid" and "ignore it". Whereas what we should be told is "definitely don't be afraid" and "definitely don't ignore it".
Schneider: Suppose I am Elon Musk or Ray Kurzweil, and I say how about we just enhance humans, so we will be able to keep up with AI etc.
Ganguli: Sceptical. It will be easier to improve AI than enhance humans. And I don't think technology will be the solution, but a mix of tech and humanism.
Ganguli: We have people thinking about how to de-bias the data. We need to think about a new moral economy for when humans don't need to work. These are human questions. It's about who we are.
OW: If you take my phone I feel like you have cut my arm off. We couple with technology. They modify us in a deep way and augment us.
Briggs: Martin Rees would say bits of science will be too hard for humans to get their minds around without help from tech. Humans have co-evolved with crops and animals. Are humans evolving? Gene synthesis.
Question from @tegmark: People in the room can contribute to the basic human question of what we want; it is something we should all be thinking about. Also, physicists are good at thinking about the trade-off between short-term and long-term issues.
Question: Could you build an AI to understand other AI? Answer from Briggs: They've done that, and the AIs created their own language. OW: But that example doesn't show the AIs understand each other.
Vassar: It's anthropomorphic to talk about an AI and another AI.
Question: We tend to assume the singularity will involve an AI with agency and will. But maybe it will creep up another way. E.g. Google is getting very advanced. Could you have a singularity without will?
Question from Pauline Davies: Has anyone done a calculation to show that it is a win for the world if we have more AI?