Alright, #sciencefiction #neuroscience fans: this is the content you’re here for! In ~30 minutes, I will livetweet this #sfn19 “Dialogues Between Neuroscience & Society” talk on the future of AI and machine learning in human society. [Photo: bio of speaker Fei-Fei Li for the “Dialogues Between Neuroscience & Society” talk]
Any minute now we should be underway with the #SfN19 Dialogues Between Neuroscience and Society talk by @drfeifei on how AI can - and should - change the human experience. Stay tuned for livetweeting!
Yes, it is 11:10. No, the talk has not started. We haven’t even begun introductions yet. Stay patient, friends!
SfN President Diana Lipscombe kicking off the conference. Registration so far is about 28,000. Special thanks to that Creative Commons logo in the lower right; that means I can share such photos! [Photo: woman at podium by the Neuroscience logo]
Introduction includes a shout-out to the scientists who were denied visas. (Ain’t just Worldcon with this problem nowadays.) There’s a “scientists without borders” mechanism for them to remote-present. We also get a solid statement on diversity & harassment.
And now @drfeifei to tell us about the role of AI in humanity and society!
We begin with a brief history of AI: a few decades spanning from Turing to the first chess computers to Deep Blue to AlphaGo.
Computer vision - Dr. Li’s specialty - is about replicating how the human brain understands the “pixels” of the world. The neuroscientists Hubel & Wiesel showed us that edges & lines are the fundamental components.
But object recognition/understanding goes far beyond that. 10-20 years ago it still felt like an impossible problem, yet by 2017 the ImageNet object-classification error was down to 0.023, having surpassed the human benchmark of 0.06 in 2014-2015.
The real inflection point was 2012, when convolutional neural networks were first applied. (The dataset had 1,000 categories and 1.4 million images.)
Deep learning revolution as alignment of 3 technical advances: computation, algorithms, and big data.
Moore’s Law and GPU parallel processing have allowed computers to do way more. But we’ve also changed what the computers do: statistical machine learning. “Deep learning” is inspired by neural networks. And internet-driven Big Data has given us the material to train high-capacity models.
AI is no longer a niche market. Some call it the “fourth industrial revolution” as it undergoes exponential growth inside big companies and new startups. Scientists entered this field as a scientific curiosity 20 years ago, but now it’s a force for change in society!
It certainly could change society. If self-driving cars work, think about the added autonomy for e.g. blind people. But here in 2019, we all know this technology isn’t just here to bring us utopia.
Cultural changes: a simple example of older vs. younger people and how they interact with Siri and Alexa. A silly example, but there are bigger, realer ones: job displacement, privacy issues, and bias (the classic Racist Algorithms).
Einstein quote of “it has become appallingly obvious that our technology has exceeded our humanity.” Personally, I suspect this has been true at all times in human history, back to the era of stone axes.
So, how do we tackle this: “Human-Centered AI” approach. Bring human values into the development and deployment of AI.
Three pillars: (1) AI development must be guided by concern for human impact. (2) AI should strive to augment and enhance us, not replace us. (3) AI must be more inspired by human intelligence.
Now, going into Principle 1. AI not as a subfield of computer science; it’s too big/impactful. It’s its own interdisciplinary field. Needs input from humanities and social sciences.

Case study: machine learning bias!
Finance, law enforcement, etc... anyone who’s been paying attention to this field knows the danger that algorithms pose for promoting/applying bias.
One part of the solution is “dataset fairness.” Ensure diversity & representation in training sets. For example, ImageNet is undergoing algorithmic rebalancing to make sure category “programmer” isn’t all white dudes.
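(An aside from me, not the talk: here’s a minimal sketch of one common meaning of “rebalancing” - inverse-frequency reweighting, so underrepresented subgroups count equally during training. The subgroup labels and the method are my illustrative assumptions, not necessarily what the ImageNet team is actually doing.)

```python
from collections import Counter

# Hypothetical annotated examples: (image_id, subgroup) pairs.
samples = [("img1", "group_a"), ("img2", "group_a"),
           ("img3", "group_a"), ("img4", "group_b")]

# Inverse-frequency weights: each subgroup ends up contributing equally.
counts = Counter(group for _, group in samples)
weights = [1.0 / counts[group] for _, group in samples]

print(dict(counts))  # {'group_a': 3, 'group_b': 1}
print(weights)       # [0.333..., 0.333..., 0.333..., 1.0]
# These weights could seed a weighted sampler during training, e.g.
# PyTorch's torch.utils.data.WeightedRandomSampler.
```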
Also, “algorithmic fairness.” Balance the natural-language embeddings that capture human gender biases (e.g. man:doctor::woman:nurse). Use algorithms to de-bias these?
Sounds similar to survey sampling corrections - applying transformations to unwarp your warped data.
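(My aside: the best-known published version of this - Bolukbasi et al.’s “hard debiasing” - projects a gender direction out of profession words. A toy sketch with made-up 3-d vectors follows; real systems use pretrained embeddings like word2vec or GloVe, and I’m not claiming this is the exact method Dr. Li referenced.)

```python
import numpy as np

# Toy word vectors; in practice these come from pretrained embeddings.
vecs = {
    "man":    np.array([ 1.0, 0.2, 0.5]),
    "woman":  np.array([-1.0, 0.2, 0.5]),
    "doctor": np.array([ 0.6, 0.9, 0.1]),
    "nurse":  np.array([-0.6, 0.9, 0.1]),
}

# Estimate the "gender direction" from a definitional pair.
g = vecs["man"] - vecs["woman"]
g = g / np.linalg.norm(g)

def debias(v, direction):
    """Remove the component of v lying along the bias direction."""
    return v - np.dot(v, direction) * direction

# Profession words shouldn't carry gender: project that component out.
for word in ("doctor", "nurse"):
    vecs[word] = debias(vecs[word], g)

# After debiasing, both words are orthogonal to the gender axis.
print(np.dot(vecs["doctor"], g), np.dot(vecs["nurse"], g))  # ~0.0 ~0.0
```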
Third, “competing fairness.” Prevent disparity amplification; promote distributional robustness (protect minority-group performance over time). I want to know more about that, but details were scant here.
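(My best guess at the formalism, since the slide went by fast: one common formulation of distributional robustness optimizes the worst group’s loss instead of the average, so a minority subgroup can’t quietly degrade. The losses and groups below are invented for illustration.)

```python
import numpy as np

# Toy per-example losses and subgroup labels from a model evaluation.
losses = np.array([0.2, 0.3, 0.25, 0.9, 1.1])  # hypothetical values
groups = np.array(["a", "a", "a", "b", "b"])   # subgroup membership

# The average loss hides the minority group's poor performance...
print("average loss:", losses.mean())

# ...so a distributionally robust objective targets the WORST group's loss,
# protecting minority-group performance.
group_losses = {g: losses[groups == g].mean() for g in np.unique(groups)}
print("per-group:", group_losses)
print("robust objective:", max(group_losses.values()))
```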
“Decision-making fairness” is about making e.g. race-blind decisions. I think doing this again involves “de-warping” data: altering its details to avoid presenting human eyes with things that will trigger our biases? Not sure I understood this, it went by fast!
Principle #2: AI should strive to augment and enhance us, not replace us.
Obviously, many folks fear robots/AI taking jobs. But we’d rather use intelligent systems to boost human capabilities. A parallel: using driving-automation systems to enhance human driving.
Medical error kills 250k people in the US every year - while car accidents kill 30k. $36B/year spent on dealing with falls among unmonitored elderly.
More specifically, hygiene in hospitals - 100k deaths/year from hospital-acquired infections! So: a test project using self-driving-car sensor technology in a hospital unit, with 11M datapoints/sec creating a 3D reconstruction of the human environment and activities.
With this data, they can find important locations/objects, and use deep-learning algorithms to see whether doctors leaving important places (e.g. patient beds) pass by a hand-sanitizer dispenser or not!
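(To make that concrete, my own toy sketch of the downstream check: given a tracked exit trajectory and known dispenser locations, did the path come within reach of a dispenser? The real system works from depth sensors and deep networks; the coordinates and radius here are hypothetical.)

```python
import numpy as np

# Hypothetical tracked path of a clinician leaving a patient bed (x, y in
# meters), plus the known hand-sanitizer dispenser locations in the room.
trajectory = np.array([[1.0, 1.0], [1.5, 1.2], [2.2, 1.1], [3.0, 0.4]])
dispensers = np.array([[2.0, 1.0], [5.0, 5.0]])

def used_dispenser(path, stations, radius=0.5):
    """Did the path pass within `radius` meters of any dispenser?"""
    dists = np.linalg.norm(path[:, None, :] - stations[None, :, :], axis=-1)
    return bool((dists < radius).any())

print("clean exit" if used_dispenser(trajectory, dispensers) else "dirty exit")
```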
Go deeper: are “dirty exits” associated with anything else about physician behavior, or patient details, or room design?
Current technology solution is to pay humans to stand there with clipboards and record whether people sanitize their hands.
With this data, they can test solutions for ways to promote better sanitization.
Another example: smart sensors on the elderly to analyze their activities. Find out where and when those activities happen; monitor social isolation, nutrition, early signs of dementia, etc. Gait changes are a great early sign of many conditions.
This all lowers costs, improves safety, and lets clinicians spend more time with patients. (I wonder: doesn’t that in practice lead to automation job loss? If one nurse can cover 2x as many people, you know under capitalism that will mean firing half the nurses.)
Ok, principle 3: AI must be inspired by human intelligence.
If we think AI should be working alongside and collaborating with humans, it needs to be able to understand & interact with those humans.
Example: picture of human and dog and torn-up couch. AI can see those objects; humans can see the dog’s guilt and the human’s incipient anger/frustration.
Today’s AI is static and disembodied, with simple goals. Humans, by contrast, are dynamic, multisensory, complex, uncertain, and interactive.
This doesn’t require exactly copying human “hardware,” but it’s a damn good inspiration.
Example(s) of making AI interact richly with the world. Enabling an AI agent to interact like a baby: infants are curious and play with their environment; they don’t read a million labeled objects!
A “world model” neural network predicts the consequences of the AI agent’s (“baby’s”) actions, plus a “self model” network predicts the errors of the world model (a form of self-awareness), plus learning algorithms.
Ongoing work, not solved. But the agent seems to go through learning stages (naturally emergent, not externally directed): first self-motion, then object attention, then object learning.
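(My sketch of that architecture, heavily simplified: the world model learns to predict action consequences, the self model learns to predict the world model’s error, and the agent acts where it expects the most surprise. The state/action dimensions and network sizes below are my toy assumptions; the real agent works from vision.)

```python
import torch
import torch.nn as nn

STATE, ACTION = 8, 2  # toy dimensions; the real agent sees raw pixels

# World model: predicts the next state given current state and action.
world_model = nn.Sequential(nn.Linear(STATE + ACTION, 32), nn.ReLU(),
                            nn.Linear(32, STATE))
# Self model: predicts how wrong the world model will be - its own error,
# a simple form of self-awareness.
self_model = nn.Sequential(nn.Linear(STATE + ACTION, 32), nn.ReLU(),
                           nn.Linear(32, 1))

def choose_action(state, candidates):
    """Curiosity: pick the action the self model expects to surprise us most."""
    inputs = torch.cat([state.expand(len(candidates), -1), candidates], dim=1)
    predicted_error = self_model(inputs).squeeze(1)
    return candidates[predicted_error.argmax()]

# One interaction step (the environment is stubbed out here for illustration).
state = torch.randn(1, STATE)
action = choose_action(state, torch.randn(4, ACTION)).unsqueeze(0)
next_state = torch.randn(1, STATE)  # would come from the environment

# Train the world model against reality, and the self model against the
# world model's actual error. (Backprop/optimizer steps omitted.)
pred = world_model(torch.cat([state, action], dim=1))
world_loss = (pred - next_state).pow(2).mean()
self_pred = self_model(torch.cat([state, action], dim=1)).squeeze()
self_loss = (self_pred - world_loss.detach()).pow(2)
```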
Another example: learning to interact with objects/tools. Recognition > Understanding > Manipulation. Ideally this supports complex reasoning and execution, e.g. multiple uses for the same tool.
This requires vision oriented around action: understanding grasp points and function points. Objects have affordances, which can differ depending on how you grasp/orient them! This is cool, I could go on about this for a while...
...affordances (“what action opportunities an object affords/allows”) are a critical principle for the way our brains interact with the world. Will later link to my @clarkesworld nonfiction piece that talks about this, as a text intro.
Anyways. Computer vision will play a big role in this: identifying all the things that can be done with objects (tools and targets). Complex multi-step manipulation processes. The whole vision/planning process in an embodied context.
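(As a data structure, the affordance idea might look something like this - my own illustrative sketch with hypothetical grasp/function coordinates, not anything shown in the talk.)

```python
from dataclasses import dataclass

@dataclass
class Affordance:
    """An action opportunity an object affords, tied to where you act on it."""
    action: str
    grasp_point: tuple     # where to grip the object
    function_point: tuple  # where the object does its work

# Hypothetical annotations: one tool affords several actions, each with
# its own grasp/function geometry.
hammer = [
    Affordance("pound", grasp_point=(0.0, 0.30), function_point=(0.05, 0.0)),
    Affordance("pry",   grasp_point=(0.0, 0.30), function_point=(-0.05, 0.0)),
]

def plan(affordances, goal_action):
    """Pick the affordance matching the goal; a real system infers these from vision."""
    return next(a for a in affordances if a.action == goal_action)

print(plan(hammer, "pry"))
```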
Next example: interacting with humans! Trying to create an AI agent that interacts with humans toward the goal of gaining knowledge.
Specifically, visual knowledge: humans helping the AI understand what it sees. Normal human chat doesn’t do a great job of naming/identifying objects; the AI’s goal is to get that info.
Imagine the Instagram post “look at this cute critter outside my apartment!” The AI engages with the human, trying to ask questions and maximize engagement to expand its knowledge.
That “maximize engagement” is critical, because these need to be real conversations - that’s the secret goal here. There’s nothing new in making an AI ask “what animal is that?”
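(How might an agent balance those two goals? A toy sketch of one obvious approach: score candidate questions on expected knowledge gain vs. predicted engagement, then pick the best trade-off. The candidates, scores, and weights are all invented; a real system would learn them.)

```python
# Hypothetical candidate questions about a photo, each scored by (learned,
# here invented) estimates of knowledge gain and conversational engagement.
candidates = [
    {"q": "What animal is that?",        "info": 0.9, "engagement": 0.2},
    {"q": "Aww! Does it come by often?", "info": 0.4, "engagement": 0.9},
    {"q": "Cute! What do you call it?",  "info": 0.8, "engagement": 0.7},
]

def pick_question(cands, w_info=0.5, w_engage=0.5):
    """Trade off knowledge gain against keeping the human in the conversation."""
    return max(cands, key=lambda c: w_info * c["info"] + w_engage * c["engagement"])

print(pick_question(candidates)["q"])  # -> "Cute! What do you call it?"
```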
And, rather abruptly, that concludes this whirlwind tour of research (mostly Stanford research b/c that’s where Dr. Li is). Finishing with a slide of the Three Principles, which I was too slow to photograph.
While they set up the Q&A, here’s that promised link to my @clarkesworld article that talks about the “Affordances Competition Hypothesis,” which is a theory of how affordances (action opportunities) underpin brain function. clarkesworldmagazine.com/kinney_01_17/
First question is about empathy and how we can build that in. Dr. Li is amused by the human-centric crowd. She thinks the first step is “theory of mind”: understanding what other actors are thinking/wanting/doing.
There’s work on sentiment (affect) analysis; researchers are working on ways to implement that ability. But yes, more broadly, this IS something we should include in our plans.
Hah, next question is whether subtitles for this talk came from human or AI. Okay I think I just heard a human say “it’s a machine” while the subtitles said “[a human]”??? Hoping I misunderstood something there.
Missed a Q, I think about how brain-driven this engineering is. Answer is analogy from bird flight to airplanes: we use totally different mechanisms, but made possible by our understanding of the biological version.
I’m not going to report every single question. The moderators say there are literally hundreds of audience questions, we ain’t going to get through them all, even here live!
Interesting Q about how to avoid public fear. Answer: get the public engaged in the process. There’s stuff anyone can do, you don’t need to be writing deep learning code. Part of this is for you #SciFi writers out there, who will bring it into peoples’ lives.
Diversity & inclusion: ~2013 after ImageNet classification got “solved” by convolution neural networks, she saw the technology’s first appearance, and how everyone in charge at the time looked the same. Then she was the only woman professor of a Stanford AI lab.
“Machine values are human values. What we make reflects who we are.” Great pull quote from @drfeifei during this Q&A of her #SfN19 talk. Doubt it’s a new quote today, but I likes it.
Part of this inclusion process also involves spreading AI research beyond universities like Stanford - partnering with local universities all over the nation/world.

Applause from the audience for the whole diversity answer.
Q: To fix these societal biases, should we become comfortable about AI making “better” decisions than humans, and who should make the final call?
A: Not “become comfortable,” that future is here. Who drives anymore without an AI map running? But for the big moral questions, we need to have a societal grappling. Answers may be sector-by-sector (e.g. commercial and medical fields don’t need the same answers).
Personally I feel like “society needs to grapple with this” is the easy answer to a lot of questions about “is this right??” I’m not sure if/how our current society is equipped to do that. It won’t be a genuine multi-stakeholder discussion...
...In practice you’d get 30% of people parroting whatever opinion Fox News provided, the rest a Facebook morass of toxicity/bubbles/advertisers, etc. Our current media (social and otherwise) just don’t promote actual discussions.
Not that I have a better solution, mind you. We still do need societal discussion of big moral questions. I just wish we had a way to do that.
Q about how AI can help solve climate change. Easy question! Answer is for data gathering, both about climate and to drive policy. (Like that healthcare example, AI can help us understand what humans do.)
Q: Brain-machine interfaces. Ethical concerns of human augmentation? A: Early in her career, she was focused only on the science (getting something to work), not on social impact. Now that AI is getting deployed, researchers think more about the social side. A similar arc is coming for BMI.
Recognize both the journey and the responsibility - and ultimately the opportunity to invite a multi-stakeholder conversation as early as possible.
Q: Rich vs poor countries. How can it help life in poor countries too? A: Can also think about socio-economic divides within USA. Like all tools, AI is a double-edged sword. Must try not to let it polarize distributions of access/wealth/etc.
Because it’s a computing technology, AI could potentially be relatively accessible to people worldwide - more so than a physical technology. Also, can we use AI to democratize access to education/expertise?
Thanks everyone for reading! I’ll come add reactions in a bit, but first I need to bow out for a bit so I can watch @humansareawesme’s TED talk. Go livestream it yourself if you like!
P.s. live feed of @humansareawesme’s cool space talk momentarily at tedxvienna.at/abouttime/