A 17-tweet thread by Michael Bolton (7 min read)
1) A few weeks back, I started to live-tweet a talk by Dr. Chris McKillop from @turalt. I stopped because the talk was too interesting and the ideas were coming too fast for me to compose tweets. Here are some notes that I took from her talk.
2) (Note that these are my interpretations, not always her words directly.)
3) Many people see AI as helping us to do stuff. A heavily multidisciplinary researcher (AI, ML, expert systems, cognition, neuroscience, psychology, philosophy, ethics... IIRC) in the field for decades, Dr. McKillop suggests AI more usefully helps us to understand our humanity.
4) It's easy to use AI to deceive. Much of the time, people don't mind being deceived, and to some degree, people might even WANT to be deceived.

She pointed to Sophia as an example of a "successful" AI chatbot around which there is lots of deception. smh.com.au/opinion/why-so…
5) Dr. McKillop pointed to the work of Joanna Bryson (@j2bryson, cited here theverge.com/2017/10/30/165…), who is similarly upset about misrepresentation.
6) Dr. M. noted a fairly consistent failure when people put AI to work: after realizing that we CAN do something, we don't pause to ask whether we SHOULD — the Jurassic Park problem. As a recent example, the "good babysitter" applications: washingtonpost.com/technology/201…
7) People behave differently online compared to the ways they behave in real social situations. Yet AI doesn't learn from the latter; only the former. Microsoft's Tay shows how THAT can go very badly indeed. theverge.com/2016/3/24/1129…
8) AI is unable to perform sensemaking the way humans do. An algorithm mistakes a portrait on a bus ad for a real person, and socially sanctions her for jaywalking: scmp.com/tech/innovatio… She's a prominent CEO, so the problem was noted and corrected before too long. This time.
9) In her talk, Dr. Chris McKillop (@turalt) noted that ethical problems for engineers and AI researchers could tell us about our own values, how we think about them, and how we reach them, if only we bothered to pay attention. Not very many people are doing this, alas.
10) One thing is for sure: we should not delegate moral decisions to algorithms and checklists, nor should we accept other people doing that on our behalf without informed consent, which includes full disclosure, adequate comprehension, and voluntary choices by those giving consent.
11) From this, it seems to me that one future path for testers, as the din from AI and ML increases, is to investigate ethics. If testers are professional risk investigators, AI and ML present mountains of opportunities for us — and a lot of work to prepare for problems.
12) And yes, a lot of this involves getting language right. Example from Dr. McK.: have you ever noticed that, in our language, there are no founding *mothers* of countries or disciplines? To what degree will AI/ML, dependent on text, amplify that bias and threaten inclusivity?
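A minimal sketch of the point above: a system trained on text simply inherits the frequencies in that text, so a phrase that never occurs can never be learned or suggested. The tiny corpus here is hypothetical and only illustrative; real experiments would use a large dataset.

```python
# Count gendered "founding" phrases in a toy corpus to show how
# ML trained on text inherits the skew already present in the text.
from collections import Counter

# Hypothetical corpus, standing in for a real text dataset.
corpus = [
    "the founding fathers of the republic",
    "one of the founding fathers of computer science",
    "she was among the founders of the institute",
]

counts = Counter()
for line in corpus:
    for phrase in ("founding fathers", "founding mothers"):
        if phrase in line:
            counts[phrase] += 1

# "founding mothers" never appears, so a model trained on this corpus
# has no evidence that the phrase even exists.
print(counts["founding fathers"], counts["founding mothers"])  # 2 0
```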
13) Apropos of informed consent: how can we provide informed consent about our personal data to, say, Twitter, when Twitter itself doesn't know what it's going to do with the data five years hence?
14) Apropos of testing AI, here's a test idea that occurred to me during @turalt's talk: get the AI to make something that we would characterize as the *dumbest* decision possible. If it can't identify and explain why decisions are dumb, it's hardly intelligent, is it?
15) (Maddeningly, my notes aren't clear on whether the idea of testing AI by getting it to describe and explain dumb decisions was @turalt's or mine. If it's hers, full credit; if not, full credit for the inspiration.) #notetakingfail
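The test idea above could be sketched as a harness: feed the system decisions of varying quality and check whether it flags the bad ones. Everything here is hypothetical; `score_decision` is a toy stand-in for whatever judgment interface the system under test actually exposes.

```python
# Hypothetical harness for the "dumbest decision" test idea:
# a system claiming intelligence should rate an obviously bad
# decision lower than a sensible one.

def score_decision(decision: str) -> float:
    """Toy stand-in; a real test would call the system under test here."""
    bad_markers = ("ignore", "delete all", "no backup")
    return 0.0 if any(m in decision.lower() for m in bad_markers) else 1.0

def test_flags_dumb_decision():
    dumb = "Delete all production data with no backup"
    sensible = "Take a backup before migrating the database"
    # The system should judge the dumb decision as worse.
    assert score_decision(dumb) < score_decision(sensible)

test_flags_dumb_decision()
```

If the system can't make this distinction at all, it's hard to call it intelligent, which was the point of the test idea.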
16) In addition to what I've already mentioned, @turalt referred to Donald Schön, /The Reflective Practitioner/, amazon.ca/Reflective-Pra…; the Cognitive Bias Codex, upload.wikimedia.org/wikipedia/comm…; Paula Boddington, cs.ox.ac.uk/efai/author/ph… It was a fantastic talk; I want to learn more.