Thread by Krishna Gade, 11 tweets, 2 min read
With last week's launch of Google Cloud’s Explainable AI, the conversation around #ExplainableAI has accelerated.

But it raises the questions: should Google be explaining their own AI algorithms? Who should be doing the explaining? /thread
2/ What do businesses need in order to trust the predictions?

a) They need explanations so they understand what’s going on behind the scenes.

b) They need to know for a fact that these explanations are accurate, trustworthy, and come from a reliable source.
3/ Shouldn't there be a separation of church and state?

If Google is building models and also explaining them for customers -- with no third party involved -- are the incentives really aligned for customers to completely trust their AI models?
4/ I believe this is a catch-22 for any company in the business of building AI models.

This is why the existence of impartial, independent third parties is so crucial: they provide that all-important independent opinion on algorithm-generated outcomes.
5/ Google’s historical stance on explainability also raises questions. We've seen multiple instances where their executives questioned Explainable AI, for example:

computerworld.com.au/article/621059…
6/ In one case, an exec publicly dismissed Explainable AI, saying it won’t deliver.

hackernoon.com/explainable-ai…
7/ This makes one wonder about last week's launch:

What’s the reason for the turnaround? Did Google notice an increase in the potential market for explainability? Did they receive feedback from customers asking for it?
8/ Ethics plays a big role in explainability, because ultimately the goal of explainability is to ensure that companies are building ethical and responsible AI.

Google started an ethics board, only for it to be dissolved in less than a week. So how can we place full trust in them?
9/ For explainability to succeed in its ultimate goal of ethical AI, we need an agnostic and independent approach.
10/ Today, businesses use many different AI solutions from various firms, including Google.

To ensure consistency in the explanations across all these AI solutions, they need a centralized AI governance system.
11/ By streamlining AI governance, a business can ensure impartial and consistent explanations.

Finally, if you care about ethical and compliant AI in your organization, you should seriously look into a third-party Explainable AI solution. /end