But it raises the questions: should Google be explaining its own AI algorithms? Who should be doing the explaining? /thread
a) Customers need explanations so they understand what's going on behind the scenes.
b) They need to know that these explanations are accurate, trustworthy, and coming from a reliable source.
If Google is building the models and also explaining them to customers, with no third-party involvement, are the incentives really aligned for customers to fully trust those models?
This is why impartial, independent third parties are so crucial: they provide that all-important independent opinion on algorithm-generated outcomes. (A quick sketch of how that can work follows the links below.)
computerworld.com.au/article/621059…
hackernoon.com/explainable-ai…
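To make the third-party point concrete: an independent auditor does not need access to a model's internals to explain it. Here is a minimal, hypothetical sketch, using plain NumPy and a made-up black_box_predict() standing in for a vendor's scoring endpoint, of computing permutation-based feature importances from predictions alone. This is an illustration of the general idea, not Google's or any vendor's actual tooling.

```python
# Minimal sketch of third-party, model-agnostic explanation: the auditor only
# calls a black-box predict() function and never inspects the model internals.
# black_box_predict and the synthetic data below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def black_box_predict(X: np.ndarray) -> np.ndarray:
    # Hidden "model": feature 0 matters a lot, feature 2 a little, feature 1 not at all.
    return 3.0 * X[:, 0] + 0.5 * X[:, 2]

def permutation_importance(predict, X: np.ndarray, n_repeats: int = 10) -> np.ndarray:
    """Score each feature by how much predictions shift when that feature is shuffled."""
    baseline = predict(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the feature's link to the output
            deltas.append(np.mean(np.abs(predict(X_perm) - baseline)))
        importances[j] = np.mean(deltas)
    return importances

X = rng.normal(size=(500, 3))
print(permutation_importance(black_box_predict, X))
# Expected: feature 0 dominates, feature 2 is small, feature 1 is ~0.
```

Because the auditor only needs prediction access, this kind of check can be run by a party with no stake in how the model performs.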
What's the reason for the turnaround? Did Google notice an increase in the potential market share for explainability? Did they receive feedback from customers asking for explainability?
Google started an ethics board, only for it to be dissolved within about a week. So how can we place full trust in them?
To ensure consistency in the explanations across all of an organization's AI solutions, a centralized AI governance system is needed.
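One way to picture that consistency: the governance layer requires every AI solution to report its explanations in a single shared schema, whatever model or vendor produced them. The sketch below is purely illustrative; the class and field names are assumptions, not any real product's API.

```python
# Hypothetical sketch of a common explanation record that a centralized
# governance system could enforce across all models. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ExplanationRecord:
    model_id: str               # which deployed model made the prediction
    prediction_id: str          # ties the explanation to one specific decision
    method: str                 # e.g. "permutation_importance"
    feature_attributions: dict  # feature name -> contribution score
    explainer_party: str        # "first_party" or "third_party"
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ExplanationRecord(
    model_id="credit-risk-v3",
    prediction_id="txn-42",
    method="permutation_importance",
    feature_attributions={"income": 0.61, "age": 0.02, "debt_ratio": 0.37},
    explainer_party="third_party",
)
print(record)
```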
Finally, if you care about ethical and compliant AI in your organization, seriously look into a third-party explainable AI solution. /end