Discover and read the best of Twitter Threads about #ExplainableAI

Most recent (9)

🔥 AI is changing the world faster than ever. Here are some of the most mind-blowing AI news from last week that you don't want to miss:

🧵 A thread
1/ AI Builder: a new app by Microsoft that lets you create and train AI models without coding. Use them for image recognition, text analysis, and more.

nypost.com/2023/05/28/mic…

#microsoft #aibuilder #windows #programming #artificialintelligence
2/ Neural Filters: a new feature by Adobe that adds generative AI to Photoshop. Apply realistic effects to your photos, such as changing facial expressions, hair styles, age, and lighting.

nypost.com/2023/05/27/ado…

#adobe #photoshop #neuralfilters #generativeai #photography
Read 6 tweets
As a society, we must ensure that the #AI systems we are building are #inclusive and #equitable. This will only happen through increased transparency and #diversity in the field. Using already "dirty data" is not the way

Using biased data to train AI has serious consequences, particularly when data is controlled by large corporations with little #transparency in their training methods

For fair & #equitable AI we need Web3 democratized & agendaless data for AI training

The use of flawed #AI training datasets propagates #bias, particularly in #GPT-type models which are now widely hyped but are controlled by compromised #Web2 MNCs who have a poor track record in #privacy, protecting civil #liberty & preserving free speech

mishcon.com/news/new-claim…
Read 11 tweets
New article on #websites #classification discussing possible #taxonomy options that can be used (IAB, Google, Facebook, etc.), as well as #machinelearning models:
explainableaixai.github.io/websitesclassi…

list of useful resources: linktr.ee/airesearcher
a new Telegram channel where we will post about #explainableai (#XAI for short):
t.me/s/explainablea…
There are now many useful libraries available for #explainability of #AI models: SHAP, LIME, partial dependence plots (PDP), and also the "classical" feature importance.
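As a rough illustration of the "classical" feature importance idea those libraries build on, here is a minimal, dependency-free permutation-importance sketch. The model, features, and data below are hypothetical, invented for illustration; they are not from SHAP, LIME, or any other library mentioned.

```python
import random

# Toy black-box model over three hypothetical features.
# "age" has zero weight by construction, so it should score zero importance.
def model(x):
    age, income, clicks = x
    return 2.0 * income + 0.5 * clicks + 0.0 * age

def permutation_importance(model, X, n_repeats=10, seed=0):
    """Classical permutation importance: shuffle one feature column at a
    time and measure how much the predictions change on average."""
    rng = random.Random(seed)
    base = [model(x) for x in X]
    n_features = len(X[0])
    importances = []
    for j in range(n_features):
        deltas = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            perturbed = [list(x) for x in X]
            for i, v in enumerate(col):
                perturbed[i][j] = v
            deltas.append(sum(abs(model(p) - b)
                              for p, b in zip(perturbed, base)) / len(X))
        importances.append(sum(deltas) / n_repeats)
    return importances

X = [(25, 1.0, 3), (40, 2.0, 7), (33, 0.5, 1), (51, 3.0, 9)]
imp = permutation_importance(model, X)
# imp[0] (age) is exactly 0.0 by construction; income and clicks are positive.
```

Real libraries refine this idea (SHAP with game-theoretic attributions, LIME with local surrogate models), but the core move, perturb an input and watch the prediction, is the same.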
Our German blog on the topic of website #categorization: kategorisierungen.substack.com
Read 6 tweets
I was an eng leader on Facebook’s NewsFeed and my team was responsible for the feed ranking platform.

Every few days an engineer would get paged that a metric, e.g. "likes" or "comments", was down.

It usually translated to a Machine Learning model performance issue. /thread
2/ The engineer's typical workflow to diagnose the alert was to first check our internal monitoring system, Unidash, to confirm the alert was real, and then dive into Scuba to diagnose it further.
3/ Scuba is a real-time analytics system that stored all the prediction logs and made them available for slicing and dicing. It supported only filter and group-by queries and was very fast.

research.fb.com/wp-content/upl…
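The filter-and-group-by query shape described above can be sketched in a few lines of plain Python. The log schema, model names, and column names here are hypothetical illustrations, not Scuba's actual interface:

```python
from collections import defaultdict

# Hypothetical prediction-log rows, like those a ranking service might emit.
logs = [
    {"model": "feed_rank_v2", "country": "US", "score": 0.91},
    {"model": "feed_rank_v2", "country": "US", "score": 0.85},
    {"model": "feed_rank_v2", "country": "DE", "score": 0.40},
    {"model": "feed_rank_v1", "country": "US", "score": 0.70},
]

def filter_group_avg(rows, where, group_by, metric):
    """Filter rows, group by a column, and average a metric --
    the shape of query a system like Scuba answers in real time."""
    groups = defaultdict(list)
    for row in rows:
        if all(row[k] == v for k, v in where.items()):
            groups[row[group_by]].append(row[metric])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Average prediction score per country for one model version.
by_country = filter_group_avg(logs, {"model": "feed_rank_v2"},
                              "country", "score")
```

Restricting the query language to filters and group-bys is what lets such systems stay fast enough for on-call debugging of live model metrics.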
Read 11 tweets
I was a product manager on Samsung #ecommerce and we were testing a hypothesis that users were ready to adopt purchase of products via chat i.e. conversational commerce. I got a $1M budget to validate it. Here’s how lack of model explainability dooms business decisions. Read on…
We selected the most popular chat app, FB Messenger, to test this hypothesis. To accelerate the test, we ran ads with a promotion that opened a chat window directly into our commerce chatbot.
Now about the ad target. We had a database of millions of users who had previously engaged with the Samsung brand. The target of 300k users for the ad campaign was decided by an #ML model.
Read 10 tweets
With last week's launch of Google Cloud’s Explainable AI, the conversation around #ExplainableAI has accelerated.

But it raises the questions: Should Google be explaining its own AI algorithms? Who should be doing the explaining? /thread
2/ What do businesses need in order to trust the predictions?

a) They need explanations so they understand what’s going on behind the scenes.

b) They need to know for a fact that these explanations are accurate and trustworthy and come from a reliable source.
3/ Shouldn't there be a separation between church and state?

If Google is building models and is also explaining them for customers -- without third-party involvement -- are the incentives aligned for customers to completely trust those models?
Read 11 tweets
We've been working on #ExplainableAI at @fiddlerlabs for a year now, here is a thread on some of the lessons we learned over this time.
2/ There is no consensus on what "Explainability" means, and people use many different words to mean it.
3/ However, one thing is clear:

"AI is a Black-Box and people want to look inside".

The reasons to look inside vary from Model Producers (Data Scientists) to Model Consumers (Business teams, Model validators, Regulators, etc).
Read 14 tweets
It is amazing to see so many applications of game theory in modern software applications such as search ranking, internet ad auctions, recommendations, etc. An emerging application is in applying Shapley values to explain complex AI models. #ExplainableAI
The Shapley value was named after its inventor, Lloyd S. Shapley. It was devised as a method to distribute the value of a cooperative game among its players in proportion to their contributions to the game's outcome.
Suppose 10 people come together to start a company that produces some revenue. How would you distribute that revenue among the 10 people so that each payoff is fair and proportional to their contribution?
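For a smaller version of this thought experiment, the exact Shapley payoff can be computed by averaging each player's marginal contribution over every order in which the coalition could form. A sketch with three hypothetical founders and made-up coalition revenues:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings in which the coalition can form."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: total / len(orderings) for p, total in phi.items()}

# Hypothetical 3-founder company: revenue produced by each possible coalition.
revenue = {
    frozenset(): 0,
    frozenset({"a"}): 100, frozenset({"b"}): 100, frozenset({"c"}): 0,
    frozenset({"a", "b"}): 300, frozenset({"a", "c"}): 150,
    frozenset({"b", "c"}): 150,
    frozenset({"a", "b", "c"}): 400,
}
phi = shapley_values(["a", "b", "c"], lambda s: revenue[s])
# The payoffs sum to the full 400 (efficiency), and "a" and "b",
# being interchangeable in the revenue table, receive equal shares.
```

Explaining a model prediction works the same way: features play the role of players, and the "revenue" of a coalition is the model's output with only those features present. Exact enumeration is exponential in the number of players, which is why libraries like SHAP use approximations.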
Read 11 tweets
Today, @NITIAayog released a Discussion Paper titled ‘National Strategy for Artificial Intelligence’. #AIforAll
niti.gov.in/writereaddata/…
We provide our initial thoughts on the paper here. 1/n
We welcome this initiative by @NITIAayog, but a call for comments would have been a welcome addition. The paper takes important steps forward from the #AI Task Force report released earlier this year by the DIPP, Ministry of Commerce and Industry (dipp.nic.in/sites/default/…). 2/n
This paper attempts a more holistic look at a broader range of issues concerning AI including #regulation, #ethics, #fairness, #transparency and #accountability. However, a number of issues still remain with this paper. 3/n
Read 41 tweets
