Discover and read the best of Twitter Threads about #XAI

Most recent threads (11)

Can we trust what an artificial intelligence (#IA) tells us? How do we interpret its decisions?

I'm here to tell you how my thesis at @BigData_uc3m @uc3m aims to shed some light ☀️ on these very opaque algorithms 🌑!

Opening a #HiloTesis thread 👇
In a matter of months, #IA has gone from being a topic only researchers talked about to being on everyone's lips. And with that have come the worries, and people rewatching Terminator to be prepared before the robots 🤖 take over!
Luckily, we are still some way off from that apocalyptic scenario, but can you imagine artificial intelligences making important decisions without us knowing how? Well, in a way, this has been happening practically every day for a while now. 😱
As a society, we must ensure that the #AI systems we are building are #inclusive and #equitable. This will only happen through increased transparency and #diversity in the field. Using data that is already "dirty" is not the way

Using biased data to train AI has serious consequences, particularly when data is controlled by large corporations with little #transparency in their training methods

For fair & #equitable AI we need democratized, agenda-free #Web3 data for AI training

The use of flawed #AI training datasets propagates #bias, particularly in #GPT-type models, which are now widely hyped but controlled by compromised #Web2 MNCs with a poor track record on #privacy, civil #liberty and free speech

mishcon.com/news/new-claim…
Our group @ai4life_harvard is gearing up to showcase our recent research and connect with the #ML #TrustworthyML #XAI community at #NeurIPS2022. Here’s where you can find us at a glance. More details about our papers/talks/panels in the thread below 👇 [1/N]
[Conference Paper] Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations (joint work with #TessaHan and @Suuraj) -- arxiv.org/abs/2206.01254. More details in this thread [2/N]
[Conference Paper] Efficient Training of Low-Curvature Neural Networks (joint work with @Suuraj, #KyleMatoba, @francoisfleuret) -- arxiv.org/abs/2206.07144. More details in this thread [3/N]
How to understand your prediction model and use it in real algorithmic trading? 🧵🧵🧵

Get a cup of tea or coffee and enjoy new knowledge.

#Trading #MachineLearning #ExplainableML #xAI #Shapley
In AlgoTrading it is really important to understand why your model and strategy work or don't. With that knowledge, you can adjust your features, build a better model, or use the model in other cases.
I don't believe in throwing some data together for a day and training neural networks in order to beat the market. To do that, you have to dig deep into the data, and eventually you will find something good.
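As a hedged illustration of the Shapley workflow the hashtags above point to, here is a minimal sketch using the SHAP library on a toy gradient-boosted model; the feature names (momentum_5d, volatility_20d, volume_zscore) and the synthetic target are placeholders, not data or code from this thread.

```python
# Minimal sketch: explaining a (hypothetical) trading-signal model with SHAP values.
# All features and the target are synthetic placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "momentum_5d": rng.normal(size=500),     # hypothetical 5-day momentum
    "volatility_20d": rng.normal(size=500),  # hypothetical 20-day volatility
    "volume_zscore": rng.normal(size=500),   # hypothetical volume anomaly score
})
y = (X["momentum_5d"] - 0.5 * X["volatility_20d"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)  # toy "up/down" label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer gives per-feature Shapley contributions for each prediction,
# showing which features drive the signal and in which direction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```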
One of the biggest criticisms of the field of post hoc #XAI is that each method "does its own thing", it is unclear how these methods relate to each other & which methods are effective under what conditions. Our #NeurIPS2022 paper provides (some) answers to these questions. [1/N]
In our #NeurIPS2022 paper, we unify eight different state-of-the-art local post hoc explanation methods, and show that they are all performing local linear approximations of the underlying models, albeit with different loss functions and notions of local neighborhoods. [2/N]
By doing so, we are able to explain the similarities & differences between these methods. These methods are similar in the sense that they all perform local linear approximations of models, but they differ considerably in "how" they perform these approximations [3/N]
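To make that unification concrete, here is a minimal, hedged sketch of the shared recipe (sample a neighborhood around the input, weight the samples by proximity, fit a weighted linear model to the black box). It is not the paper's code; the names (black_box, x0, kernel_width) are illustrative.

```python
# Sketch of the common core of local post hoc explanation methods:
# a local linear approximation of a black-box model around one input x0.
import numpy as np
from sklearn.linear_model import Ridge

def local_linear_explanation(black_box, x0, n_samples=1000, sigma=0.5, kernel_width=1.0):
    rng = np.random.default_rng(0)
    # Sample a neighborhood around the point being explained
    X = x0 + rng.normal(scale=sigma, size=(n_samples, x0.shape[0]))
    y = black_box(X)                                   # black-box predictions on the perturbations
    dist = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)       # proximity kernel = notion of "local neighborhood"
    lin = Ridge(alpha=1.0).fit(X, y, sample_weight=w)  # weighted least squares = choice of loss
    return lin.coef_                                   # attributions = coefficients of the local linear model
```

In this sketch, the individual methods differ in how the neighborhood is sampled, which proximity kernel is used, and which loss is minimized.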
There seems to be an almost willful confusion about the need for, and role of, explainability of #AI systems on #AI twitter.

Contrary to the often polarizing positions, it is neither the case that we always need explanations nor is it the case that we never need explanations. 🧵1/
We look for explanations of high-level decisions on (what for us are) explicit-knowledge tasks, and where contestability and collaboration are important.

We rarely look for explanations of tacit knowledge/low level control decisions. 2/
I don't need an explanation of why you see a dog in a picture, why you put your left foot 3 mm ahead of your right, or why Facebook recommends yet another page to me.

I do want one if I am denied a loan, or if I need a better model of you so I can coordinate with you. 3/
New article on #websites #classification, discussing possible #taxonomy options that can be used (IAB, Google, Facebook, etc.) as well as #machinelearning models:
explainableaixai.github.io/websitesclassi…

list of useful resources: linktr.ee/airesearcher
A new Telegram channel where I will post about #explainableai (#XAI for short):
t.me/s/explainablea…
There are now many useful libraries available for #explainability of #AI models: SHAP, LIME, and partial dependence plots (PDP). And also the "classical" feature importance.
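As a quick, hedged illustration of two of the "classical" tools just mentioned, the sketch below uses scikit-learn's partial dependence plots and permutation feature importance on a placeholder dataset and model:

```python
# Partial dependence and permutation importance with scikit-learn,
# on a toy dataset (placeholders for your own data and model).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Partial dependence: the model's average response as one feature varies
PartialDependenceDisplay.from_estimator(model, X, features=["mean radius"])

# Permutation importance: how much the score drops when a feature is shuffled
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```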
Our German blog on the topic of website #categorizations: kategorisierungen.substack.com
Interested in interpretable and explainable machine learning? Check out our new blog post with opinions on the field and 70 summaries of recent papers, by @__Owen___ and me!

Link: alignmentforum.org/posts/GEPX7jgL…
Topics include Theory, Evaluation, Feature Importance, Interpreting Representations, Generating Counterfactuals, Finding Influential Data, Natural Language Explanations, Adversarial/Robust Explanations, Unit Testing, Explaining RL Agents, and others (note: not a formal taxonomy)
We're excited to highlight the wide array of research in interpretability/transparency/explainability. We hope this work can help others identify common threads across research areas and get up to speed on the latest work in different subareas.
Really excited about our #UAI2020 paper with @IAugenstein & @vageeshsaxena.

TX-Ray interprets and quantifies adaptation/transfer during self-supervised pretraining and supervised fine-tuning -- i.e. it explores transfer even without probing tasks. #ML #XAI
arxiv.org/abs/1912.00982
TX-Ray adapts the activation-maximization idea of "visualizing a neuron's preferred inputs" to discrete inputs, i.e. NLP. Treating a neuron as an "input preference distribution", we can measure neuron input-preference adaptation or transfer. This works for self-supervised and supervised models alike.
We analyzed how neuron preferences are built and adapted during: (a) pretraining, (b) "zero-shot" application to a new domain, and (c) supervised fine-tuning.

(a) confirms that pretraining learns POS first, as @nsaphra showed, and that preferences converge like perplexity.
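For readers who want the gist in code, here is a rough, hedged sketch of the core idea only (not TX-Ray's implementation): build a neuron's "input preference distribution" by accumulating its activation mass per token, then compare the distributions before and after adaptation. The function names and the distance measure are illustrative assumptions.

```python
# Rough sketch of the "neuron as input preference distribution" idea.
# Names and the distance measure are illustrative, not TX-Ray's actual code.
from collections import defaultdict
import numpy as np

def neuron_preference(tokens, activations, neuron):
    """tokens: sequence of token strings; activations: array of shape [n_tokens, n_neurons]."""
    pref = defaultdict(float)
    for tok, act in zip(tokens, activations[:, neuron]):
        pref[tok] += max(float(act), 0.0)                   # accumulate positive activation mass per token
    total = sum(pref.values()) or 1.0
    return {tok: v / total for tok, v in pref.items()}      # normalize to a distribution

def preference_shift(p, q):
    """Hellinger distance between two preference distributions (e.g. pre- vs. post-fine-tuning)."""
    toks = set(p) | set(q)
    return np.sqrt(0.5 * sum((np.sqrt(p.get(t, 0.0)) - np.sqrt(q.get(t, 0.0))) ** 2 for t in toks))
```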
Picking my #AInight19 thread back up for the talk on AI explainability, notably with David Sadek of @thalesgroup.
Needless to say, it's an extremely fashionable topic.
A quick reference to the paper by @Quantmetry.
