As a society, we must ensure that the #AI systems we are building are #inclusive and #equitable. This will only happen through increased transparency and #diversity in the field. Training on already "dirty" data is not the way forward
Using biased data to train AI has serious consequences, particularly when that data is controlled by large corporations with little #transparency about their training methods
For fair & #equitable AI, we need #Web3-style democratized, agenda-free data for AI training
Flawed #AI training datasets propagate #bias, particularly in the #GPT-type models now widely hyped but controlled by compromised #Web2 MNCs with a poor track record on #privacy, civil #liberties & free speech
mishcon.com/news/new-claim…
We have already seen this bias in real life, such as #facial recognition technology that misidentifies people from certain #ethnic groups at disproportionately high rates
sitn.hms.harvard.edu/flash/2020/rac…
Additionally, the lack of transparency in the training data and methods used by corporations makes it difficult to detect and address bias in AI systems
brookings.edu/research/algor…
The development of explainable AI (#XAI) is not keeping pace with advances in AI, making it ever harder to understand the #blackbox nature of AI decisions
As a society, we must demand that investment in #ExplainableAI keeps pace with the development of #AI & #AGI
engineering.dynatrace.com/blog/understan…
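To make the #XAI idea concrete, here is a minimal sketch of one common model-agnostic technique, permutation feature importance: shuffle one input feature and see how much the model's error grows. The toy model, data and function names below are hypothetical, purely for illustration, not any vendor's implementation.

```python
import random

def model(x):
    # Toy "black-box" model: depends mostly on feature 0,
    # slightly on feature 1, and not at all on feature 2.
    return 2.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mean_squared_error(X, y, predict):
    # Lower is better; used as a stand-in for model quality.
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(X, y, predict, feature, seed=0):
    # Shuffle one feature's column and measure how much the error
    # grows; a large increase means the model relies on that feature.
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_shuffled = [list(x) for x in X]
    for row, v in zip(X_shuffled, col):
        row[feature] = v
    base = mean_squared_error(X, y, predict)
    return mean_squared_error(X_shuffled, y, predict) - base

# Hypothetical dataset: feature 2 is constant, so shuffling it
# changes nothing and its importance comes out as zero.
X = [[i, i % 3, 7] for i in range(20)]
y = [model(x) for x in X]
for f in range(3):
    print(f, round(permutation_importance(X, y, model, f), 3))
```

Even this crude probe reveals which inputs a #blackbox model actually relies on, which is exactly the kind of scrutiny the thread is calling for.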
This lack of transparency and #ethics in AI decision-making has significant consequences, particularly in areas such as #healthcare and #finance
ncbi.nlm.nih.gov/pmc/articles/P…
A lack of diversity among the data scientists, engineers & researchers working on AI also contributes to #bias in AI systems
Biased AI systems perpetuate & amplify existing societal inequalities & #discrimination, leading to the further #marginalization of certain groups
theconversation.com/artificial-int…
It's crucial we address these issues to ensure fair & #ethical AI development: increase transparency in training data & methods, and diversify teams. More diverse & inclusive teams give us a real chance to build more robust & fair AI
computerweekly.com/opinion/Why-di…
There are efforts in industry & academia to detect & mitigate bias in AI systems & make them fairer, but we need more
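One simple example of such a bias check is the demographic parity gap: compare a model's positive-outcome rate across groups. The sketch below, with entirely hypothetical loan-decision data, shows the idea in a few lines of plain Python (production tools like Fairlearn or AIF360 do this at scale).

```python
def positive_rate(decisions, groups, group):
    # Fraction of positive outcomes (1 = approved) within one group.
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

def demographic_parity_gap(decisions, groups):
    # Largest difference in approval rate between any two groups;
    # 0.0 means every group is approved at the same rate.
    rates = {g: positive_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions with each applicant's group label.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # A: 3/4 approved, B: 2/6
```

A large gap doesn't prove discrimination on its own, but it flags exactly the kind of disparity that opaque training pipelines hide.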
The problem of bias in AI is complex, but by taking a holistic approach & addressing it at every stage of the AI dev process, we can create an ethical AI future
As a #society, we need to ensure that the AI systems we are building are #inclusive and #equitable
This will only happen through increased transparency and diversity in the field