GP
Jan 18, 2023 · 11 tweets · 10 min read
As a society, we must ensure that the #AI systems we are building are #inclusive and #equitable. This will only happen through increased transparency and #diversity in the field. Training on already "dirty" data is not the way to get there

Using biased data to train AI has serious consequences, particularly when data is controlled by large corporations with little #transparency in their training methods

For fair & #equitable AI we need Web3 democratized & agendaless data for AI training

Flawed #AI training datasets propagate #bias, particularly in the widely hyped #GPT-type models, which are controlled by #Web2 multinationals with a poor track record on #privacy, civil #liberties, and free speech

mishcon.com/news/new-claim…
We have already seen examples of this bias in real life, such as biased #facial recognition technology that disproportionately affects certain #ethnic groups

sitn.hms.harvard.edu/flash/2020/rac…
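One concrete way to surface this kind of disparity is to report a model's accuracy broken down by demographic group instead of a single aggregate number. A minimal sketch in Python (all predictions and group labels below are hypothetical toy values, not from any real system):

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Toy data: the classifier is right 3/4 of the time for group "a"
# but only 1/2 of the time for group "b" -- a gap worth auditing.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
acc = per_group_accuracy(y_true, y_pred, groups)
```

An aggregate accuracy would average the two groups together and hide exactly the disparity the audit is meant to find.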
Additionally, the lack of transparency in the training data and methods used by corporations makes it difficult to detect and address bias in AI systems

brookings.edu/research/algor…
The development of explainable AI #XAI is not keeping pace with advancements in AI, making it harder to understand the #blackbox nature of AI decisions

As a society, we must demand that investment in #ExplainableAI keeps pace with the development of #AGI & AI

engineering.dynatrace.com/blog/understan…
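Even without a full #XAI toolchain, a model-agnostic probe such as permutation importance gives a first look inside the black box: shuffle one input feature and measure how much the model's accuracy drops. A rough sketch, assuming nothing about the model beyond a predict function (the toy model and data are hypothetical):

```python
import random

def permutation_importance(predict, X, y, n_features, metric, seed=0):
    """Drop in the metric when each feature column is shuffled.

    A crude but model-agnostic explainability baseline: the larger
    the drop, the more the model relies on that feature.
    """
    rng = random.Random(seed)
    base = metric(y, [predict(row) for row in X])
    drops = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        drops.append(base - metric(y, [predict(row) for row in X_perm]))
    return drops

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy "model" that only ever looks at feature 0; feature 1 is ignored,
# so shuffling it cannot change any prediction.
predict = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.1, 0.9], [0.8, 0.8], [0.2, 0.2]]
y = [1, 0, 1, 0]
drops = permutation_importance(predict, X, y, n_features=2, metric=accuracy)
```

The ignored feature shows a drop of exactly zero, which is the kind of signal an auditor needs when the training data and methods themselves are opaque.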
This lack of transparency and #ethics in AI decision-making has significant consequences, particularly in areas such as #healthcare and #finance

ncbi.nlm.nih.gov/pmc/articles/P…
A lack of diversity among the data scientists, engineers & researchers working on AI contributes to #bias in AI

Biased AI systems perpetuate & amplify existing societal inequalities & #discrimination, leading to the further #marginalization of certain groups

theconversation.com/artificial-int…
It's crucial we address these issues to ensure fair & #ethical AI development by increasing transparency in training data & methods, plus diversifying teams. By fostering more diverse & inclusive teams, we have a chance to create a more robust & fair AI

computerweekly.com/opinion/Why-di…
There are efforts in industry & academia to detect & mitigate bias in AI systems & make them fairer, but we need more
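One widely used check from this line of work is the demographic parity gap: the difference in positive-prediction rates between groups. A minimal sketch (group labels and predictions are hypothetical placeholders):

```python
def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates across groups.

    A gap near 0 means the model selects members of each group at a
    similar rate; a large gap flags a potential fairness problem.
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Toy audit: group "a" is selected 75% of the time, group "b" only 25%.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(y_pred, groups)
```

Metrics like this are only one stage of the pipeline; they detect a disparity but say nothing about its cause, which is why the holistic approach below matters.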

The problem of bias in AI is complex, but by taking a holistic approach & addressing it at every stage of the AI dev process, we can create an ethical AI future
As a #society, we need to ensure that the AI systems we are building are #inclusive and #equitable

This will only happen through increased transparency and diversity in the field

