"We encourage the United States government to adopt a similar whole-of-government approach"
This paragraph on adversarial attacks with #NLP sounds familiar.
It almost sounds like... exactly what I reported in The Ruby Files on how Twitter weaponized NLP.
Adversarial NLP attacks can come from within the organization - the bad actor is *not* always external.
Machine Learning Warfare:
"The model misinterprets the content of the image and misclassifies it. An attacker can tailor the expected behavior of an algorithm to achieve a number of outcomes."
"Adversial #AI targets areas of the attack surface that have never previously had to be secured, the AI models themselves.
From now on, organizations need to include these in their security budgets- or risk them being exploited by attackers."
"Poisoned training data"
"We would support NIST developing a sandboxing scheme for AI"
"....test and pilot AI algorithms and tools..."
Next, let's review some highlights in the academic research paper:
If you read the research paper carefully, you will notice a common pattern emerge.
Policy recommendations in the name of AI fairness - and a recommendation to change the model's results if the output does not align w/ their definition of "fairness."
NIST was repeatedly flagged as an N-Gram in the government agency category.
Why was Twitter using Machine Learning to monitor for NIST mentions?
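For anyone unfamiliar with the mechanics, here is a minimal sketch of how n-gram matching against a term watchlist could surface agency mentions in a text stream. The watchlist, function names, and pipeline below are my own assumptions for illustration, not Twitter's actual system:

```python
# Minimal sketch: flag watchlisted n-grams (e.g. "nist") in incoming text.
# The WATCHLIST contents and max_n are hypothetical illustrations.
from collections import Counter

WATCHLIST = {"nist", "national institute of standards"}

def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def flag_mentions(text, max_n=4):
    """Return every watchlisted n-gram (n = 1..max_n) found in `text`."""
    tokens = text.lower().split()
    hits = Counter()
    for n in range(1, max_n + 1):
        for gram in ngrams(tokens, n):
            if gram in WATCHLIST:
                hits[gram] += 1
    return hits

# Usage
print(flag_mentions("We would support NIST developing a sandboxing scheme for AI"))
# Counter({'nist': 1})
```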
Do you understand how dangerous this is?
AI “ethicists” have completely circumvented elected officials for well over 5 years now to implement whatever aligns w/ their worldview.
All in the name of “AI fairness”
I want to be clear in my language on this. When I say "they," I am referring to a small group of people in ML who have hijacked the tech for nefarious purposes. I am not referring to the entire industry. There are many other people doing great work who find this deplorable.
I believe in the power of AI/ML. I want America to win the AI war. But we won’t lead with nefarious actors infiltrating the government.
They are using ML as a weapon to deploy personal ethics in the name of “AI fairness.”
If AI chatbots keep defaming living people in search, a class action will be next.
These AI chatbots are outright defaming people in the AI-generated answers.
That is not legal - defamation is not protected speech, and it exposes the publisher to liability.
“Defamation occurs if you make a false statement of fact about someone else that harms that person’s reputation. Such speech is not protected by the First Amendment and could result in criminal and civil liability.”
“Defamation against public officials or public figures also requires that the party making the statement used “actual malice,” meaning the false statement was made “with knowledge that it was false or with reckless disregard of whether it was false or not.”
“If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.” -@FTC
“Whatever it can or can’t do, #AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
“Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”; where the datasets to be studied are so large and complex that traditional…” patentimages.storage.googleapis.com/19/f6/c8/1e402…