We recently partnered with Sprinklr for an independent assessment of hate speech on Twitter, which we’ve been sharing data on publicly for several months.
It’s obviously trivial for a single person to create 10k bot accounts on one computer, each of which is tweeting slurs that are written to avoid text string detection. What matters is whether those tweets are actually shown to real users.
elonmusk/status/1638295996278136832
-
How Sprinklr Helps Identify and Measure Toxic Content with AI
A new AI model can analyze publicly available digital data to detect toxicity and provide context on the scope and impact of toxic content.
…it's uncomfortable for political leaders to wade into the conversation…aftermath of the murder of the UnitedHealthcare CEO…listen to what people are feeling. And act.
Ignore everything before the “but.” This rule will never fail you and makes it easy to keep tabs on everyone who justifies political violence, murder, etc.
THIS MATERIAL EVENT CONTAINS A 'Less than Cat 3' LEVEL OF RADIOACTIVE MATERIAL
Sources below Category 3 are either very unlikely to cause permanent injury to individuals or contain such a small amount of radioactive material that they would not cause any permanent injury.
As a @UniofOxford alumnus, I find this truly disgusting. I stand with Israel and the IDF in their fight against Hamas and every other terror group. 🇺🇸🇮🇱
Hamas chose war and they are getting everything they deserve as a result.