We recently partnered with Sprinklr for an independent assessment of hate speech on Twitter, which we’ve been sharing data on publicly for several months.
It’s obviously trivial for a single person to create 10k bot accounts on one computer, each of which is tweeting slurs that are written to avoid text string detection. What matters is whether those tweets are actually shown to real users.
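The distinction being drawn above is between counting toxic tweets and weighting them by how often real users actually see them. A minimal sketch of that idea (purely illustrative, not Sprinklr's or Twitter's actual methodology; all data below is made up):

```python
# Illustrative only: a bot farm can inflate the raw count of toxic tweets,
# but an impression-weighted metric stays low if nobody sees those tweets.

def toxic_tweet_share(tweets):
    """Fraction of tweets flagged toxic, ignoring reach."""
    return sum(t["toxic"] for t in tweets) / len(tweets)

def toxic_impression_share(tweets):
    """Fraction of total impressions that land on toxic tweets."""
    total = sum(t["impressions"] for t in tweets)
    toxic = sum(t["impressions"] for t in tweets if t["toxic"])
    return toxic / total

# Hypothetical scenario: 10k barely-seen bot tweets vs. 1k normal tweets
# with real reach.
tweets = (
    [{"toxic": True, "impressions": 2} for _ in range(10_000)]
    + [{"toxic": False, "impressions": 5_000} for _ in range(1_000)]
)

print(f"share of tweets that are toxic:      {toxic_tweet_share(tweets):.1%}")
print(f"share of impressions that are toxic: {toxic_impression_share(tweets):.1%}")
```

Under these made-up numbers, over 90% of tweets are toxic but well under 1% of impressions are, which is the gap the tweet is pointing at.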
elonmusk/status/1638295996278136832
-
How Sprinklr Helps Identify and Measure Toxic Content with AI
Sprinklr's new AI model analyzes publicly available digital data to detect toxicity and provide context on the scope and impact of toxic content.
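The tweet above also notes that slurs are often "written to avoid text string detection." A toy sketch of why that defeats naive substring matching, and how normalization helps (this is a hypothetical illustration, not Sprinklr's model; `BLOCKLIST` and the substitution map are made-up placeholders):

```python
# Hypothetical illustration, NOT Sprinklr's model: obfuscated terms slip past
# plain substring checks but are caught after simple text normalization.

import re

# Common character substitutions used to dodge string matching (assumed set).
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

def normalize(text):
    """Lowercase, undo leet-speak substitutions, drop separators, collapse repeats."""
    text = text.lower().translate(LEET_MAP)
    text = re.sub(r"[\s._*-]+", "", text)        # "b.a.d" -> "bad"
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)   # "baaaad" -> "baad"
    return text

BLOCKLIST = {"badword"}  # placeholder for a real slur lexicon

def naive_match(text):
    return any(w in text.lower() for w in BLOCKLIST)

def normalized_match(text):
    return any(w in normalize(text) for w in BLOCKLIST)

evasion = "b4d w0rd"
print(naive_match(evasion))       # substring check misses the obfuscation
print(normalized_match(evasion))  # normalized check catches it
```

Production systems go far beyond lexicons (contextual ML classifiers), but the normalization step shows the basic cat-and-mouse dynamic the tweets describe.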
There are certainly other factors at play, but widespread use of C-sections allows for larger brains, since brain size has historically been limited by the diameter of the birth canal.