The issue at the heart of Elon Musk’s lawsuit against Media Matters is who is responsible for data voids when they’re discovered. Media Matters identified a bunch of mini-data-voids on Twitter/X where problematic content lives — & then reported on them, rather than exploiting them.
When you search for an obscure or rarely-used hashtag or set of keywords — like what Media Matters did, and what a lot of people are now doing on Twitter/X — you will often stumble upon data voids, or informational dark spaces (IDS). That’s not fraud. This is a known phenomenon.
Data voids, or informational dark spaces (IDS), are extremely vulnerable to manipulation & exploitation, often by extremists. In this presentation from Defcon 2022, I explain why. (I gave birth days before this was recorded, so if I sound exhausted…I was)
It’s not fraud when someone searches for and identifies a data void. In this case, Media Matters used the platform’s own search function to identify ads that Twitter itself placed on the platform. Twitter could’ve removed ads from these search terms…but they didn’t.
Another possible outcome here is that extremists or bad actors could’ve found these data voids — aka informational dark spaces (IDS), or places where search algorithms lack sufficient input/output — and exploited them, sort of like what just happened on TikTok last week.
There’s a cybersecurity parallel here, in that searching for data voids is like searching for vulnerabilities that hackers could exploit. In both cases, you hope that someone else — anyone else — discovers them before bad actors do. It could’ve been a lot worse for Twitter.
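To make that vulnerability-scanning parallel concrete, here’s a minimal sketch of one way a researcher might flag a potential data void: if a search term returns few results, or few that link out to credible sources, that’s exactly the gap a bad actor could fill. Nothing below is a real platform API; the field names, thresholds, and example hashtag are all made up for illustration.

```python
# Minimal sketch of a data-void ("informational dark space") heuristic.
# Assumes you've already pulled recent posts matching a search term, by whatever
# data access you have; thresholds and field names are illustrative, not standard.

from dataclasses import dataclass


@dataclass
class VoidReport:
    term: str
    total_results: int
    credible_results: int
    is_potential_void: bool


def assess_term(term: str, posts: list[dict], credible_domains: set[str],
                min_results: int = 50, min_credible: int = 5) -> VoidReport:
    """Flag a search term as a potential data void when results are sparse
    and/or when few of them link to credible sources."""
    credible = [p for p in posts
                if any(d in p.get("linked_domain", "") for d in credible_domains)]
    return VoidReport(
        term=term,
        total_results=len(posts),
        credible_results=len(credible),
        # Sparse or low-credibility results are the gap that bad actors can fill.
        is_potential_void=len(posts) < min_results or len(credible) < min_credible,
    )


# Example: an obscure hashtag with only a handful of posts, none from credible outlets.
report = assess_term(
    "#someObscureHashtag",
    posts=[{"text": "…", "linked_domain": "example-fringe-site.net"}] * 3,
    credible_domains={"apnews.com", "reuters.com"},
)
print(report.is_potential_void)  # True: few results, none credible
```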
There are researchers studying this phenomenon (*ahem*) & developing proactive responses, but we can’t really do that for Twitter because we can’t access data. Good luck trying to argue that it’s fraudulent to find unflattering content using your own platform’s search feature tho
Also, it’s not a great sign that the head of Twitter doesn’t know what data voids/informational dark spaces (IDS) are. This is basically advertising a vulnerability.
So if you’re going to try to ruin someone’s life based on a video of them circulating on social media, it’s on you to verify that the video actually shows what people are claiming it shows. Some of you aren’t doing your basic due diligence, & the stakes are far too high for that.
For example: If someone actually is tearing down posters of Israeli hostages, then yes, public shaming seems appropriate. But I’m seeing more & more videos where the person is filmed tearing *something* down — yet we’re never shown what that *something* is.
As a reminder, a lot of antifascists tear down nazi/white supremacist flyers as a matter of habit, and there are countless videos of this. If someone took one of those videos and told you it was an anti-semite tearing down a poster of Israeli hostages … how would you know?
So apparently you can just start a tech company, claim that you have some crazy awesome proprietary social media analysis software, release some “exclusive reports” to the media, and get the media to report whatever you say without actually having any evidence.
I’ve watched the same company do this repeatedly, and not only do they not have any evidence for their claims, but they actually have videos on their website in which they talk about disinfo and “fake news” by describing the very things they’re doing. Brazen.
Anyway, if you’re a journalist or reporter who gets an “exclusive” report from a tech company about fake accounts or bots or hate speech or anything else on social media, please consult an outside expert (or multiple). Get in touch, I’ll help you evaluate it; so will many others.
This absolutely horrific account of Hamas’ atrocities suggests there is at least some truth to the “decapitated babies” story, though investigators can’t say whether they were decapitated before or after death, or how it happened. But it appears it did happen. themedialine.org/top-stories/ev…
Most contested stories like this *do* have some factual basis, but those facts often get distorted or misrepresented, which then makes people question whether the initial incident happened at all.
It’s a tried & true method for obscuring the truth about unthinkable atrocities.
This is a very thoughtful thread that touches on something that I’ve long tried to convey — though much less eloquently — about the futility of fact-checking at times like this. Fact-checking should be seen as a necessary but not nearly sufficient response to mis/disinformation.
In many instances, the value of information is not determined entirely or even mostly by its factual nature. People engage with information for a variety of reasons — eg, uncertainty reduction, status-seeking, identity affirmation, etc — and we are really bad at addressing this.
In many (most?) cases, people aren’t passive, unwitting “victims” of mis/disinformation — they’re active participants. This doesn’t mean they have malign motives, but it does mean that they have diverse and complex motives that aren’t addressed by fact-checking.
My research on the disinformation campaign(s) targeting the Hawaii fires, which Russia and China amplified, is featured in this new piece by @HawaiiNewsNow.
Here is the write-up I did about this disinformation campaign, which was really several lines of disinformation — involving AI, claims of “weather weapons”, and an attempt to undermine support for Ukraine — coming together at once.
According to Microsoft, Chinese state-linked operatives created AI images to bolster claims of “weather weapons” being used in Hawaii. Interestingly, I found Russian propaganda & proxy sites amplifying those exact claims: