A Guide to Countering Mis/Disinformation During a Crisis.
Amid unprecedented events, our news feeds overflow with reports, images, and videos, making real-time truth discernment challenging. This thread equips you with tools to navigate such situations effectively.
When "Breaking News" emerges, the rush to lead and control the narrative ensues. Reports are hastily assembled, drawing from private sources, alleged incident images, and precedent, often intertwined with personal opinions. This process is standard, but it also leads to issues.
The hurried reporting often results in the publication of false testimonies, mislabeling unrelated pictures and videos as connected to the event, and unsourced reports driving a false narrative about the sequence of events.
How can WE avoid this?
Step 1: Be aware of your own bias.
Your belief system influences the narrative you embrace regarding "Breaking News." People often scramble to latch onto reports, pictures, or videos that align with their preconceived conclusions about the event, forgoing a critical analysis.
Step 2: Ignore reports that don't provide a source.
You are going to have to filter through a lot of content as you try to piece together what transpired. Ignore anything that doesn't provide a source for its claims about the event.
Step 3: For reports with alleged sources, check them!
Sometimes, the sources provided are "unnamed," "anonymous," or overly vague, such as when attributed statements are as generic as "a [country] official said". My advice would be to also initially ignore these alleged sources.
Step 4: For photos/videos attributed to the event - Patience.
Check for disputes regarding the origin of the photos/videos. Misattributed media is often swiftly "fact-checked." Moreover, consider using "Google Lens" on the image/video to uncover any possible older versions.
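Reverse-image tools like Google Lens work, in part, by comparing compact "fingerprints" of images, which is why a re-uploaded or recompressed copy of an old photo can still be matched. As a rough illustration of that idea only (real systems are far more sophisticated), here is a toy average-hash in plain Python; the tiny nested-list "image" is an invented stand-in for real pixel data:

```python
# Toy average-hash ("aHash"): a simplified sketch of the kind of perceptual
# fingerprint reverse-image-search tools can use to match re-uploaded copies
# of an older photo. This is NOT how Google Lens actually works internally.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each pixel contributes one bit: above or below the mean brightness.
    return "".join("1" if p >= mean else "0" for p in flat)

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 "image" and a slightly brightened copy (e.g. a re-encoded upload).
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [200, 200, 10, 10],
            [200, 200, 10, 10]]
brighter = [[min(255, p + 15) for p in row] for row in original]

h1, h2 = average_hash(original), average_hash(brighter)
print(hamming(h1, h2))  # 0 - a uniform brightness shift doesn't change the hash
```

The takeaway: because the fingerprint survives minor edits, an "exclusive" crisis photo that is really years old will usually surface its older copies in a reverse search.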
Step 5: Don't believe fact-checkers who don't provide sources.
Often, fact-checkers will rush to correct posts. Ensure that their explanations provide sources for their claims, and treat them as you would any "Breaking News" report.
Step 6: Don't rely on the captions or subtitles of videos in languages you don't speak.
If feasible, contact a user fluent in the language for verification. If not, attempt to locate a transcript of the video. Exercise caution with translation apps as they may not always be accurate.
Step 7: Sensationalism dazzles, yet it often eludes the truth.
Be cautious of initial reports packed with buzzwords and vivid details; they often serve a purpose. During a crisis, beware of clickbait designed to exploit biases and grab attention.
Navigating reports can be complex and time-consuming. This concise list aims to help you swiftly sift through your news feeds, drawing from personal experience to steer clear of fake reports, pictures, and videos.
For a more comprehensive overview of handling misinformation:
Note: A frequently asked question is, "Who should I follow for accuracy?"
We all have biases, which can lead to mistakes. Follow those you trust, but remain cautious and ensure they source anything they post. After all, we're all human and prone to errors.
Thanks for reading!
After a discussion with @MiddleEastBuka and other researchers, I highly suspect that this video of an explosion is an AI-spliced video, created using the video on the left - and likely not the first of its kind.
Why?🧵
This video has several oddities, particularly in how the environment reacts to the explosion:
1. Both cars shift position, with the white car moving a fair distance from where it originally stood. However, the movement is unnaturally smooth, with no windows shattering and no visible damage to either vehicle.
2. The two motorcycles next to the soldier simply disappear completely, leaving only some sort of structured object just out of frame (I can't make out what it is).
Besides the quality of the footage and the physics of the explosion, what really caught my eye were the similarities this clip shares with a confirmed AI-spliced video I uncovered yesterday:
Note: BOTH videos also show the same alleged act: a strike on an Iranian officer using civilians as cover.
This isn't real footage; it's an AI-spliced video based on a real image from January 8th, 2026.
1. The AI artifacts:
A. Power lines are floating, and their pole disappears with the explosion.
B. There is a random puff of smoke from one of the individuals standing before the explosion.
2. AI-Splicing evidence:
A. All of the people are in the exact same positions as the old image.
B. The AI messed up the front gate, adding/removing features not present in the real school gate (compare the red rectangles).
This image is currently being reported as visual "proof" that a misfired air-defense (AD) system was responsible for the strike on a building reportedly housing a girls’ elementary school in Minab.
It's not.
That image was taken in Zanjan, over 1,300 kilometers from the school in Minab.
A 🧵
As shown jointly with @Stinky915846091, the misfired AD system was photographed north-west of Zanjan, towards the mountains.
There is no possible way this could have hit the school, 1,300 kilometers in the opposite direction.
Ironically, this post is itself propaganda - those images are from two entirely different cities.
1. The "Zoom Out" video was taken in Tehran (Geo: 35.726468, 51.322978).
2. The "Zoom In" video was taken in Mashhad (Geo: 36.327939, 59.498942).
How did I geolocate them? A 🧵
1. First, I ran both images through Google Lens to see whether the results would confirm the OP's claims.
I found one X account claiming the "Zoom Out" (ZO) video was from "Kashani Street". For the "Zoom In" (ZI) video, I saw a comment saying "Mashhad".
2. This doesn't mean either OP was telling the truth - but it was a start.
For the longer version of the ZO video, I had several details:
1. A garden.
2. A small street turning onto a main one.
3. Two buildings slanted inwards.
So, I just followed the street till I found a match:
AI detectors, including those using LLMs, consistently fail to reliably identify AI-generated content, often producing false positives on real images.
But there is an effective, though limited, way to identify AI-generated content using tools like SynthID.
What is SynthID? 🧵
Background:
An AI-generated image claiming to show Maduro in U.S. custody was spread widely on X, believed by many to be authentic.
I debunked it primarily using SynthID. Some questioned why I used AI to debunk AI.
I didn’t - Well, not the way you may think.
What I actually did was use SynthID.
What is it?
In simple terms, it is an invisible watermark. Every piece of media generated using Google Gemini is embedded with this marker.
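To make "invisible watermark" concrete, here is a deliberately simplified toy: hiding bits in the least-significant bit of each pixel. To be clear, SynthID's actual scheme is proprietary and far more robust (it is designed to survive cropping, compression, and edits); this sketch only illustrates the general idea that a marker can live in pixel data without visibly changing the image. The pixel values and watermark bits are invented:

```python
# Toy least-significant-bit (LSB) watermark - a conceptual illustration only.
# This is NOT how SynthID works; it merely shows that data can hide in
# pixels with no visible change to the image.

def embed(pixels, bits):
    """Overwrite the lowest bit of each pixel with one watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels, n):
    """Read back the lowest bit of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

image = [200, 201, 117, 54, 90, 33, 250, 128]   # grayscale pixel values
mark = [1, 0, 1, 1, 0, 0, 1, 0]                 # hypothetical watermark bits

stamped = embed(image, mark)
print(extract(stamped, 8))                      # [1, 0, 1, 1, 0, 0, 1, 0]
print(max(abs(a - b) for a, b in zip(image, stamped)))  # 1 - imperceptible
```

A naive LSB mark like this is destroyed by recompression, which is exactly why production watermarks like SynthID use much more resilient techniques.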
AI-Community Notes are wrong (Again) - This photo is not from the Azovstal steel plant in Ukraine.
It's from Gaza, and I managed to geolocate the rough location of the camera in Beit Lahia.
How did I find it? A Guide🧵
(H/t @Stinky915846091)
The so-called “proof” that this originated in Ukraine relies on AI-generated citations, though there are also Community Notes written by real people (albeit unapproved).
None of the images or videos presented in those searches match the screenshot in question - not a single one.
Claiming that they “look similar” is insufficient; many power stations share comparable structural features. Similarity alone does not constitute evidence.
So, how did I figure out that this is from Gaza?
1. Find an older variant of the screenshot.
2. Geolocation.