This is what cyber warfare looks like.
I am raising awareness of the following information because it is absolutely crucial that everyone understands how the fog of war can and will be weaponized in the modern age of AI technology. 🧵
Recently, this photo of a burnt baby was released by Israeli Prime Minister Benjamin Netanyahu, among other horrific images, and then circulated on social media. The others seemed real to me, but something didn't feel right about this particular image. I've been an AI hobbyist for years and have developed a keen eye for what hallucinated imagery looks like, so alarms were going off in my head when I first saw it and people began claiming it wasn't real.
I'm a medical doctor, not a forensic scientist professionally trained to assess what severely burnt flesh looks like, but even with years of physiology courses under my belt I struggled to identify the anatomical structures in what I was seeing. It looks more like a lump of charcoal in the vague shape of a humanoid form, and the lack of visual landmarks left me anxiously uncertain about what I was looking at; a common feeling when examining AI imagery. However, given my lack of experience in forensic anthropology and the appearance of burn victims, I ultimately wrote it off as a toddler disfigured by heat beyond recognition.
Then news broke that there was evidence suggesting the image was likely AI generated. A community note was even put on Ben Shapiro's post of the image, referencing an AI detection tool. I took the photo, ran it through the "AI or Not" tool, and it indeed came back as "AI generated."
However, experimental diagnostic tools like this are in their infancy and are inherently unreliable.
I ran the other images that Netanyahu posted and they all came back as "Human".
The "👍 CORRECT" or "👎" choices are there to rate if the image was successfully identified or not. This is continuous feedback to the developers who are constantly working to improve the accuracy of the tool.
In Ben's thread, other users have posted their own submissions of the baby image, with a mix of real and fake results. I'm not certain how the detector analyzes the image, but it likely takes many variables into account. It seems possible that any amount of human tampering in post-production, like cropping or resizing, editing or color correcting, or even compression or quality loss from saving and reposting the image, may throw off the detector and potentially produce a false negative result.
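For anyone curious how little it takes, here is a minimal Python sketch (assuming Pillow and numpy are installed; "photo.jpg" is a placeholder filename, not the actual image from this thread) showing how a single save-and-repost style re-encode already shifts the pixel data that a detector would be looking at.

```python
# Rough sketch: re-encode an image at a lower JPEG quality, the way a
# save-and-repost cycle would, and measure how much the pixel data shifts.
import io

import numpy as np
from PIL import Image

original = Image.open("photo.jpg").convert("RGB")  # placeholder path

# Simulate one repost: write the image back out as JPEG at quality 70.
buffer = io.BytesIO()
original.save(buffer, format="JPEG", quality=70)
buffer.seek(0)
reposted = Image.open(buffer).convert("RGB")

# Mean absolute per-pixel change introduced by a single re-encode.
diff = np.abs(
    np.asarray(original, dtype=np.int16) - np.asarray(reposted, dtype=np.int16)
)
print(f"Mean per-pixel change after re-encoding: {diff.mean():.2f} on a 0-255 scale")
```

Even a visually identical repost carries different compression statistics than the original file, which is exactly the kind of change that could plausibly confuse a detector trained on pristine outputs.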
It was pointed out that Laura Loomer's X profile picture was determined by the tool to be AI, which I tried with the same result. The picture may be AI generated, which modern AI image generators can achieve with enough images of your face to train on, but only Loomer can confirm whether that is the case. However, I also reverse image searched it, found the exact same photo on her Gab account, and ran that copy. It came back as "Human."
This is not a reliable tool.
Without knowing what percentage of an AI image can be modified before an erroneous "Human" result is triggered, it appears very difficult to use this tool to determine whether a real image has had AI covertly inserted into it.
Then, it was revealed that the same image, but with a dog instead of the baby, was posted to 4chan several hours after the first images dropped. At first glance, there are elements of the image that seem more realistic. It's easy to look at this and think, oh yes, it was actually a photo of a puppy rescue that was AI modified to look like a burnt baby, case closed.
But this is where you're wrong.
The Arlington Post analyzed both images. Error Level Analysis (ELA) is a method used to identify digital image manipulations by examining compression artifacts. Using ELA, the burned baby photo displayed consistent compression patterns, edge sharpness, noise patterns, and brightness levels, suggesting it was unedited.
Conversely, the 4chan dog photo showed distinct inconsistencies in compression artifacts, sharper edges, varying noise patterns, and altered brightness in certain areas, indicating clear signs of digital tampering.
Thus, the baby photo appears authentic, while the dog photo seems manipulated.
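If you want to try this yourself, here's a rough ELA sketch in Python using Pillow and numpy. This is the textbook version of the technique, not the Arlington Post's actual pipeline, and the file paths are placeholders.

```python
# Rough Error Level Analysis (ELA): re-save the JPEG at a fixed quality and
# amplify the difference. Regions that re-compress very differently from
# their surroundings can hint at local edits; an unedited photo tends to
# show a fairly uniform error level.
import io

import numpy as np
from PIL import Image


def error_level_analysis(path: str, quality: int = 90, scale: int = 15) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-encode the image once at a known JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")

    # The error level is the per-pixel difference, amplified so it's visible.
    diff = np.abs(
        np.asarray(original, dtype=np.int16) - np.asarray(resaved, dtype=np.int16)
    )
    return Image.fromarray(np.clip(diff * scale, 0, 255).astype(np.uint8))


# "image.jpg" is a placeholder; inspect the output for patches that stand out.
error_level_analysis("image.jpg").save("image_ela.png")
```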
I also aligned both images and took the difference to try and reveal the seams.
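Here's roughly what that differencing step looks like in Python, assuming the two images have already been cropped and registered to the same framing (filenames are placeholders). Inpainting seams sometimes show up as structure in the difference image.

```python
# Align-and-difference sketch: subtract the two grayscale images pixel by
# pixel; regions that are identical come out near black, edits stand out.
import numpy as np
from PIL import Image

img_a = Image.open("baby_version.jpg").convert("L")                     # placeholder path
img_b = Image.open("dog_version.jpg").convert("L").resize(img_a.size)   # placeholder path

diff = np.abs(
    np.asarray(img_a, dtype=np.int16) - np.asarray(img_b, dtype=np.int16)
).astype(np.uint8)
Image.fromarray(diff).save("difference.png")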
There are free AI tools that allow you to change specific areas of an image while preserving the rest. Here is one such tool, where I asked the AI to change the dog into a sandwich. It may not look as realistic, but if it took me under a minute to do this inpainting with free software, can you imagine what an entity with far more time and resources could do with a stronger model?
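To give a sense of how accessible this is, here's an illustrative sketch using the open-source diffusers library. The model name, file paths, and mask are assumptions for illustration (and a GPU is assumed), not the exact tool I used above. The mask is white where the image should be regenerated and black where it should be preserved.

```python
# Inpainting sketch with diffusers: replace only the masked region of a
# photo according to a text prompt, leaving the rest of the image untouched.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # assumed checkpoint; any SD inpainting model works
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("dog_photo.jpg").convert("RGB").resize((512, 512))  # placeholder
mask_image = Image.open("dog_mask.png").convert("RGB").resize((512, 512))   # white = replace

# Everything under the white mask region is regenerated to match the prompt.
result = pipe(
    prompt="a sandwich lying on the pavement",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("dog_as_sandwich.png")
```

That is the entire workflow: a source photo, a crude mask, and one prompt.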
So if the baby photo is real, who is behind that dog image and why?
Probably either a 4chan troll who made it for the lulz, or a government-level cyber division running a war psyop.
This is where we are at in history.
As of this post, the community note on Ben Shapiro's post has been removed. Based on the digital forensics provided and my own experience with AI image generation, I am personally leaning towards the initial images being real.
The quality of AI image generation has been increasing exponentially since tools like OpenAI's DALL-E and Stable Diffusion went public, with Stable Diffusion being open source. Anyone can effectively generate anything right now with the right know-how. I've seen what is possible and it is completely mind-blowing. I feel that few people truly understand just how real a modern AI image can look.
And this is only the beginning.
The ultimate takeaway from all of this is that one or both of these images may not be real, and there is no definitive way to confirm or deny their authenticity.
This should absolutely terrify you.
We now live in a post truth age, where advanced AI creative tools are readily accessible to anyone with an internet connection. These free tools can craft content so realistic that it becomes nearly indistinguishable from reality. Additionally, AI-driven image modifications are often so subtle that they elude detection by the average internet user and can even deceive skilled investigators and their sophisticated tools.
As contemporary digital consumers, it is absolutely imperative that we remain acutely aware of this looming threat to our very perception of reality. In the haze of misinformation and war, heightened fear can make us mentally vulnerable, amplifying our susceptibility to manipulation and social engineering by nefarious actors who wish to control our minds.
Never believe what you see online.
Reasonable skepticism is healthy.
Stay vigilant.
Please share this thread so that more people can understand the sheer gravity of what we're facing.
I usually post satirical content, but this was far too important not to discuss. If you like satire/parody content, please consider supporting me. Thanks
Update:
This is apparently the guy who created the dog image. The tweet has been deleted.
This is all so tiresome
In summary
lol
lmao even
Welcome to the post truth world.
New cyber warfare just dropped!
The original Arlington Post forensic analysis was removed due to gore, here's the reupload