.@laylamashkoor notes that a standardized code of ethics doesn’t exist for OSINT researchers the way it does for lawyers, doctors, etc. Because of that, she says, it’s important for the community to discuss and debate these issues together and strive for high standards.
.@USAmbEU has arrived ahead of his welcoming remarks here at #360OS. Here he is being greeted by @GrahamBrookie and @KHenquinet. The main stage’s events are starting shortly. We’ll be going live all day on here, Facebook and YouTube. Keep an eye out for those posts soon.
For participants waiting to join us at the main stage, coffee and tea are available up on the fourth floor.
.@USAmbEU says Russia’s use of online lies in its assault on Ukraine highlights the role of disinformation in the battle between democracy and autocracy, “making it the defining struggle of our time.”
DFRLab’s @r_osadchuk joins virtually from Ukraine. He explains that Russia aimed “false flag” lies at both internal Russian and external global audiences surrounding its invasion and supposed justifications for doing so.
“Most of them crumble” under fact checking, he said.
DFRLab’s Nika Aleksejeva joins the stage with @Ing_Dickinson to discuss their work reviewing more than 3,000 fact checks of information about Russia’s invasion of Ukraine and the trends they found. Ingrid previews a project reviewing claims made on Russian Telegram channels.
.@Ing_Dickinson notes that Russia’s disinformation campaign against Ukraine started vague, but grew more specific in its claims as time went on.
Nika warns Russian disinfo is persistent, and “will continue even when the arms are put down.”
Now: Today’s panel on the role of open source investigation in documenting war crimes and human rights violations. Have questions for our panelists? @ us, DM us, or use the hashtag #360OS
"Debunking is important. Pre-bunking is effective," @janis_sarts says. He notes Russia is playing to the fatigue of the world in the war. "We have to plan for the longer period and create our narratives [to fit] ... We have to keep up our support to Ukraine as long as necessary."
DFRLab’s @jacqumalaret walks the audience through a timeline of moderation decisions by tech companies after Russia invaded Ukraine and how Russia reacted negatively.
“This tit-for-tat … demonstrated how platform policy has increasingly become a domain for warfare.”
Nonresident fellow @katieharbath notes that moderation surrounding Russia’s invasion forced platforms to make hard choices.
1. Is platform policy a domain of war? Of course it is. 2. This is not the first time we've seen platform policy become an element of war. 3. It will not be the last time. We can expect platform policy to continue to impact future conflicts
She adds that there will never be a universally applicable "wartime policy" for platforms that works forever. That's why transparency is crucial, she argues.
"All this stuff sets a precedent" that will inform policies in future conflicts.
Aaaaaand we’re back from lunch with the #DigitalSherlocks in The Arc. @aiganysh_aidar kicks us off with a talk on using Telegram to research EU QAnon communities and more. She notes the platform’s popularity with extremist groups due to its hands-off approach to content moderation. #360OS
She adds that Telegram can also be a good source of information in countries and regions where the free flow of information is restricted. Researchers can gather a lot by learning how to use Telegram in relevant investigations. Now she’s giving a walkthrough and a demo of methods.
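For readers curious what this kind of Telegram research looks like in practice, here is a minimal sketch of one common step: keyword-filtering exported channel messages. The message structure, keyword list, and function names are illustrative assumptions, not the speaker's actual method.

```python
# Sketch: filter exported Telegram messages by keyword (illustrative only).

def match_keywords(messages, keywords):
    """Return messages whose text contains any keyword, case-insensitively."""
    lowered = [k.lower() for k in keywords]
    hits = []
    for msg in messages:
        text = (msg.get("text") or "").lower()
        if any(k in text for k in lowered):
            hits.append(msg)
    return hits

# Invented sample data standing in for a real channel export.
sample = [
    {"id": 1, "text": "WWG1WGA join the great awakening"},
    {"id": 2, "text": "Weather update for Berlin"},
]
flagged = match_keywords(sample, ["wwg1wga", "great awakening"])
print([m["id"] for m in flagged])  # → [1]
```

In real investigations the export would come from Telegram's built-in chat export or an API client, and the keyword list would be built iteratively from the communities being studied.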
Now, on the main stage: a #360OS panel on governments’ use of the NSO Group’s “Pegasus” software to crush dissent and free speech, leveraging the tool for autocratic, anti-democratic purposes. Our panelists were directly targeted with the software.
Our live stream is back! Remember that you can send questions for our panelists to answer by mentioning @DFRLab, using the #360OS hashtag, or by sending us a private message.
.@panyiszabolcs says learning he was targeted with Pegasus has changed the way he communicates with sources, pushing him to less tech-dependent ones.
@ckanimba wants people to go further than tech companies and demand accountability for governments using software like Pegasus
A somber point from @ckanimba: Since the reveal of governments' use of Pegasus spyware against critics and to try to squash them (or worse), nothing meaningful has happened to hold those responsible to account. We can't stop demanding that accountability, she says. #360OS
Now on the main stage: a panel on the challenges social media platforms face moderating non-English language content. @marwasf explains this can make non-English speakers feel discriminated against, in addition to other harms. #360OS
Downstairs, @RamyRaoof is talking to Sherlocks about surveillance and digital attacks against NGOs, independent media, activists and more. Sometimes doing great OSINT can make someone a target for harm. Ramy is teaching Sherlocks what attacks look like and how to stay safe.
Following protests in Hong Kong, the government reacted by restricting citizens’ ability to access online information.
@chungchingkwong explains how tech is being used to repress and conduct surveillance of Hong Kong citizens.
“There are a lot of cameras,” for example.
What might change something? Data minimization, says @chungchingkwong. She encourages others to imagine tech systems where users are not seen as products and invasive data collection is less incentivized. #360OS
On the main stage, our panel seeks to separate hype from fact about “web3.” @API_Economics notes a conflict between the promise of “decentralization” and how the technology actually operates in practice. @AlexZerden agrees, asking how it can be enabled inclusively.
.@NiNanjira encourages understanding web3 beyond the tech itself to understand the broader dynamics it is enabling.
"...the structure, the power that a few other people have gotten, despite the 'revolution' that was brought by the technology, is being calibrated."
Our friends at @NDI are currently talking with Sherlocks about information manipulation during elections. They’re talking with them about using social media to monitor and expose election-related threats. #DigitalSherlocks
On the main stage, @ngleicher is talking about the "defender advantage" in security: building a network in society that constantly improves its resilience to attackers and bad actors.
"When it's done right, it's substantially stronger than surprise and speed."
And our final mainstage panel of the day begins. This one is about the "Metaverse." @zanytomato explains that the metaverse is much broader than just VR headsets.
"The age of immersive technology is here. Many people just don't realize it yet," @brittanheller says.
.@lolkat on harms and abuse present in virtual reality platforms: "People often seem to treat the VR space as being on social media ... the problem is people have bodies in these spaces but they don't have autonomy." #360OS
Daniel Castaño notes a challenge in addressing dynamics in virtual reality: virtual experiences can impact the human mind in more profound ways.
@zanytomato proposes one idea: chaperones to monitor VR spaces, though she also notes it would be difficult to scale that approach.
Alright gang, that's a wrap on day one of #360OS in Brussels. Rest up, refresh, and recharge because we've got another day of panels and talks ready for you tomorrow morning! As for this humble live-tweeter...
Following the October 7, 2023 Hamas attack, the digital tool "Words of Iron" emerged to amplify pro-Israel messages & mass-report content deemed false or anti-Israel. Targeting online users in the US & abroad, it represents a shift in wartime propaganda. dfrlab.org/2024/06/11/onl…
Following the October 7, 2023 Hamas attack, the "Words of Iron" website began assisting users in sharing pre-written pro-Israel messages and mass-reporting content deemed false or anti-Israel by the tool. Thousands of accounts used it within days of the conflict.
This tool is part of an unfolding digital space, where various pro-Israel groups launched digital tools supporting Israel’s public diplomacy efforts, reflecting a significant evolution in the approach to wartime propaganda.
In a controversial move, far-right parties in the Identity and Democracy (ID) coalition used generative AI in their European Parliament election campaigns, despite a signed code of conduct prohibiting it. The latest by @gyron_bydton: dfrlab.org/2024/06/11/far…
France’s Rassemblement National and Italy’s Lega, both members of ID, deployed AI-generated imagery to support their political messaging. These images depicted migrants as “invaders,” tractors in protests, and EU politicians in an unfavorable light.
These synthetic images violated an April 2024 voluntary code of conduct mandating clear labeling of AI-generated content. Of dozens of instances identified by DFRLab, only one was labeled correctly. The ID group has used generative AI since at least October 2023.
REPORT ALERT🚨 We're excited to announce, "Another Battlefield: Telegram as a Digital Front in Russia’s War Against Ukraine." This year-long research project delves into Telegram's crucial role in Russia's information warfare against Ukraine. Read more: dfrlab.org/2024/06/10/ano…
In the two years since Russia's full-scale invasion of Ukraine, Telegram has become a pivotal platform for understanding the Russian perspective on the war. It remains one of the few windows into Russian sentiment, from the public to the Kremlin.
The Kremlin's crackdown on Western social media while leveraging Telegram solidified the platform's influence, where it has become a primary source for observing Russian narratives and state propaganda.
In a recent investigation by @gyron_bydton and @Olari_Victoria, web forensics and domain analysis reveal significant connections between Moldovan pro-Russia news outlets and local political figures. The DFRLab’s latest: 🧵1/10 dfrlab.org/2024/05/28/web…
Our investigation shows that several Moldovan media outlets share web infrastructure with Russia-backed political parties. This includes hosting services from Russian companies and shared Google Analytics codes. 2/10
Irina Vlah, the former governor of Gagauzia, is a key figure. Her 2019 campaign website was linked to multiple pro-Russian media outlets through backend data and passive DNS records, revealing Russian hosting origins. 3/10
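For those unfamiliar with this web-forensics technique: sites that share a Google Analytics tracking ID are often run by the same operator, so extracting and comparing those IDs is a standard way to link seemingly unrelated outlets. The sketch below is a generic illustration of the idea, not the investigators' actual tooling; the sample HTML is invented.

```python
import re

# Patterns cover the classic UA-XXXXXXX-X format and newer G-XXXXXXXX
# measurement IDs as they commonly appear in page source.
GA_ID = re.compile(r"\b(UA-\d{4,10}-\d{1,4}|G-[A-Z0-9]{6,12})\b")

def extract_ga_ids(html):
    """Return the set of Google Analytics IDs found in raw HTML."""
    return set(GA_ID.findall(html))

# Two invented pages: a shared ID suggests shared administration.
page_a = "<script>ga('create', 'UA-1234567-2', 'auto');</script>"
page_b = "<script>gtag('config', 'UA-1234567-2');</script>"
shared = extract_ga_ids(page_a) & extract_ga_ids(page_b)
print(shared)  # → {'UA-1234567-2'}
```

In practice researchers combine this signal with others mentioned in the thread, such as shared hosting and passive DNS history, since a tracking ID alone can occasionally be copied or reused innocently.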
New from @DFRLab🚨 An inauthentic campaign is spreading Islamophobic content targeting Canadians. The network, including 50+ Facebook accounts, 18+ Instagram accounts, & 100+ on X, amplifies hate speech through AI-generated photos & coordinated tactics. shorturl.at/bfCMX
This network, employing AI-generated photos and fake accounts, amplified the United Citizens for Canada (UCC), posing as a Canadian nonprofit while disseminating anti-Muslim narratives.
@Meta has taken initial action against these assets, but the investigation is ongoing. The network, discovered during analysis of another suspicious campaign targeting UNRWA, exhibits coordinated behavior across platforms while directly targeting Canadian media and journalists.
An investigation by @SGelava has unearthed a disturbing trend: more than 100 Facebook assets are fueling the spread of pro-Kremlin propaganda in Bulgaria through links to external websites. Read more: dfrlab.org/2024/03/26/sus…
The DFRLab discovered a network of Facebook assets promoting websites targeting Bulgarian audiences with misleading content, echoing Kremlin propaganda. This cluster comprises 44 pages, 30 groups, and 28 accounts.
Notably, these sites have been accused of spreading Kremlin disinformation, including false narratives that NATO is preparing for war with Russia by undertaking exercises in Poland.