👀🔎 Last week Big Tech companies published their first reports on how they assess the systemic risks their platforms pose to users under the EU’s Digital Services Act. Here’s what we’ve found so far: 🧵
The surveillance-based business model, toxic recommender systems and addictive design features – the very foundations of these tech giants' businesses – don't feature as sources of systemic risk.
Instead, the reports focus on the symptoms (“bad” user behaviour, ineffective content moderation) and ignore the root cause: bad platform design.
This is despite overwhelming evidence that these systems cause harm, including the spread and amplification of hate, disinformation and division, and the invasion of our privacy to profile us and exploit our vulnerabilities for profit.
By not addressing the risks related to platform design and business models, Big Tech companies have ignored years' worth of independent research on this issue, including by members of @PeoplevsBigTech peoplevsbig.tech/category/fix-o…
Big Tech companies say they consult external stakeholders to assess & mitigate risks, yet it seems none of the 120+ orgs in @PeoplevsBigTech – including those that have published research on the systemic risks of social media platforms – were consulted. The DSA says they should be. Coincidence?
There are also gaps in the reports – e.g. Facebook failing to explain why it identifies some risks as low. Some reports don't go much beyond what was already public knowledge, with very little concrete data on metrics and effectiveness.
Although most Big Tech companies provide information on how they mitigate the risks they identify (some of which were suggested by civil society groups and experts), none of them provide information on how effective these mitigation measures are.
For example, Facebook cites the use of misinformation labels and YouTube highlights the 'Breaking News Shelf' feature as examples of mitigation measures, but we are not given any meaningful details about their effectiveness.
We need proof that these mitigation measures work (or don’t) so we can judge their effectiveness. So far, research has shown that social media platforms are designed to engage, enrage & addict, harming our mental health. If there's evidence to the contrary, we want to see it!
Civil society also needs to be meaningfully consulted about the systemic risks related to social media platforms. Having published lots of independent research on this issue, we know a thing or two about the risks 😉
Stay tuned for more detailed analysis on the reports in the next few weeks 📑 In the meantime, you can find some of our previous research related to the risks of social media platforms here & below: globalwitness.org/en/campaigns/d…
panoptykon.org/sites/default/…