People vs Big Tech
Dec 3
👀🔎 Last week Big Tech companies published their first reports of how they assess the systemic risks of their platforms to users under the EU’s Digital Services Act. Here’s what we’ve found so far: 🧵
The surveillance-based business model, toxic recommender systems and addictive design features – the core of how these tech giants operate – don't feature as sources of systemic risk.
Instead, the reports focus on the symptom (“bad” user behaviour, ineffective content moderation), ignoring the root cause: bad platform design.
This is despite overwhelming evidence that these systems cause harm, including the spread and amplification of hate, disinformation and division, and the invasion of our privacy to profile us and exploit our vulnerabilities for profit.
By not addressing the risks related to platform design and business models, Big Tech companies have ignored years-worth of independent research on this issue, including by members of @PeoplevsBigTech peoplevsbig.tech/category/fix-o…
Big Tech companies say they consult external stakeholders to assess & mitigate risks, but it seems none of the 120+ orgs in @PeoplevsBigTech, including those who have published research on the systemic risks of social media platforms, were consulted. The DSA says they should be. Coincidence?
There are also gaps in the reports – e.g. Facebook fails to explain why it rates some risks as low. Some reports don't go much beyond what was already public knowledge, with very little concrete data on metrics and effectiveness.
Although most Big Tech companies provide information on how they mitigate the risks they identify (some of which were suggested by civil society groups and experts), none of them provide information on how effective these mitigation measures are.
For example, Facebook cites the use of misinformation labels and YouTube highlights the 'Breaking News Shelf' feature as examples of mitigation measures, but we are not given any meaningful details about their effectiveness.
We need evidence on whether these mitigation measures work so we can judge their effectiveness. So far, research has shown that social media platforms are designed to engage, enrage & addict, harming our mental health. If there's evidence to the contrary, we want to see it!
Civil society also needs to be meaningfully consulted about the systemic risks related to social media platforms. Having published lots of independent research on this issue, we know a thing or two about the risks 😉
Stay tuned for more detailed analysis on the reports in the next few weeks 📑 In the meantime, you can find some of our previous research related to the risks of social media platforms here & below: globalwitness.org/en/campaigns/d…

panoptykon.org/sites/default/…
