Out of meeting with digital minister @cj_dinenage and tech cos on #onlineharms and transparency.
A few thoughts on this: first, transparency is the easy bit; we all agree we need evidence.
However, making transparency generate comparability is really hard, or in practice impossible. Platforms are different, practices are different, users are different.
What we really need is *evidence*, including neutrally generated, independent academic (standard) evidence.
This extends to looking at platform systems and algorithms.
There is an objection that academics take too long and are behind the game. Maybe — but academic research has been critical to @OpenRightsGroup's understanding of many issues and frankly is often *ahead* of the policy curve.
There's also a tendency to manipulate evidence among groups with special interests (especially but not exclusively commercial ones). This often has to be exposed and refuted with robust academic evidence.
Also: @ukhomeoffice needs to get the police, such as CTIRU at @metpoliceuk fully transparent and accountable, because today they are *not*.
The UK government needs to uphold the same standards that it demands of companies.
But very few of the police forces and regulators that suspend domains have a public policy on suspensions. As far as we know, there is no oversight mechanism.
In a similar vein, here is information about the Counter-Terrorism Internet Referral Unit, which makes bulk takedown requests. Other than the number of requests, what do we know about its work? #onlineharms
Basically, we know very little about government content removal. @DCMS is demanding a lot from companies, but it needs to hold government departments to account as well.
We have around a dozen de facto government content censors, all operating without transparency.