What's the big data-privacy/security problem with the EU's techlash legislative proposals (#DSA, #DMA)?
🧵 A quick big picture explainer...
First, a general lesson on legal regulation. Law is not magic. Just because a law seems to require something (e.g. because it expressly says it does), it doesn't mean that (1) this thing will actually happen or (2) the law actually requires it.
Just because a law says: "do this risky thing in a secure way", it doesn't mean that this law *really* allows doing that risky thing in a secure way. Wait, what?
Take, for example, an interoperability mandate that would say "Facebook has to allow two-way exchange of user data with any social network if the user agrees, provided that this is done in a way that secures user privacy and security". What could go wrong?
The devil is in the details. What would Facebook be allowed to do to ensure privacy/security under this law? Could Facebook require any platform that wants to interoperate to first undergo a security audit, proving a level of security close to Facebook's own?
If Facebook's / Google's / Apple's level of internal data security is unachievable for "two guys in a basement" startups (or even much bigger ones) - what should the standard be? And how could users give informed consent to it if all they know and expect is the highest standard?
There are already serious voices arguing that big tech should not be allowed to put "unreasonable" requirements on third parties who would be receiving user data, in the name of competition etc. And vague wording in a law can easily be interpreted in that direction by courts and bureaucrats.
And what about the standard of user "consent"? Should Facebook instantaneously transfer all user data just after the user clicks "ok" in a consent box? Would an e-mail confirmation suffice? Can such a risk realistically be imposed on users? ("consent fatigue"!)
Many well-meaning smaller developers focus on the benefits they think such rules could give them. But legislators need to think about how bad and unreliable actors will benefit.
Russian criminals can easily rent French servers and pretend to operate an "EU social network" while using various "dark patterns" to get user consent to syphon data from Facebook. This is part of the threat model. Strangely, it is not part of the legislative debate.
One last thing: the law should not assume unrealistic levels of user interest and skill. To say that we can just leave threat assessment to ordinary users is a recipe for a thousand much bigger Cambridge Analyticas.
More on that in my Stanford paper: law.stanford.edu/publications/n…
General words in a law will be interpreted by people, in a political environment, where privacy and security concerns will not always be treated seriously. This should give some pause to those who think that the DMA/DSA will not make us less safe simply because the texts seem to say so.
🚨 Now in @ModernLRev (open access): bad statistics led the government to conclude that 'Cart' immigration judicial reviews are ineffective. My computational answer shows how such research should be done and that the govt's conclusion was wrong.
🧵 TLDR in the thread below
The government relied on two analyses. The first came from the Faulks (IRAL) report. In it, the authors made two very basic (and indefensible) errors:
(1) They looked for evidence of successful Cart JRs in the wrong database, which only included a small sample of relevant cases /1
(Hence, they unsurprisingly found only a small number of 'successful' cases.)
(2) They then compared this small number with the total number of all Cart JRs — even though they only looked for successes in a small sample!!! This is how they ended up with the 0.22% ratio. /2
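To make the denominator mismatch concrete, here is a minimal sketch in Python with made-up numbers (all figures below are hypothetical, not the ones from the report or the paper): when the numerator is drawn from a small sample but the denominator covers the whole population, the resulting "success rate" is bound to look tiny.

```python
# Hypothetical numbers for illustration only (the real figures are in the paper).
total_cart_jrs = 5_500        # denominator they used: all Cart JRs lodged
sample_size = 300             # cases actually covered by the database searched
successes_in_sample = 12      # 'successful' Cart JRs found within that sample

# The flawed approach: numerator from a small sample, denominator from the whole population.
flawed_rate = successes_in_sample / total_cart_jrs
print(f"Flawed success rate: {flawed_rate:.2%}")          # misleadingly tiny

# A consistent approach: keep numerator and denominator on the same base.
within_sample_rate = successes_in_sample / sample_size
print(f"Within-sample success rate: {within_sample_rate:.2%}")
```

With these hypothetical inputs the flawed calculation reports roughly 0.2%, while the within-sample rate is an order of magnitude higher, which is exactly the kind of distortion the denominator mismatch produces.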