So I've been stewing on the swirl around the @Facebook / NYU's Ad Observatory / @FTC issue for a few days, and it just keeps getting further under my skin. This latest news triggers me. A THREAD. washingtonpost.com/technology/202…
As prelude, I am a strong supporter of independent research on social media platforms. My org has funded seven figures+ of such research. I even support adversarial research and would support CFAA reform to enable it. (Reach out if you want to collaborate there!)
And I believe the team at NYU's Ad Observatory has been doing useful and careful research and I think their contribution has been important. I hope (and believe) they can continue their work.
But I was a lawyer at the FTC & know a good amount about when the FTC sues. There is a giant gap in the discussion. The press coverage missed it. Mozilla's post ignored it. FB's post didn't really explain it. Most egregiously, the FTC's recent statement avoids it. Here it is:
The NYU research program absolutely, incontrovertibly increased Facebook's legal risk. As a company under order at the FTC, that risk is even higher. As a company under order with a giant target on its back, that risk is significant. Here's why:
As I understand it, the AdObserver browser plug-in has access to all the content accessible to the user on the FB domain. NYU uses only a very limited subset of that information. They are very careful and privacy protective.
But as I heard NYU researcher @LauraEdelson2 quite rightly state on a Twitter Spaces discussion yesterday, all software has bugs. Their software has bugs. In fact, they made changes to the plugin in response to feedback from Mozilla.
So, a little thought experiment: If one of those bugs went bad or the plugin was misused and user data leaked, who do you think the FTC would go after? NYU researchers? Or FB? Academics, or the company that they already got a $5 billion settlement out of in a similar situation?
And the consent order that FB is under would *absolutely* enable the FTC to pursue an enforcement action against Facebook in such a case. This is true EVEN IF (and maybe ESPECIALLY IF) the FTC granted researchers an exemption for good faith research.
The @FTC recently claimed that if FB had asked ahead, FTC staff would have clarified that the order doesn't prevent FB from allowing good faith research. That's probably true but entirely beside the point. ftc.gov/news-events/bl…
The FTC could have reduced FB's legal risk by saying "Sure, allow this research; if researchers mess up we won't sue you." But the FTC (certainly not this FTC) would NEVER do this. Reporters (@viaCristiano, @GiladEdelman, @issielapowsky), ask Mr. Levine if you don't believe me.
I suppose the Ad Observatory also could have reduced Facebook's risk by entering a legal agreement indemnifying FB from any consequences if their use of data goes wrong. I wouldn't advise either side to enter such an agreement, but it'd be interesting to know if it was explored.
TL;DR: The simple fact is that the Ad Observatory research project increased FB's legal risk even without an FTC settlement, and the FTC settlement heightens that risk dramatically. So it is fully plausible to cite the FTC settlement as a reason for stopping this research.
You might think that FB should eat that risk (I'm sympathetic to this view) and should trust the NYU researchers (who again, seem very trustworthy), but you cannot pretend that there was zero legal risk to FB from allowing this research in this manner. /END
There is a new AI proposal from @aipolicyus. It should SLAM the Overton window shut.
It's the most authoritarian piece of tech legislation I've read in my entire policy career (and I've read some doozies).
Everything in the bill is aimed at creating a democratically unaccountable government jobs program for doomers who want to regulate math.
I mean, just check out this section, which in a mere six paragraphs attempts to route around any potential checks from Congress or the courts.
@aipolicyus The amount of bureaucracy this bill would unleash is staggering. The bill attempts to streamline some of this by providing a "Fast Track," but the main takeaway is how broad the categories of software likely to be subject to regulation are:
The proposal also allows the Administrator to require any applicant (including Fast Track applicants and open source applicants) to adopt "safety precautions," which is entirely open-ended. Not through a rule-making process or any sort of due-process-protecting mechanism, but simply as a condition of granting a permit!
This @FT op ed by Marietje Schaake pairs well with my op ed with @ckoopman. Keep Congress AND tech CEOs away from AI regulation. 😏
Not joking. A 🧵
Schaake is correct that CEOs have an interest in shaping regulation to benefit their business model. But legislation isn't the only way regulatory capture happens. All prescriptive regulation inherently favors incumbents b/c it is written for the present. 2/
Future, and especially disruptive, business models and technologies won't fit in those regulatory boxes. Such businesses face regulatory uncertainty PLUS established incumbents who speak the regulators' language. The FCC is a great example of this happening over and over. 3/
Starting ASAP, @elonmusk should require Twitter staff to record all requests for content moderation or user discipline from governments or government officials.
This info should be publicly released in periodic reports like the ones platforms do for law enforcement requests.
All other platforms should do this, too, btw.
Woah, this is doing numbers! I don’t have a SoundCloud ….
It's not surprising to see Parler attempting to use antitrust laws to force Big Tech back into doing business with them. Antitrust seems like everyone's fix-it tool these days. But Parler wields this tool particularly ineptly. /2
In discussion with my colleagues, we agree that no one should be surprised at the quick dismissal for failure to state a claim that Amazon has probably already drafted. /3