Sen. Coons begins hearing on algorithms by saying he and Sasse don't have a specific legislative agenda: "Ranking member Sasse and I plan to use this hearing as an opportunity to learn."
Sasse also strikes a conciliatory tone. "It's too easy in DC for us to take any complicated issue and reduce it immediately to heroes and villains and whatever the regulatory or legislative pre-determined tool was to then slam it down on the newly to be defined problem."
Seems clear Coons and Sasse are trying to position this as the "serious" hearing, discouraging the usual partisan finger-pointing.
People, we're potentially looking at two legit and informative tech hearings in a row. Could it be? Reserving judgment.
Bickert says keeping people on Facebook with polarizing content, only to ultimately lose them because of the ugliness of that content, would be "self-defeating." "It is not in our interest, financially or reputationally, to push people towards increasingly extreme content."
Veitch of YouTube says limiting recs of borderline content led to a "70% drop in watch time of such content from non-subscribed recommendations in the US that year."
From non-subscribed recommendations in the US is a heckuva qualifier.
Tristan Harris now talking about rabbit holes and echo chambers. "So long as that is the promise with personalization, we are each going to be steered into a different rabbit hole of reality."
He calls this a "psychological deranging process."
Harvard's @BostonJoan makes an important point at the outset: "Whatever policy ends up coming from the US will undoubtedly become the default settings for the rest of the world."
Donovan describes what drives people into rabbit holes online, riffing on YouTube's "four Rs" scheme: Repetition, redundancy, responsiveness, reinforcement
Coons is asking why social media companies don't use "virality circuit breakers" to ensure content that's "blowing up" gets a review by a human being.
Donovan says: "It has to be part of the business process … to seek out content that is … out of skew."
Bickert says virality is already "a signal" as to whether Facebook should proactively review a piece of content, then pivots to talking about the fact-checking process.
Coons asks why Facebook wouldn't have these virality circuit breakers at work all of the time. Bickert says Facebook uses them in circumstances where there's "extreme and finite, in terms of date, risk, such as an election in a country that's going through civil unrest."
Coons asks whether engineers get pay incentives for time spent on a platform. Basically all the witnesses are saying no, they look at responsible growth, health of the platform, etc. etc.
But...
Tristan Harris contradicts Bickert. "I think there was a brief experimentation at Facebook with non-engagement based performance incentives for social impact, but that those have largely gone away."
Says metrics like sessions, 7-day active users, and growth are still the focus.
Sasse asks Bickert what Harris is missing. Bickert cites FB's transition to elevating "meaningful social interactions" in 2018. Says, "It led to people spending 10s of millions of fewer hours on Facebook every day."
"People are pretty good at short-term rage, and the product capitalizes on that doesn’t it?" Sasse says.
I think this is the right way to frame it. A lot of what we're talking about today are human instincts which are rewarded by social media.
"I'm a liberal arts, lawyer, not nearly as tech savvy as I should be for this hearing." Durbin says.
Lucky for him, there are no actual engineers from the companies testifying today anyway.
It strikes me as problematic that Tristan Harris is getting most of the questions here, considering he hasn't worked at any of these companies in the last 5 years, during which they've been doing most of the work on extremism and content moderation.
But the other witnesses won't say things like "attention vampires."
Klobuchar, who led a useful antitrust hearing with Apple and Google last week, is now asking how market power "exacerbates problems of disinformation, extremist content and bias."
Relatedly: Does *anyone* talk faster than Tristan Harris? My fingers don't move that fast, man.
Harris answers Klobuchar's question by saying that, say, WhatsApp couldn't choose to spend billions more on content moderation because they're under FB's wing.
But, the opposite is also true: would smaller companies have the resources to invest as much as FB is today?
Klobuchar asks whether FB is supportive of the Cantwell privacy bill. Bickert won't say.
Klobuchar asks whether FB supports giving people access to data "including what data is used in social media company algorithms."
Bickert ducks that, talking instead about the access FB does offer.
Kennedy is weirdly playing hardball with Tristan Harris and Joan Donovan for not saying yes or no on whether they'd vote for a bill that revokes Section 230 protection for platforms that optimize for engagement.
Then he won't let Joan speak when she tries to elaborate.
Ossoff starts with a very unrelated question about whether Facebook will "embark on further acquisitions of competitor services."
Bickert gives a very unrelated answer about transparency.
Ossoff is asking antitrust questions like he wandered into the wrong Zoom room.
It is a rookie move for Senators to ask FB a question about "selling data." Even if you think it's a dunk because Facebook sells ads built on data, you're just giving the witness an offramp from the question, which they always take.
Bickert to Ossoff: "We don't sell user data."
I said I was reserving judgment, but I've seen enough: we did not, in fact, get two useful tech hearings in a row.
Eureka, a good question from Coons: "If a video ends up getting taken down by YouTube for violating its content policies...could YouTube commit today to providing more transparency about your recommendation algorithm and its impacts."
Veitch calls it "interesting," but ducks.
Coons asks all the company witnesses whether their companies require employees to sign NDAs. Bickert says (!) she doesn't know.
You cannot walk in the doors at Facebook without being asked to sign an NDA.
Coons ends with this: "I think we have to approach these challenging and complex issues with both humility and urgency, the stakes demand nothing less."
And, adjourned. Not sure I learned much.
• • •
NEW: Facebook's attempt to shutter research at NYU on political ads is just the most extreme example of the increasingly fraught relationship between platforms and academics.
While reporting, Facebook told me Ad Observer violates their terms by scraping and publishing data from users who didn't consent.
That claim shocked me until I realized: the users Facebook was talking about were advertisers whose ads and Pages are already public protocol.com/nyu-facebook-r…
I pushed until they confirmed they were talking about advertisers' public Pages, not private users' accounts.
Had I published their statement outright, it would have been misleading, or at least incomplete and damaging to the NYU researchers. protocol.com/nyu-facebook-r…
It reminded me of how much tech cos control the narrative around researchers' intent.
The most glaring example of this was the vilification of the Cambridge Psychometrics Centre after the Cambridge Analytica scandal, which I covered for WIRED here: wired.com/story/the-man-…
If Senators actually do their jobs during this hearing, we could get answers to critical questions about the efficacy of Facebook's and Twitter's election defenses.
And we're off. In opening remarks, Graham asks: "If you're not a newspaper at Twitter or Facebook, then why do you have editorial control over the New York Post?"
Note: Not republishing something from the NY Post is not the same as having editorial control over the NY Post.
Some rational thinking from Graham: "I don't want the government to take over the job of telling America what tweets are legitimate and what are not."
Lawmakers from 9 countries are questioning Facebook's Richard Allan right now in London. First up is Canada's Charlie Angus, who's going in on the fact that Zuckerberg didn't show up like they wanted him to. He condemns the "frat boy billionaires" in California upending global democracy.
"You have lost the trust of the international community to self-police." - Angus of Canada
Background: This should be an interesting day. Last week, the British MPs seized a cache of internal FB documents that are part of a legal case in California and were ordered sealed. The docs allegedly back up accusations of Facebook exploiting user data and engaging in anticompetitive practices.
NEW: The House Democrats' trove of Russia-linked Facebook ads contained ads targeting suspicious Chrome extensions at teenage girls. The extensions gained wide access to users' browsing behavior and Facebook accounts. h/t @d1gi for spotting wired.com/story/russia-f…
The landing page for the ads where users could install the extension was registered in April 2016 in St. Petersburg, Russia. The ads went live in May. By June, people were already complaining about how the extension had spammed all their Facebook friends wired.com/story/russia-f…
Google confirmed it had removed the extension from the Chrome store and from users' devices. Unclear how many people downloaded the extension from the Facebook ads. The ads only got a little over 80 clicks. wired.com/story/russia-f…
A researcher with lots of foresight scraped 5 million political ads on Facebook during the six weeks before the 2016 election. She found that half of the advertisers had absolutely no federal records or online footprint. Of that half, 1 in 6 were Russian trolls. wired.com/story/russian-…
These "suspicious" advertisers predominantly targeted voters in swing states like Wisconsin and Pennsylvania. She also found that white voters received 87 percent of all immigration ads. wired.com/story/russian-…
She found that the advertisers that were not required to file any disclaimers or disclosures with the FEC ran 4 times as many of these divisive ads as advertisers that did have to file with the FEC: wired.com/story/russian-…