Count me among those who consider this "a false distinction." I think there's a flaw in this argument. Eric does a brief steel man that he steps past, but one I think has weight: engagement is required to create scale. Scale is required to make FB successful...
I think this is a good post, and it links to good supporting arguments, so it's worth doing the same thing he did and trying to understand his argument first:
1. FB doesn't do surveillance marketing, therefore surveillance marketing is a myth.
Ok. So let's set aside the first part of the claim and hit the 2nd part. Is only Facebook doing the behavior that is commonly associated with surveillance marketing? ...
Well no. Obviously many other platforms do various types of ad targeting and the entire IAB UID2 proposal is specifically about tracking your personal data and personal movement around the web...
I'm sure other people can and have done a better job of this than I, but to be brief, the current regime of 3rd party cookies includes "fingerprinting" which uses a mix of your given identifiers (email) and probable identifiers (browser, OS, window size, etc...)...
This fingerprint is used to associate you with your specific data and activity across the web in many many venues (which I will admit, may not include FB!). You are *literally* being surveilled across the web and that data is being used for advertising...
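To make the mechanism concrete, here's a minimal sketch (in Python, with made-up signal names) of how a handful of weak identifiers can be combined into one stable fingerprint; real fingerprinting code collects far more signals than this:

```python
import hashlib

def fingerprint(signals: dict) -> str:
    """Combine weak identifiers into a single stable hash.

    `signals` is a hypothetical bag of the kinds of values the thread
    mentions (browser, OS, window size, etc.); the field names here
    are illustrative, not from any real fingerprinting library.
    """
    # Sort keys so the same set of signals always hashes the same way,
    # regardless of collection order.
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two visits to *different* sites with the same browser profile yield
# the same ID, letting unrelated parties join activity to one "person".
visit_a = fingerprint({"browser": "Firefox 98", "os": "macOS", "window": "1440x900"})
visit_b = fingerprint({"window": "1440x900", "browser": "Firefox 98", "os": "macOS"})
assert visit_a == visit_b
```

The point of the sketch: no single signal identifies you, but the combination is stable enough to follow you across contexts without a cookie.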
And that doesn't even dig into your phone network selling your data, GPS level data, blue-tooth beacons, etc... and you could say that the *use* of this data is to target groups, not individuals... but the data to do so is built on surveillance.
So I disagree with the claim that only FB is responsible for "surveillance advertising."
2. Facebook doesn't do surveillance advertising because Facebook's real use for personal data is engagement, and an entirely different dataset is used to drive ad logic.
Ok. I agree with the 2nd part of the statement to some extent. Yes. Facebook's use of engagement algorithms is predicated on a different understanding and use of user data than ad targeting. Are these separate datasets? Do ad preferences not drive engagement feed choices?...
I mean... we don't really know. Maybe the lookalike audiences that advertisers build feed into Facebook's construction of the social feed? Maybe they don't. I don't really see the grounds on which we can confirm this claim...
The idea that there is any sort of wall, even a paper one, between the two datasets (advertiser-provided user data vs volunteered to FB user data) *within Facebook* seems impossible to prove. Could both be used to train engagement algorithms? Maybe! But...
As is already stated in the article, Facebook benefits from and uses information about what products users buy on advertisers' properties. I don't really believe that the "purchasing information" advertisers hand over is never used for personalizing a user's interactions w/the feed.
So I have to say I have serious doubts about this 2nd claim that engagement with Facebook posts is independent from advertising engagement and that these two data sets are irrelevant to each other. This is *further* confused by the issue of how Facebook (and many others!) do ads.
The native in-feed format of many social network ad systems means that engagement with the feed of user-generated content is *essential* to the success of their ads business. You *cannot have* one success without the other.
This gets us to the next claim:
3. Surveillance marketing is a myth because "the data that is used for targeting ads isn’t actually harvested on the social media properties themselves (eg. the Facebook app) but rather on third-party properties".
This actually leans heavily on claim 1's implicit assumption: that only Facebook can do surveillance advertising. But, we *know* that isn't true. So are the 3rd party properties doing surveillance advertising (tracking individuals' actions in multiple contexts for the purpose of ads)?
And... the implied 2nd part of this claim that has to be involved in answering the question is: does FB facilitate the collection of that data, and does FB benefit from it and from the continuation of its collection as an industry standard? Ok...
So: claim 3a that what advertisers are doing is to "pass conversion data back to ad platforms, and those ad platform partners use algorithms to optimize ad targeting" and that isn't surveillance...
Well... isn't clicking on a thing, having that click tracked, recorded, and used to associate a click event with an individual's personal data set ("Aram has clicked on a laptop ad and purchased the laptop") surveillance? I would argue... uhhh, yes!
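A toy model of that flow, with entirely hypothetical names (this mirrors the general shape of click tracking plus conversion postbacks, not any real platform's API):

```python
# Platform side (hypothetical): one log of ad clicks, one of profiles.
click_log = {}   # user_id -> last ad clicked
profiles = {}    # user_id -> accumulated identity/ad/purchase events

def record_click(user_id, ad_id):
    """Platform records that this user clicked this ad."""
    click_log[user_id] = ad_id

def postback_conversion(user_id, purchase):
    """Advertiser reports the purchase back to the platform, which can
    now join 'clicked ad X' with 'bought item Y' under one identity."""
    profiles.setdefault(user_id, []).append(
        {"ad": click_log.get(user_id), "purchase": purchase}
    )

record_click("aram", "laptop-ad-17")
postback_conversion("aram", "laptop")
# profiles["aram"] now ties identity, ad exposure, and purchase together.
```

The joining step in `postback_conversion` is the part I'd call surveillance: two parties now share a persistent record linking one person to both exposure and behavior.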
Worse, now 2 parties hold this association between my personal identity AND advertising activity. The advertiser, who has my very personal purchasing data including credit card, address, and full name. And also now Facebook, who has associated that activity w/my account.
So now we get to claim 3b: does sharing that activity with Facebook create an advertising marketplace based on surveillance?
I'm going to start by noting the usual argument that gets brought up: 'you used to buy stuff & then get junk mail from the store, this is no different!'
Ok, well... predicating your defense on junk mail, a thing the US gov't definitely regulates and tries to eliminate, ain't a great look. Even if sharing your data with Facebook is the same as advertisers sharing your data with a mailing company, it is bad! But!...
I don't think anyone is arguing that junk mail is surveillance (just shitty). So the question is: what differentiates sending this data to a mailing service vs sending it to Facebook? Well... quite a few things, actually...
Major ones:
Time: the communication between Facebook and an advertiser is real-time.
Precision: the data is less generalized than "household"; it enables targeting down to the individual.
Use: what Facebook does with it is a persistent, continual process.
We've already addressed the idea that these two datasets (what you've told FB about you VS what advertisers told FB about you) are separate as likely wrong but... let's say they are for a moment. The scale of FB *specifically* and its nature as a web platform causes issues.
See, we know from incidents like algorithmic redlining among FB advertisers on real estate ads that demographic distinctions at this precision can be bad, but what is new is how the scale of FB creates a specific barrier to access in this regard...
FB's scale, and how it connects and *confirms* marketer assumptions thru its *specific 1st party data connecting to targeting efforts*, creates a particularly impermeable barrier. Users are restricted from opportunities by ad targeting precision, and that leads to content barriers.
These restrictions create a bad feedback loop for ad access that feeds into content access. In this case the problem is not how the ads surveil, but how that surveillance restricts access to specific ads and drives users toward specific content...
Now, if we had a *robust* network of sites that present content and ads, this would be a lot less of an issue. But the nature of digital advertising as a marketplace makes competition with FB & G increasingly difficult, which means the barriers they activate are harder to bypass.
See... even if, tho it may be very unlikely, FB doesn't mix the "advertiser-volunteered data" with the "user-volunteered data", the system it creates builds obstacles for users that are enforced by the surveillance advertising that the advertiser-volunteered data creates.
When Anna Eshoo said "Your model has a cost to society. The most engaging posts are often those that induce fear, anxiety, anger and that includes deadly, deadly misinformation" this is the feedback loop to which she was referring. The ads and the content are *connected*.
And this gets us to an unstated but underlying claim that is *very common* in marketing circles:
4. The venue in which advertising is presented is not connected to the advertising.
But it *is*, especially when ads are in native feed-based formats. The connection is the user.
And we know this connection is present because brands and platforms are both constantly worried about it: activists create change by showcasing that connection, media outlets constantly try to argue against it, and advertisers are deathly afraid of it. branded.substack.com/p/heres-what-y…
See, the surveillance of individuals that leads to more precise ad targeting is telling them a story about themselves, their expectations, and their associations. It doesn't have to be accurate, or even successful, to do so, but it does so; otherwise, why have ads in the first place?...
So even if the 3rd party data that is sent to Facebook does nothing but make ad targeting more precise, the scale of Facebook means that targeting enables this influence and limitation on users and that leads to changes in their 1st party behavior.
Then let's take claim 5, which I think is the most interesting:
5. Facebook does not benefit from the advertisers' surveillance of users that drives advertiser targeting choices, except in the form of providing a venue for ads.
I think this is interesting because it skips over how Facebook has an entire other feedback loop besides the user feed...
See, Facebook's really profitable feedback loop isn't for its users... it's for advertisers! ...
There is the reality and there is the story FB tells, and that gap is a separate issue, but I think to really talk about this we need to start from the assumption that Facebook's claims about its advertising system's efficacy are correct. Even if they may not be.
So Facebook's usual claim here is that precision in targeting is good for advertisers because it allows them to target only their intended audience and not waste money on ads being shown uselessly to people who are outside of their target audience...
And uhhhh Maybe! But that's not the only question here...
See Facebook's profit is driven not by high ad prices, but high ad *efficiency*. The technical nature of this is effectively explained by another good post by Eric: mobiledevmemo.com/facebook-may-t…
The link to this in the top post states "the flow of conversion data is severed between advertisers and ad platforms [...] has no impact on an ad platform’s ability to optimize engagement on its owned-and-operated properties, and vice versa."
Ah... but it does! See... Facebook's most profitable feedback loop is that it incentivizes advertisers to do *more surveillance* to increase the precision of ad buying on Facebook, to increase efficiency, with the reward of lower ad prices.
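A back-of-the-envelope illustration of why precision gets rewarded with lower *effective* prices (the numbers are made up; the arithmetic is the point):

```python
def cost_per_conversion(cpm: float, conversion_rate: float) -> float:
    """Effective cost per conversion: price per 1,000 impressions
    divided by expected conversions per 1,000 impressions."""
    return cpm / (conversion_rate * 1000)

# Same nominal ad price ($5 CPM) in both cases; only targeting differs.
broad   = cost_per_conversion(cpm=5.0, conversion_rate=0.001)  # 0.1% convert
precise = cost_per_conversion(cpm=5.0, conversion_rate=0.01)   # 1% convert

# 10x the precision -> roughly 1/10th the effective cost per sale,
# so every extra bit of tracking data pays the advertiser back.
assert precise < broad
```

That's the whole incentive in two lines of arithmetic: the platform doesn't need to cut its sticker price at all for better surveillance to feel like a discount.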
And this is the issue with claim 5. Facebook absolutely benefits from surveillance advertising by 3rd parties, because it rewards it with lower prices, encourages more of it, and uses that precision to free up more scale to run more ads...
And this takes us to the final big claim:
6. 1st party advertiser data is where whatever surveillance, if any, is occurring, and its transmission to 3rd parties is irrelevant...
In-post this is stated in the conclusion: "If legislators want to ban or severely restrict the capability of ads platforms to provide targeted advertising, then that’s mostly an effort that needs to be directed at advertisers, which generate the data that is used to target ads."
I 100% agree with this statement. Congress should be talking to ad tech middlemen and agencies. But the implication in this context is that social platforms' role in this is incidental. And I really cannot disagree more strongly...
Let's come back to the first claim. Does Facebook itself do surveillance?
Yes. FB's pixel means it can track you across many contexts. An SDK in your apps means it has data about your activities, keyed to device ID, w/o you ever opening Facebook's app. Individual measurement of ad clicks cross-site for efficiency is enabled by technology that it provides.
Advertisers may *volunteer* to give user data away. Facebook's technology enables and in many cases creates a flow of data from advertisers to FB's systems. This is not something Facebook alone does. And yes advertisers are activating the data transfer. But FB is enabling it.
See claim 2: Facebook's matching of this advertiser-provided data *has* to in some way be linked to Facebook's engagement-driving 1st party data, even if just on ID. Otherwise they can't match users. Which means Facebook's massive user base and pageview scale is required for success.
So now we see the problem if claim 3 is disproved. The connection between my personal identity, my advertiser-side identity, and Facebook's version of my identity is required to make this system work, and that means Facebook provides the means for surveillance advertising to be a success.
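For illustration, here's roughly how hashed-identifier matching of a custom-audience-style upload works in general. This is a sketch of the common normalize-then-hash pattern, not any specific platform's implementation:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    # The usual pattern: strip whitespace, lowercase, then hash, so
    # "Aram@Example.com " and "aram@example.com" match. Illustrative
    # only; real systems define their own exact normalization rules.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Platform side (hypothetical): hashed emails of its own users,
# mapped to internal account IDs -- its 1st party data.
platform_users = {normalize_and_hash("aram@example.com"): "fb-user-123"}

# Advertiser side: its customer list, hashed before upload.
advertiser_list = [normalize_and_hash("Aram@Example.com ")]

# The match only works because the platform can join the advertiser's
# identifiers against identities it already holds -- the link the
# thread is talking about.
matched = [platform_users[h] for h in advertiser_list if h in platform_users]
```

Hashing hides the raw email in transit, but it doesn't prevent the join; the whole product depends on that join succeeding at scale.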
I agree with Eric strongly in another respect: Facebook is not literally selling user data, there are no databases of all a user's FB activity up on the market. I also hate how inaccurate reporting has misled people in that respect...
But that isn't the definition for surveillance marketing or the critique people who use the term intend to reference. See claims 3b and 4. The precision of the targeting creates a feedback loop for users which is negative for them, but which FB rewards advertisers for.
And in both cases, that loop is only enabled by scale! If nothing about the ad ecosystem changed except Facebook disappeared tomorrow, the value of individual targeting would shift b/c in *every other venue* w/o FB's scale, individual targeting drives prices up, not down.
It is the massive scale of FB (not just them of course, but their position is what the article is predicated on, so I'm making a somewhat unfair example of them here) that creates the value of this precision targeting and pushes advertisers towards greater tracking of users...
And it is only Facebook's scale that makes their first-party data, when it connects with advertiser data, such an inescapable reinforcement of barriers that, as Eshoo stated, "has a cost to society". And that encourages the engagement loops Kelly refers to...
See, Facebook would not be as bad for democracy if it wasn't driven by advertising and advertising wouldn't be as bad for democracy if it wasn't for big social platforms like Facebook. These two are inextricably intertwined.
Eric states that the W3C "have decreed that first-party data is privacy compliant". And like... no. First the W3C doesn't decree anything. Please sit in on any call and see that mostly we spend our time disagreeing...
But also, the difference between the "content fortress" that a 1st party environment creates (if we escape UID2) and the current 3p flows FB enables is that something like algorithmic redlining becomes significantly easier for a user to escape.
Because now, users are new to each site they encounter. And that's a big difference! A new opportunity in each context to potentially build a different version of yourself that gets different ads.
This is why regulation aimed at big ad tech companies is so important! The core problem, the privacy-invading one, is that big social platforms have a symbiotic relationship with highly invasive advertiser ad tech, and that creates a bad relationship with users.
We *should* regulate targeted advertising, but part of that process *has* to be handling the way particular platforms incentivize it. And Facebook is as good a place to start as any in that regard...
Whatever wall might exist between ad datasets and feed-engagement ones isn't relevant to the actual problem. The critique of surveillance advertising is trying to handle a problem of enormous scale, the type that enables technological "innovation" in targeting and rewards tracking...
How people identify it and how less technically informed people talk about "selling user data" is often not particularly accurate. And yeah, that SUCKS and is confusing and makes these discussions VERY difficult...
But the core problem that the term "surveillance advertising" is talking about, a dystopian endless surveillance of individuals, flowing through and target-able in the ad tech marketplace, to create market efficiency... that problem is as real as real gets.
Anyway... I'm not as good a writer as Eric is, so please excuse any grammatical errors. As always, this is my personal opinion at play, not representative of any larger entity. And I encourage and will retweet any good challenges to my assumptions!
The goal here is to create a more precise and effective discussion and dialogue around this issue by really making clear what the problems are, and what's at stake, rather than relying on some rather unclear assumptions floating around the general discussion. Thanks!
I appreciate your reading this looooong thread if you made it this far! As a reward, may I recommend this more philosophic discussion of the dangers of individual data collection, which I rather like:
(Also, I like everything Eric writes, his work is excellent, well put together and very informative. If it wasn't a well-written piece, I wouldn't have bothered spending this much time on it. None of this is a personal dig at him, just trying to make clear my disagreement)
(*Footnote: While Facebook does not sell user data directly in big db dumps, other entities do! The practice of selling and joining "2nd party data" is quite prevalent in the digital marketing space and should at least be examined by regulators as well. aws.amazon.com/marketplace/se… )