Public-interest researcher | Tech+society. Tracking, surveillance, consumer data, platform power, algorithmic decisions, data at work | @email@example.com
Nov 14 • 30 tweets • 14 min read
In our new report on RTB (real-time bidding) as a security threat, we reveal 'Patternz', a previously unreported private mass surveillance system that harvests digital advertising data on behalf of 'national security agencies'.
5 billion user profiles, data from 87 adtech firms. Thread:
'Patternz' in the report by @johnnyryan and me published today:
Patternz is operated by a company based in Israel and/or Singapore. I came across it some time ago and received internal docs. Two docs are available online.
sociallydetermined.com, a 'social risk intelligence platform' that provides digital profiles of named individuals regarding financial strain, food insecurity, housing instability etc. for healthcare purposes.
Incredibly intrusive; horrifying that this can exist in the US. sociallydetermined.com
"It calculates risk scores for each risk domain for each person", according to the promotional video, and offers "clarity and granularity for the entire US".
Not redlining, though. They color it green.
Oct 13 • 18 tweets • 9 min read
New WSJ report found that 'Near', a consumer data broker based in India, Singapore and the US with an office in France, obtained massive location data via digital advertising firms like OpenX, Smaato and AdColony and sold it to US defense/intel agencies: wsj.com/tech/cybersecu…
Near's general counsel and chief privacy officer:
The US govt "gets our illegal EU data twice per day", a "massive illegal data dump".
"We sell geolocation data for which we do not have consent to do so", "we sell data outside the EU for which we do not have consent to do so"
Sep 22 • 10 tweets • 3 min read
Yesterday, I published a case study that examines enterprise software for process mining, workflow automation and algorithmic management.
I identified a list of mechanisms that involve personal data processing and can affect workers individually (right) or collectively (center).
I guess hardly anyone has ever examined this kind of software at such a level of detail, from a worker perspective.
The case study explores how employers can exploit worker data based on enterprise software docs. The chart is an excerpt from section 7: crackedlabs.org/en/data-work/p…
Jun 29 • 28 tweets • 10 min read
NEW by me: "Monitoring Work and Automating Task Allocation in Retail and Hospitality"
A case study on software systems and technologies for worker surveillance, performance monitoring and algorithmic management in retail stores, restaurants and hotels: crackedlabs.org/en/data-work/p…
…the second in a series of case studies, part of a larger project that aims to map how companies use personal data on (and against) workers in Europe, led by Cracked Labs together with @algorithmwatch, @JeremiasPrassl and @UNI_Europa, funded by the Austrian @Arbeiterkammer.
Jun 9 • 12 tweets • 6 min read
In this thread, I want to share some additional details about what the file from Xandr/Microsoft, which was reported yesterday (themarkup.org/privacy/2023/0…), reveals about how hundreds of consumer data brokers trade personal information on billions of people at a global level.⬇️
The file, dated 2021, describes 650,000 'segments', most of which are lists of IDs referring to people with certain characteristics. The lists are sold via the 'data marketplace' of Xandr, now a Microsoft company, for ad targeting.
The file reveals 93 distinct 'data providers'.
Jun 8 • 24 tweets • 14 min read
A while back, I stumbled upon a file I consider the largest piece of evidence revealing how hundreds of data brokers trade personal data on everyone, including very sensitive data, globally.
In-depth report by @reedalexander on JPMorgan Chase's "Workforce Activity Data Utility", a near-total surveillance system that observes what hundreds of thousands of employees are doing at work, from communication to desks to app usage, for many purposes: businessinsider.com/jpmorgan-chase…
The article mentions data from badge swipes, desk attendance, MS365/Outlook/Excel, Zoom, Citrix and Blackberry phones; reports about individual workers; and many vague purposes, from compliance to safety to 'business efficiency', which seem to creep into employment decisions.
May 17 • 21 tweets • 9 min read
Just saw this in my Android Chrome.
'Got it' and everything 'on' by default.
It's depressing that we let Google with its $280bn surveillance business and extreme infrastructural power unilaterally push its new 'privacy'-branded profiling tech directly into the dominant browser.
We let Google turn the web, mobile and other digital services into spaces with ubiquitous tracking and profiling. We let them delay the long overdue end of 3rd party cookies and advertising IDs forever.
And now we let them impose their replacement profiling tech on Chrome users.
May 17 • 18 tweets • 5 min read
The French data protection authority CNIL fined the French health website doctissimo.fr €280k for GDPR infringements plus €100k for ePrivacy infringements.
This is good, but a few comments. cnil.fr/en/health-data…
It's good they took action against a web publisher, which European regulators rarely do.
And the fine may represent a considerable share of the site's revenue (according to internet sources). It's still not much for Reworld Media, though, which appears to own the site.
May 3 • 32 tweets • 13 min read
Largely unknown to the wider public, some of the world's biggest employers are so-called 'business process outsourcing' firms.
They run call centers and provide everything from sales and customer service to back-office work and content moderation, employing several hundred thousand workers each.
The French outsourcing giant Teleperformance, for example, employs 420,000 people across 88 countries, many of them working from home.
Publications like this recently published "AI Index Report" (Stanford, Google, OpenAI, Microsoft, McKinsey) shape industrial policy.
Key 'AI' investment areas identified by them: healthcare, data/cloud, finance, cybersecurity, retail, industrial automation. Chatbots not so much.
Of course there's also marketing and multimedia content, and I guess this crazy LLM hype will make money flow hard. Nevertheless, media debates on 'AI' seem to miss a lot.
Anyway, because such reports affect policy, it is also interesting to see what counts as an 'AI' investment.
Apr 5 • 8 tweets • 3 min read
In 2019, the Czech antivirus/cybersecurity firm Avast was caught selling browsing data on millions to data brokers.
I did not hear about any real consequences. Instead, as I just learned, Avast was acquired by Gen Digital (formerly NortonLifeLock/Symantec) for $8 billion in 2021.
So you can do the worst thing a cybersecurity firm can do, secretly sell consumer data, and instead of facing harsh regulatory measures, being shut down, or at least having your reputation reduced to zero, you get rewarded with $8 billion.
This digital economy is broken.
Apr 4 • 8 tweets • 3 min read
T-Mobile US, a data broker partly owned by Deutsche Telekom and, indirectly, by the German government, now boasts that it commercially exploits "billions of data signals" on 50m households, 110m customers and 230m devices, covering how they use apps, "what they do, where they go, and what they buy".
T-Mobile US also claims to have "35+ industry leading, vetted data partners" (see screenshot above), which most likely means that T-Mobile US is re-selling personal information from dozens of other data brokers.
Mar 8 • 12 tweets • 5 min read
Putting processing that is "necessary" for "direct marketing" as a valid legitimate interest directly into Article 6 of the UK's post-Brexit GDPR reform, which has been officially "co-designed with business", really looks disastrous (irrespective of, and in combination with, the other changes).
And the phrase "The Secretary of State may" appears 84 times oh my 🥴
...not to mention the identifiability stuff, the further-processing/purpose stuff, the "recognized legitimate interests" stuff, the "records of processing" stuff, the SAR firewall etc. publications.parliament.uk/pa/bills/cbill…
Mar 7 • 6 tweets • 2 min read
I came across a system that predicts sales of retail workers, i.e. employee performance, based on gender, age, disability status, language and other attributes.
Q: Would it be lawful for a US employer to make any kind of decision that affects workers based on these predictions?
As I understand it, it would be illegal to make hiring decisions based on a model that uses input variables such as gender, age, disability, language (proxy).
Would it also be illegal to make decisions about, e.g., shift allocation or the type of work assigned to an employee?
Mar 5 • 5 tweets • 2 min read
A part of the adtech industry, which has long been harvesting user data, extracting value and misleading everyone, is now posing as an angry populist movement claiming to defend the small-business guy against 'privacy extremists', 'academics' and 'elites': startedwithatweet.substack.com/p/notes-on-pri…
"The [adtech] middlemen are flying high, taking a cut of the ad tech tax, and desperately afraid that even the smallest amount of regulation might reveal the majority of the more than 10k ad tech companies are built on top of extremely unstable sand."
The more I dive into worker surveillance, the more I realize the long history of trying to exert control over outsourced workers while avoiding legally becoming their employer, from subcontracting to franchising. Today's platform work looks a lot less 'disruptive' in this light.
Good summary in this 2017 paper by Deepa Das Acevedo, focusing on the US, "Invisible Bosses for Invisible Workers, or Why the Sharing Economy is Actually Minimally Disruptive": chicagounbound.uchicago.edu/cgi/viewconten…
Mar 1 • 5 tweets • 2 min read
FB/Meta rebranded its automated ad products as 'Meta Advantage', including ML-based targeting and automated testing of image/media/text variants on people to find the versions that 'perform' best: facebook.com/business/help/…
FB has been offering automated ad experiments since at least 2017, possibly earlier (2016?).