Imagine you are in charge of security for the Pentagon's web portals. You control a specific website that both external contractors and internal staff access.

One day, you wake up and discover a Chrome extension that claims to "support your users" with XYZ features you didn't build 🧵
To make matters worse, you discover that dozens of your users installed the extension within days of its release, and you learn that the developer has been paying the extension store to promote this dangerous extension across search and video sites.
Now, what do you do? Do you convene an internal audit of the extension and try to break its unsafe features? Do you demand the extension store take it down? Contact the developer? Do you warn your users, or disable their accounts?
Imagine if you woke up every day and dozens of new "browser extensions" were approved that targeted your users and domain -- and you had minimal clarity on the legal entities behind them, their privacy policies, or the accuracy of those policies. Is this safe?
Now imagine it has been 3 days since you discovered an unknown extension developer targeting your site and users with "advanced options" -- and 100 users already have it. The advertising promoting the extension is scaling up and is targeted at Virginia contractors.
Every day you don't act, more users compromise their own browsers via this extension and send valuable data to an external developer -- who could be selling the data harvested from your domain to threat actors.
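To make the "side channel" concrete: an extension with a matching content-script pattern runs the developer's code inside every page of your portal and can read whatever the user sees. A minimal, hypothetical Manifest V3 sketch -- every name and domain below is invented for illustration:

```json
{
  "manifest_version": 3,
  "name": "Hypothetical Portal Helper",
  "version": "1.0",
  "content_scripts": [{
    "matches": ["https://*.your-portal.example/*"],
    "js": ["reader.js"]
  }],
  "host_permissions": ["https://collector.extension-dev.example/*"]
}
```

Nothing here looks malicious on its face -- but the `matches` pattern alone is enough to place third-party code inside every page your users load, and `host_permissions` lets the extension ship what it reads to a server the developer controls.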

Can a domain owner control side channels advertised to their users as features?
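Only partially. One detection signal, assuming your servers log request headers: cross-origin requests initiated from an extension's background context typically carry an `Origin` header with an extension scheme (e.g. `chrome-extension://…`), which a domain owner can flag. A minimal sketch -- the log format and field names here are assumptions, not any real server's schema:

```python
# Sketch: flag logged requests whose Origin header identifies a browser
# extension. Assumes each log entry is a dict with an "origin" field.
EXTENSION_SCHEMES = (
    "chrome-extension://",
    "moz-extension://",
    "safari-web-extension://",
)

def flag_extension_requests(log_entries):
    """Return the entries whose Origin header points at a browser extension."""
    flagged = []
    for entry in log_entries:
        origin = entry.get("origin") or ""
        if origin.startswith(EXTENSION_SCHEMES):
            flagged.append(entry)
    return flagged

if __name__ == "__main__":
    sample = [
        {"path": "/api/users", "origin": "https://portal.example.gov"},
        {"path": "/api/users", "origin": "chrome-extension://abcdefghijklmnop"},
    ]
    for hit in flag_extension_requests(sample):
        print(hit["path"], hit["origin"])
```

This only catches traffic that hits your own servers; data an extension reads from the DOM and ships directly to the developer's collector never touches your logs, which is exactly why it is a side channel.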
If you are the Data Controller responsible for a domain and the safety and security of its users, and a separate corporation runs a "product" and revenue stream that consists of approving unsafe data side channels and then taking money to market them to your users, how can you not act?
In the future, I believe that if a company owns a browser marketplace that allows developers to build "products" that side-channel user data (especially social network data), the company **approving those extensions** should be treated as a joint controller for all transfers via that extension.
In conclusion: if you are a joint controller of an illegal user-data ingestion pipeline, and you also take money via a separate product to market that side channel to users, you are creating risk for businesses and organizations all over the world while padding your pockets.

Thread by Zach Edwards (@thezedwards)
