I read the new location tracking complaint against Google filed by three state AGs and DC. It shouldn’t be surprising to anyone who is familiar with Google, but it’s pretty detailed. Thread. 1/
The basic allegation is that Google (mainly via Android) made it extremely difficult to turn off location data collection, and when people *did* try to turn this off, Google still collected and used location data for advertising.
As described in the complaint, there are basically three ways Google can get your location: (1) via GPS, (2) by monitoring nearby Wi-Fi networks, (3) through your IP address. Even if you turn GPS off, Google can still use some of these. 2/
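To make the IP-address piece concrete, here's a toy sketch (the prefixes and locations are entirely made up) of how coarse IP geolocation works: match the address against a table of network prefixes with known rough locations. Real databases hold millions of entries and often get you to city level or better.

```python
import ipaddress

# Hypothetical GeoIP-style table: network prefix -> rough city-level location.
# Real geolocation databases contain millions of these mappings.
PREFIX_LOCATIONS = {
    "203.0.113.0/24": ("Baltimore, MD", 39.29, -76.61),
    "198.51.100.0/24": ("Seattle, WA", 47.61, -122.33),
}

def coarse_location(ip: str):
    """Return a rough location for an IP by matching it against known prefixes."""
    addr = ipaddress.ip_address(ip)
    for prefix, loc in PREFIX_LOCATIONS.items():
        if addr in ipaddress.ip_network(prefix):
            return loc
    return None

print(coarse_location("203.0.113.57"))  # ('Baltimore, MD', 39.29, -76.61)
```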
Once Google has your location information, the question is whether the user can stop them from recording it. As of 2018, Google seemed to make this possible through a Location History account setting. 3/
The Location History setting was described as “let[ting] Google save your location.” Presumably to ordinary non-technical users this language was about as clear as things get. According to the complaint, however, Google saved your location regardless of the setting. 4/
Specifically, Google has another “Web & App Activity” setting that also lets Google save your location. Because why have one setting when you can have many confusing ones? 5/
A brief interlude here to see what Google employees thought of these options. “[F]eels like it is designed to make things possible, but difficult enough that people won’t figure it out” is a solid quote. 6/
The complaint has a long section on “dark patterns” and this reads like a syllabus in a course on Silicon Valley privacy invasion. 7/
All the typical stuff: (1) presenting users with complicated opt-ins once at setup; (2) repeatedly “nudging” people who opt out; (3) rewording dialog boxes to be less specific and maximize engagement; (4) hinting that apps “need” location history to work. It goes on. 8/
The one area where I felt the complaint needed more detail was around the scanning of Wi-Fi networks. Even if you turn off GPS, companies like Google can determine your location by seeing which Wi-Fi networks are nearby. The complaint hints that Google does these scans even when you disable location. 9/
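For flavor, here's a toy sketch of how Wi-Fi-based geolocation works (the BSSIDs and coordinates are hypothetical): the device reports which access points it can see and how strongly, and a server with a big BSSID-to-location database estimates a position from that. No GPS involved.

```python
# Hypothetical database mapping Wi-Fi access-point BSSIDs to known positions,
# of the kind built up by fleets of devices reporting what they can see.
AP_POSITIONS = {
    "aa:bb:cc:dd:ee:01": (39.2904, -76.6122),
    "aa:bb:cc:dd:ee:02": (39.2910, -76.6130),
    "aa:bb:cc:dd:ee:03": (39.2898, -76.6115),
}

def estimate_position(scan):
    """Estimate device position from a Wi-Fi scan: {bssid: signal_strength_dbm}.
    Weight each known access point by received signal strength (stronger = closer)."""
    total_w = lat = lon = 0.0
    for bssid, rssi in scan.items():
        if bssid not in AP_POSITIONS:
            continue
        w = 10 ** (rssi / 10)          # convert dBm to a linear weight
        ap_lat, ap_lon = AP_POSITIONS[bssid]
        lat += w * ap_lat
        lon += w * ap_lon
        total_w += w
    return (lat / total_w, lon / total_w) if total_w else None

# A scan of nearby networks is enough to place the device.
print(estimate_position({"aa:bb:cc:dd:ee:01": -45, "aa:bb:cc:dd:ee:03": -70}))
```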
In fact, from context it feels like a lot of the redacted text in this document is about Wi-Fi geolocation. I hope future amended complaints get into the details. 10/
Final note: how did Google management feel about all of this? Was it all a big misunderstanding caused by good people trying hard not to be evil? Judge for yourself. 11/11 fin.
What is this new setting that sends photo data to Apple servers and why is it default “on” at the bottom of my settings screen?
I understand that it uses differential privacy and some fancy cryptography, but I would have loved to know what this is before it was deployed and turned on by default without my consent.
This seems to involve two separate components: one that builds an index using differential privacy (set at some budget), and another that does a homomorphic search?
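For anyone wondering what "set at some budget" means: the textbook example is the Laplace mechanism, where the noise added to each released value is calibrated to a privacy budget ε. The sketch below is just that textbook idea, not Apple's actual mechanism, which I haven't seen documented in enough detail to reproduce.

```python
import numpy as np

def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a value with Laplace noise scaled to sensitivity/epsilon.
    Smaller epsilon means more noise and stronger privacy, and each
    release spends part of the overall privacy budget."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g. some count a device contributes to an aggregate index
noisy_count = laplace_release(true_value=12, sensitivity=1.0, epsilon=0.5)
```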
Does this work well enough that I want it on? I don’t know. I wasn’t given the time to think about it.
Most of cryptography research is developing a really nice mental model for what’s possible and impossible in the field, so you can avoid wasting time on dead ends. But every now and then someone kicks down a door and blows up that intuition, which is the best kind of result.
One of the most surprising privacy results of the last 5 years is the LMW “doubly efficient PIR” paper. The basic idea is that I can load an item from a public database without the operator seeing which item I’m loading & without it having to touch every item in the DB each time.
Short background: Private Information Retrieval isn’t a new idea. It lets me load items from a (remote) public database without the operator learning what item I’m asking for. But traditionally there’s a *huge* performance hit for doing this.
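To see where that performance hit comes from, here's a toy version of the classic two-server PIR trick: the client sends one random bit-vector to one server, and the same vector with bit i flipped to the other. Each server XORs together the items its vector selects, and the client XORs the two answers to recover item i. Neither server alone learns i (assuming they don't collude), but notice that each server has to scan the *entire* database on every query. That linear cost is exactly what the doubly efficient result avoids.

```python
import secrets

def make_queries(n, i):
    """Client: two random-looking bit vectors that differ only at position i."""
    q_a = [secrets.randbelow(2) for _ in range(n)]
    q_b = list(q_a)
    q_b[i] ^= 1
    return q_a, q_b

def server_answer(db, query):
    """Server: XOR together every item whose query bit is set.
    Note the full linear scan -- every query touches the whole database."""
    acc = 0
    for item, bit in zip(db, query):
        if bit:
            acc ^= item
    return acc

# Toy database of small integer "items".
db = [0x11, 0x22, 0x33, 0x44, 0x55]
wanted = 3
q_a, q_b = make_queries(len(db), wanted)
recovered = server_answer(db, q_a) ^ server_answer(db, q_b)
assert recovered == db[wanted]
```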
The Chat Control regulation has been revived and is back on the table. It still appears to demand client-side scanning in encrypted messengers, but it removes “detection of new CSAM” and simply demands detection of known CSAM. However, it retains the option to change this requirement back.
For those who haven’t been paying attention, the EU Council and Commission have been relentlessly pushing a regulation that would break encryption. It died last year, but it’s back again — this time with Hungary in the driver’s seat. And the timelines are short.
The goal is to require all apps to scan messages for child sexual abuse content (at first: other types of content have been proposed, and will probably be added later). This is not possible for encrypted messengers without new technology that may break encryption.
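To be concrete about what "detection of known material" means in practice: the client checks each piece of media against a list of hashes of previously identified content, before encryption happens. Here's a minimal sketch using an exact SHA-256 match; real proposals use perceptual hashes so that re-encoded or lightly edited copies still match.

```python
import hashlib

# Hypothetical blocklist of digests of known, previously identified images.
# (Real proposals use perceptual hashes, not SHA-256, so near-duplicates match.)
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def flag_before_encryption(attachment: bytes) -> bool:
    """Client-side check run on the plaintext, before end-to-end encryption."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_HASHES
```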
One of the things we need to discuss is that LLMs listening to your conversations and phone calls, reading your texts and emails — this is all going to be normalized and inevitable within seven years.
In a very short timespan it’s going to be expected that your phone can answer questions about what you did or talked about recently, what restaurants you went to. More capability is going to drive more data access, and people will grant it.
I absolutely do believe that (at least initially), vendors will try to do this privately. The models will live on your device or, like Apple Intelligence, they’ll use some kind of secure outsourcing. It’ll be required for adoption.
I hope that the arrest of Pavel Durov does not lead to him or Telegram being held up as some hero of privacy. Telegram has consistently acted to collect huge amounts of unnecessary private data on their servers, and their only measure to protect it was “trust us.”
For years people begged them to roll out even rudimentary default encryption, and they pretty aggressively did nothing of the sort. Their response was to move their data centers to various Middle Eastern countries, and to argue that this made your data safe. Somehow.
Over the years I’ve heard dozens of theories about which nation-states were gaining access to that giant mousetrap full of data they’d built. I have no idea if any of those theories were true. Maybe none were, maybe they all were.
The TL;DR here is that Telegram has an optional end-to-end encryption mode that you have to turn on manually, and it only works for individual conversations, not for group chats. It's annoying enough to turn on (and invisible enough to most users) that I doubt many people do.
On paper this isn’t that big a deal, but Telegram’s decision to market itself as a secure messenger means that loads of people (and policymakers) probably assume that lots of its content is end-to-end encrypted. Why wouldn’t you?