“iPhone Remains Findable After Power Off.” What? I can’t keep up anymore.
So I guess “power off” doesn’t mean “off” anymore: it means the device stays on and does some kind of low-power near-field communication. I’m trying to decide how I feel about this.
The off switch is buried in the “Find My” settings dialog, weirdly in a tab called “Find My Network” which might make you think it’s intended to… find your network… but actually I think this is some kind of branding gone wrong.
I wonder what the attack surface of their “powered off you can only find the phone” mode looks like. I hope it doesn’t use weird exploitable SSL libraries that haven’t been updated since 2012.
In other news I updated my phone to iOS 15 and put it down to charge last night. When I woke up it was hot, and my battery has gone from 100% to 15% since 7:30am. I gotta get off this ecosystem.
Wow, this thread somehow inspired an insane comment thread on HN which is 50% people saying they’ve known about this feature for a year and only an idiot would be surprised by it, 50% people expressing surprise that the feature even exists. news.ycombinator.com/item?id=286929…
For the record (inspired by the many excellent comments on HN) I have no specific beef with this feature: I’d just like to know how it works. I think a proper explanation of it would be security-relevant and I would expect to see something about it in the iOS Security Guide.
A little bird told me the phone writes a series of pre-computed cryptographic beacons to the UWB chipset, but little birds are no substitute for official documentation.
Wow, ok! This post does some proper reverse engineering and shows that the “Always On Processor” interfaces with the Bluetooth chip to implement this functionality. Great to have an answer. naehrdine.blogspot.com/2021/09/always…
My tweet above (two higher in the thread) was apparently wrong. The Find My keys get exported to the Bluetooth chipset. I still wonder how exploitable the whole mess is while the phone is off. Should we care?
Uh, yeah.
Ok, update: the Find My beacons are spooled out to storage, rather than the keys themselves. Which presumably are safe in the SEP. Thanks @naehrdine for the second look. (Also anyone who cares about Apple RE should follow @naehrdine)
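To make the “spooled-out beacons” idea above concrete: the general pattern is that the phone precomputes a batch of one-time broadcast identifiers before shutdown and hands only those to the radio chip, while the secret that generates them never leaves the secure element. This is a generic, hedged sketch of that pattern, not Apple’s actual construction (Find My reportedly uses rotating elliptic-curve public keys); all names here are made up for illustration.

```python
import hashlib
import hmac

def precompute_beacons(seed: bytes, count: int) -> list[bytes]:
    """Derive a forward-only chain of beacon identifiers from a seed.

    Each beacon is an HMAC over the current chain state, and the state
    is then ratcheted forward with a hash, so the exported list reveals
    nothing about the seed or about beacons beyond the batch.
    """
    state = seed
    beacons = []
    for _ in range(count):
        # Identifier broadcast over BLE during one time slot.
        beacons.append(hmac.new(state, b"beacon", hashlib.sha256).digest()[:16])
        # Ratchet the state so earlier states can't be recovered from it.
        state = hashlib.sha256(state + b"ratchet").digest()
    return beacons

# E.g. export 24 hours of beacons (one per 15 minutes) to the Bluetooth
# chip before powering down, keeping the seed itself inside the SEP.
daily = precompute_beacons(b"secret-seed-held-in-sep", 96)
assert len(daily) == 96 and len(set(daily)) == 96  # all distinct
```

The security-relevant property is that compromising the (powered-on-while-“off”) Bluetooth chip yields only a finite batch of already-public identifiers, never the long-term key.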
I really do think context is important here. Some of these age verification laws are based on good-faith concerns. But a lot of them are really designed to censor big chunks of the Internet, making them less accessible to both kids and adults.
If you’re thinking that some kind of privacy-preserving age verification system is the answer, that’s great! But you need to make sure your goals (easy access for adults, real privacy, no risk of credentials being stolen) actually overlap with the legislators’ goals.
These systems have loads of sharp edges, and even if you do a perfect job you’re already going to chill access to sites that require age verification. But of course *nobody* comes close to getting it right. For example: 404media.co/id-verificatio…
I want to agree with the idea that mass scanning “breaks encryption” but I think the entire question is a category error. Any law that installs surveillance software directly on your phone isn’t “breaking” or “not breaking” encryption, it’s doing exactly what it promises to do.
For decades we (in the west) had no mass surveillance of any communications. Starting in the 2010s some folks came up with the idea of scanning for illicit content like CSAM uploaded in plaintext on servers. (With apparently relatively little effect on the overall problem.)
I don’t think many people realize how new and unproven this scanning tech is: they just assume it’s always been there and that it works. It really hasn’t: it’s only a few years old, and it doesn’t seem to have had any noticeable impact on the sharing of CSAM.
So Apple has introduced a new system called “Private Cloud Compute” that allows your phone to offload complex (typically AI) tasks to specialized secure devices in the cloud. I’m still trying to work out what I think about this. So here’s a thread. 1/
Apple, unlike most other mobile providers, has traditionally done a lot of processing on-device. For example, all of the machine learning and OCR text recognition on Photos is done right on your device. 2/
The problem is that while modern phone “neural” hardware is improving, it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers. 3/
Some folks are discussing what it means to be a “secure encrypted messaging app.” I think a lot of this discussion is shallow and in bad faith, but let’s talk about it a bit. Here’s a thread. 1/
First: the most critical element that (good) secure messengers protect is the content of your conversations in flight. This is usually done with end-to-end encryption. Messengers like Signal, WhatsApp, Matrix etc. encrypt this data using keys that only the end-devices know. 2/
Encrypting the content of your conversations, preferably by default, is “table stakes.” It isn’t perfect, but it’s required for a messenger even to flirt with the word “secure.” But security and privacy are hard, deep problems. Solving encrypted messaging is just the start. 3/
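The “keys that only the end-devices know” property above is worth seeing in miniature. Here is a deliberately toy sketch, assuming classic Diffie-Hellman over a 127-bit prime and a hash-based XOR keystream; both are far too weak for real use (real messengers use X25519 and an AEAD cipher), but the shape is the same: the relay server only ever sees public values and ciphertext.

```python
import hashlib
import secrets

# Toy parameters: 2**127 - 1 is prime, but this group is much too
# small to be secure. Illustration only.
P = 2**127 - 1
G = 3

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream: SHA-256(key || counter), truncated."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

# Each device keeps a secret and publishes only G**secret mod P.
a = secrets.randbelow(P - 3) + 2
b = secrets.randbelow(P - 3) + 2
pub_a, pub_b = pow(G, a, P), pow(G, b, P)

# Both ends derive the same key; the server relaying pub_a/pub_b
# and ciphertext cannot compute it.
k_a = hashlib.sha256(pow(pub_b, a, P).to_bytes(16, "big")).digest()
k_b = hashlib.sha256(pow(pub_a, b, P).to_bytes(16, "big")).digest()
assert k_a == k_b

msg = b"meet at noon"
ciphertext = bytes(x ^ y for x, y in zip(msg, keystream(k_a, len(msg))))
# The recipient reverses the XOR with the same derived key.
plaintext = bytes(x ^ y for x, y in zip(ciphertext, keystream(k_b, len(ciphertext))))
assert plaintext == msg
```

Everything the server handles (the two public values and the ciphertext) is useless without one of the device-held secrets, which is the whole point of end-to-end encryption.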
Several people have suggested that the EU’s mandatory chat scanning proposal was dead. In fact it seems that Belgium has resurrected it in a “compromise” and many EU member states are positive. There’s a real chance this becomes law. dropbox.com/scl/fi/9w611f2…
The basic idea of this proposal is to scan private (and encrypted) messages for child sexual abuse material. This now means just images and videos. Previous versions also included text and audio, but the new proposal has for the moment set that aside, because it was too creepy.
Previous versions of this idea ran into opposition from some EU member states. Apparently these modest changes have been enough to bring France and Poland around. Because “compromise”.
Telegram has launched a pretty intense campaign to malign Signal as insecure, with assistance from Elon Musk. The goal seems to be to get activists to switch away from encrypted Signal to mostly-unencrypted Telegram. I want to talk about this a bit. 1/
First things first, Signal Protocol, the cryptography behind Signal (also used in WhatsApp and several other messengers) is open source and has been intensively reviewed by cryptographers. When it comes to cryptography, this is pretty much the gold standard. 2/
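One reason reviewers rate Signal Protocol so highly is its ratcheting key schedule: every message gets a fresh key, and old keys are deleted, so compromising a device today doesn’t expose yesterday’s messages. The full double ratchet also mixes in fresh Diffie-Hellman outputs; the minimal sketch below shows only the symmetric hash-ratchet step, using the HMAC-with-constant-byte derivation described in the public Double Ratchet specification.

```python
import hashlib
import hmac

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key, then advance the chain key.

    Deleting each chain key after stepping gives forward secrecy:
    later states can't be run backwards to recover earlier message keys.
    """
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

# Start from a root secret established by the initial key agreement
# (name made up for illustration).
ck = hashlib.sha256(b"shared-root-from-key-agreement").digest()
message_keys = []
for _ in range(3):
    mk, ck = ratchet(ck)  # old ck is overwritten, i.e. "deleted"
    message_keys.append(mk)

assert len(set(message_keys)) == 3  # every message gets a distinct key
```

Because the derivation is one-way, an attacker who steals the current chain key can only move forward, never back, which is exactly the property a default-off, server-side-plaintext design like Telegram’s cloud chats cannot offer.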
Telegram by contrast does not end-to-end encrypt conversations by default. Unless you manually start an encrypted “Secret Chat”, all of your data is visible on the Telegram server. Given who uses Telegram, this server is probably a magnet for intelligence services. 3/