Jokes aside, though, as engineers we regularly deal with complex systems that can be difficult for our users to understand. Having a hard time explaining how they work is one thing, but regardless of your position on this technology, @Apple’s messaging has been unacceptable.
Their reluctance to clearly describe how the software works, and their seeming inability to be straightforward about the fact that it fundamentally detects CSAM using filters they control and uploads it to them, are very concerning. This isn’t how you inspire trust.
“Encrypted”, “on device”, and “hashed” are not magic words that automatically grant privacy. You can’t say “nothing is learned about the content on the device” if you can take the vouchers it sends you and decrypt them, even if you are “sure” they are CSAM. That’s just incorrect.
Being better “compared to the industry standard way” does not mean the technology is automatically “private”. And when you say you’re better than the industry standard from the perspective of being auditable, don’t put yourself in a position where that claim can’t be verified.
If you’re going to engage with criticism, the least you can do is interpret it in good faith. Just because people usually bring up Tank Man doesn’t mean “it seems to be the case that people agree U.S. law doesn’t offer these kinds of capabilities to our government”.
Privacy is a set of tradeoffs. Detecting crime is a set of tradeoffs. There’s certainly a lot that technology can do to improve the balance! But disregarding criticism while misrepresenting your solution is a bad way to earn legitimate credibility.
@pcwalton @iSH_app While constrained slightly differently (OSR requirements, usually tiered up to a JIT, etc.), JavaScript engine bytecodes are probably what you want to look at. Some certainly look a bit like what you’ve described.
@pcwalton @iSH_app For @iSH_app, compatibility is very important, so the frontend is probably always going to accept an ISA with actual software compiled against it (and, soon, we’ll be limited to ones supported by Linux).
@pcwalton @iSH_app We emulate x86 on x86_64/arm64, so the interpreter is at least able to pin all the registers: github.com/ish-app/ish/bl…. We get that “for free”. The instructions are still annoying to decode, though, so we cache those.
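The decode cache is essentially the classic “decode once, reuse by guest address” pattern. Here’s a minimal sketch of the idea in Swift for readability; the types and names are hypothetical, and iSH’s actual implementation is in C and differs in detail:

```swift
// Hypothetical sketch of caching decoded guest instructions by address.
// Not iSH's code: the real decoder and cache live in C inside the project.
struct DecodedInstruction {
    let opcode: UInt16
    let length: Int          // bytes consumed from the guest instruction stream
    let operands: [UInt64]
}

final class DecodeCache {
    private var entries: [UInt64: DecodedInstruction] = [:]  // keyed by guest PC

    // x86 decoding is expensive, so decode each address once and reuse it.
    func fetch(at guestPC: UInt64,
               decode: (UInt64) -> DecodedInstruction) -> DecodedInstruction {
        if let hit = entries[guestPC] { return hit }
        let decoded = decode(guestPC)
        entries[guestPC] = decoded
        return decoded
    }

    // Self-modifying code or remapped memory must invalidate affected entries.
    func invalidate(_ range: Range<UInt64>) {
        entries = entries.filter { !range.contains($0.key) }
    }
}
```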
The news is just as shocking today as it was three months ago. How we deal with mental health is still something we need to learn to handle better, together. This is especially true in settings like open source that are nominally technical but in reality heavily social.
That being said, in situations like these it’s especially important to remember that people’s mental state is often the result of complex circumstances that may be entirely out of your control. You can be talking to someone about programming one day and they’ll be gone the next.
I have nothing to suggest except having empathy whenever and wherever possible. You can’t fix everyone’s situation, but you can certainly do your best to provide support from your side. We might not have a magic solution for dealing with mental health but we know that this helps.
It seems like it’s about that time of year when people try to impute meaning to Apple’s marketing version numbers and use them to draw conclusions about internal development processes. Here’s a thread to demystify them to the best of my knowledge (I’d be glad to hear corrections!)
First, a couple of clarifications so we can talk about this consistently: if x is the major version, y the minor and z the patch, then for iOS/watchOS/tvOS, the marketing version number is of the form x.y.z. For macOS pre-Big Sur, it’s 10.x.y; it now seems to be x.y.z.
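To make that notation concrete, here’s a tiny illustrative Swift sketch; the type and its fields are mine, purely for exposition of the convention above:

```swift
// Purely illustrative: how the marketing string relates to (x, y, z)
// under the convention described above.
struct MarketingVersion {
    let major: Int  // x
    let minor: Int  // y
    let patch: Int  // z

    func string(macOSPreBigSur: Bool) -> String {
        if macOSPreBigSur {
            // macOS before Big Sur keeps a fixed leading "10": 10.x.y
            // e.g. Catalina's "10.15.7" reads as major 15, minor 7.
            return "10.\(major).\(minor)"
        }
        // iOS/watchOS/tvOS, and macOS from Big Sur onward: x.y.z
        return "\(major).\(minor).\(patch)"
    }
}
```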
Myth #1: marketing minor versions and patch versions match internal development or have some other useful meaning.
AFAIK, these are mostly decided arbitrarily, which is why I call them “marketing”. The major version matters (it’s updated yearly), but the others just increment by chance.
iOS 14 gives users the ability to control which photos they would like to share with apps, even when an app requests blanket permission. @googlephotos specifically detects this and locks the user out until they grant full access. I am surprised and outraged that this shipped.
I didn’t even know the API to detect this *existed* before I noticed Google Photos using it (it’s developer.apple.com/documentation/…). It’s so easy to abuse that I can’t comprehend how it was added alongside the other photos changes, which were designed to be transparent to apps.
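For reference, the check itself is tiny, which is part of why it’s so easy to abuse. A minimal Swift sketch using the Photos framework’s iOS 14 authorization API (my reconstruction of the kind of call involved, since the link above is truncated):

```swift
import Photos

// Minimal sketch: on iOS 14+ an app can tell that the user granted only
// limited (per-photo) access and, as Google Photos does, gate on it.
func photoAccessDescription() -> String {
    switch PHPhotoLibrary.authorizationStatus(for: .readWrite) {
    case .authorized:
        return "full library access"
    case .limited:
        // The user chose "Select Photos…"; the app only sees those assets.
        return "limited access to a user-chosen subset"
    case .denied, .restricted:
        return "no access"
    case .notDetermined:
        return "not requested yet"
    @unknown default:
        return "unknown"
    }
}
```

The point of the limited mode is that the app still receives real assets, so a well-behaved app never needs to branch on this status at all.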
Perhaps the designers felt that @AppStore Review would catch misuse; Google is certainly violating section 5.1.1 clause (iv):