I am so tired of watching people juice their numbers by boosting short-term growth while causing attrition in places that aren't tracked–spelling disaster in the long term. As an industry, we have really failed at creating "sustainable metrics".
The funny thing is that we all kind of already know exactly where these blind spots are: if you are measuring engagement, it's easy to boost that by giving people things that keep them addicted to the platform. So you want to counterbalance by measuring conversation health.
If you're looking at new signups: you can easily boost this by putting a thing in everyone's face to ask for that. Obviously, you want to make sure that the people who sign up actually stick around, rather than just making a throwaway account because you forced it on them.
"The metrics went up" is often used as a way to justify non-sustainable, user-hostile features. When that happens, it can be hard to go against a "data-driven" decision. Make sure to ask which metrics they're *not* tracking–frequently those paint a different picture.
Just patched yet another project to delete a “-Werror” just so it would build on my computer and I think I’ve finally come to the conclusion that we have *really* failed at explaining why compiler warnings exist to an entire segment of programmers. We need to fix this.
The problem has gotten so bad that some of these people are *working on compilers* right now! They are literally encoding these assumptions into programming languages millions of people use. The productivity cost is real–if you use Swift or Go, you’re already living through it.
As a concept, warnings are pretty simple! They’re not that hard to understand if you approach them from the bottom up. The problems arise when you look at them in the opposite direction, without really understanding why they exist. Here, I can go through them now:
Jokes aside, though, as engineers we regularly deal with complex systems that can be difficult for our users to understand. Having a hard time explaining how they work is one thing, but regardless of your position on this technology, @Apple’s messaging has been unacceptable.
Their reluctance to clearly describe how the software works, their seeming inability to be straightforward about the fact that it fundamentally detects CSAM using filters that they control and uploads it to them, is very concerning. This isn’t how you inspire trust.
@pcwalton@iSH_app While constrained slightly differently (OSR requirements, usually tiered up to a JIT, etc.) JavaScript engine bytecodes are probably what you want to look at. Some certainly look a bit like what you’ve described.
@pcwalton@iSH_app For @iSH_app compatibility is very important, so the frontend is probably always going to accept an ISA with actual software compiled against it (and, soon, we’ll be limited to ones supported by Linux).
@pcwalton@iSH_app We emulate x86 on x86_64/arm64, so the interpreter is able to pin all the registers: github.com/ish-app/ish/bl…. We get that “for free”, at least. The instructions are still annoying to decode, though, so we cache those.
The news is just as shocking today as it was three months ago. How we deal with mental health is still something we need to get better at together. This is especially true in situations such as open source that are nominally technical but in reality heavily social.
That being said, in situations like these it’s especially important to remember that people’s mental state is often the result of complex circumstances that may be entirely out of your control. You can be talking to someone about programming one day and they’ll be gone the next.
I have nothing to suggest except having empathy whenever and wherever possible. You can’t fix everyone’s situation, but you can certainly do your best to provide support from your side. We might not have a magic solution for dealing with mental health but we know that this helps.
It seems like it’s about that time of year when people try to impute meaning to Apple’s marketing version numbers and use them to draw conclusions about internal development processes. Here’s a thread to demystify them to the best of my knowledge (would be glad to hear corrections!)
First, a couple of clarifications so we can talk about this consistently: if x is the major version, y the minor and z the patch, then for iOS/watchOS/tvOS, the marketing version number is of the form x.y.z. For macOS pre-Big Sur, it’s 10.x.y; it now seems to be x.y.z.
Myth #1: marketing minor versions and patch versions match internal development or have some other useful meaning.
AFAIK, these are mostly decided arbitrarily, which is why I call them “marketing”. The major version matters (it’s bumped yearly) but the others just seem to change by chance.