Jesse Noller
Distributed systems engineer, Python programmer, writer, hopeless science nerd, mycologist, naturalist, dog lover, dad, engineering leader, idiot philosopher.
Nov 18, 2022 7 tweets 2 min read
1. There will not be an equivalent to this site for some time
2. Network effects and data gravity matter
3. Most social media has skewed to the visual/video market, something not aligned with a large swath of users
4. Nothing user / community managed will hit critical mass
5. Volunteer and open orgs are already crushed by oligarchs and their companies in basic competition
6. There’s too much money in user data and misinformation to get another social network aligned to the public good
Dec 3, 2020 17 tweets 3 min read
The IOPS issue(s): the sheer scope of it across the systems I saw in customer reports, from internal and external users, and in testing. The systemic use of network-attached disks was, and is, crippling the fleet. This did me in.

But to know that, you'd have to operate Linux.

An example of this is clusters that wouldn't even boot due to IO throttling and IOPS starvation. Or nodes that would randomly go offline.

Actual debugging on the command line was a bridge too far for all of engineering. After all, they're pets, who cares.
Dec 2, 2020 12 tweets 2 min read
I don't know how much of this I can "get into" - honestly I've been on the fence, but call this working through it.

TW: suicide.

My name is Jesse - and on November 13th I attempted suicide. I used a combination of drugs and, well, cleaner, to take my own life. I’m lucky to be alive, lucky I’m functional. Lucky for a lot of things. I don't feel lucky, but intellectually, I know it.

What happened? Well, a lot of things, but I’d start with almost a decade of isolation and loneliness. Getting my ADHD treated and losing my identity
Nov 8, 2019 22 tweets 3 min read
Here's my quick and dirty Kubernetes issue diagnosis, a thread:

1. Random latency talking over network
A: Check disk IO on the host; you’re probably exceeding the IO limits on the OS disks. I bet it's disk.

2. My cluster goes down during an upgrade
A: Set a pod disruption budget (see the PodDisruptionBudget sketch at the end of this thread).

3. My containers are running slow or not well balanced and causing weird latency issues.

A: Check your CPU and memory requests and limits in your YAML. Confirm they are correct for your app; the back pressure from throttling at the container level can crash your app (see the resources sketch at the end of this thread).
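
For point 2, a minimal PodDisruptionBudget sketch, assuming a Deployment whose pods carry the label app: my-app; the name, namespace, and minAvailable value are illustrative, not from the thread:

# Keeps at least 2 pods available during voluntary disruptions
# such as node drains triggered by a cluster upgrade.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb          # hypothetical name
  namespace: default
spec:
  minAvailable: 2           # or use maxUnavailable instead
  selector:
    matchLabels:
      app: my-app           # must match the pods you want to protect

Apply it with kubectl apply -f; a drain during an upgrade will then wait rather than evict pods below minAvailable.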
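
For point 3, a sketch of container resource requests and limits on a simple Deployment, assuming placeholder names, image, and CPU/memory values chosen only to show the shape of the config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0     # placeholder image
          resources:
            requests:
              cpu: 250m         # what the scheduler reserves for the pod
              memory: 256Mi
            limits:
              cpu: "1"          # container is CPU-throttled above this
              memory: 512Mi     # container is OOM-killed above this

If the limits sit far below what the app actually needs, CPU throttling and memory OOM kills show up as exactly the kind of random latency and restarts described above.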