GeePaw Hill
Aug 18, 2019
Upstream Uptime #3: Making Local-Runnable Services The Norm

I recently wrote about upstream-centric architectures and how we have to alter our *making* when we adopt them. A key alteration: change the definition of "deploy" to include "local-runnable".
(The first article is on the blog: geepawhill.org/upstream-uptim… The second will be up soon.)
In that long list of problems I encountered in a real upstream-centric app I worked with a few years ago, a great many of the first-round woes come from one simple fact: "The upstream I'm coding against lives somewhere else."
Network outages, rebuild blockages, VPN passwords, version stability, and dataset sharing between apps all serve to create situations where a geek *wants* to work on her code but *can't*, for reasons that are entirely out of her own, her team's, and even her manager's control.
And every one of these happens because the upstream is "out there" rather than "right here".
The specific recommendation: when a team deploys a new build, it deploys it in part as a locally-runnable app that any in-house developer can readily fire up and get running on their own box.
Whether new upstream builds come automatically straight from HEAD, which is the best approach, or are feature-branched and externally QA'd and signed off on by big people and blah-blah-blah, when the button is pushed, we get an app that runs.
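To make "deploy includes local-runnable" concrete, here's a minimal sketch of what a post-build step might look like. Everything in it is invented for illustration: the bundle layout, the file names, the idea that the service is a single jar plus a seed dataset. The point is only that the button that produces the prod artifact also produces a run-it-on-your-own-box artifact.

```python
# Hypothetical CI post-build step: every upstream build also emits a
# "local-runnable" bundle -- the built service, a small seed dataset, and a
# one-command launcher. All names here are illustrative, not a real pipeline.
import zipfile
from pathlib import Path

def make_local_bundle(build_dir: Path, out: Path) -> Path:
    """Zip the freshly built service with everything needed to run it locally."""
    bundle = out / "upstream-local.zip"
    with zipfile.ZipFile(bundle, "w") as zf:
        zf.write(build_dir / "service.jar", "service.jar")        # the build itself
        zf.write(build_dir / "seed-data.json", "seed-data.json")  # rich-not-big data
        # a launcher so "get it running on my box" is one command, not a wiki page
        zf.writestr("run-local.sh",
                    "#!/bin/sh\njava -jar service.jar --data seed-data.json\n")
    return bundle
```

The details will vary wildly by stack; what doesn't vary is that the downstream geek downloads one thing and runs one command.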
There are a host of potential objections to this idea, and plenty of possible variant responses. Most of them come down to two things: 1) That's too hard. 2) That's not servicing customers.
"That's not servicing customers." This is just one more case of valuing made over making. Here's the answer: We service customers by changing code, so if we can't change code, we're not servicing customers anyway, and remote upstreams routinely prevent us from changing code.
"That's too hard." Here we have to spread out a little, because what it is that's too hard about it varies, so the specific response has to vary. OTOH, the generic answer is just this: "It's too hard because we haven't ever tried to make it easy."
We should consider some of the major variations on what makes local-runnable upstreams hard, and look at specific approaches to resolving them. *BUT*, before we even go there, we need to be clear.
If it's too hard for us to make local-runnable services, we have to make that easy *before* we start expecting downstream teams to be productive. Making that easy has to become #1 priority before we start laying out a plan that counts on downstreams working in parallel with us.
How much you pay a geek? How many geeks in your downstream team? How many hours a day are you willing to pay them for blind debugging of a remote app they don't know and don't own and don't control? How many hours a day you willing to pay them to *sit* *there* and *wait*?
(Sorry. I get upset about this kinda stuff sometimes.)
Okay, variation #1 in the it's-too-hard-theme: it's too hard because the prod dataset is 87 petabytes.

Answer: it's not the size of the prod dataset that matters, it's the schema and its enforcement and the variant cases.
Every variant I've ever solved, I've solved for all of its instances. If that's not true, your problem isn't computable at all. (If the problem is "give this dataset integrity", that's a whole-team problem that must be tackled at the root, with a project dedicated to it.)
Downstreams don't need the production dataset. They need a dataset that is sufficiently rich and interesting to contain every problem they're going to have to solve. That's all.
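A sketch of that "rich, not big" idea: instead of copying the 87-petabyte prod dataset, enumerate the variant cases the downstream must handle and emit one row for each. The Customer fields and the particular variants below are invented for illustration; your domain's list will differ.

```python
# Rich-not-big seed data: one instance per variant case the downstream has to
# solve, instead of a copy of prod. Fields and variants invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    id: int
    email: Optional[str]   # some prod rows genuinely have no email
    country: str
    suspended: bool

def seed_dataset() -> list:
    """Every problem the downstream will face, represented once. That's all."""
    return [
        Customer(1, "a@example.com", "US", False),   # the happy path
        Customer(2, None,            "US", False),   # missing email
        Customer(3, "b@example.com", "DE", False),   # non-domestic rules kick in
        Customer(4, "c@example.com", "US", True),    # suspended account
    ]
```

Four rows, not 87 petabytes, and they still contain every problem this hypothetical downstream has to solve.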
Variation #2: That's too hard because the app can only be deployed by one angry genius using emacs to manually edit 19 files in 11 locations with 476 variables.

In the immortal words of every plumber I've ever encountered: "well, there's yer problem right there."
*We* are the ones who built the app so it could only be deployed by manual labor that we pay $300K a year to wear headphones and growl at everyone who comes within ten feet. What if we didn't build it that way?
The most common real-life situation, it's done that way because we're integrating a bunch of COTS frameworks and tools that have separate values that live all over hell's half-acre. There's no getting around the COTS. Except. Wait. What if we wrote an *app* that does that?
They have those kinds of apps. They're called installers. They themselves can often be written using COTS tools. (They'll fall back to "not servicing the customer" here. You fall back to "not changing code" here.)
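Here's a tiny sketch of the installer idea, assuming the simplest possible shape: all those scattered values live in one manifest, and a small app renders them into the config files the COTS pieces expect. The paths, keys, and key=value format are invented; the angry genius's 19 files will each have their own format, but each one is just a rendering step.

```python
# Hypothetical installer: one source-of-truth manifest, rendered into the
# scattered config files the COTS tools expect. Paths/keys are illustrative.
import json
from pathlib import Path

def install(manifest_path: Path, root: Path) -> None:
    """Render each target file from the single manifest -- no emacs required."""
    manifest = json.loads(manifest_path.read_text())
    for target in manifest["targets"]:           # e.g. the 19 files in 11 locations
        dest = root / target["path"]
        dest.parent.mkdir(parents=True, exist_ok=True)
        lines = [f"{k}={v}" for k, v in target["values"].items()]
        dest.write_text("\n".join(lines) + "\n")
```

Once the manual ritual is an app, "deploy a local-runnable copy" is just running that app against a dev box instead of prod.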
Variation #3: It's too hard because upstream X only runs on the well-known platform Arugula, and all our devs only work in Windows.
Okay, then you're going to have to ship your local-runnable as a virtual machine of some kind. Note: I am not saying this is trivial. It's not. But you have to balance its cost against the cost of having whole teams not able to do their job because you don't do it.
Variation #3A: No, you don't understand, O/S Arugula cannot possibly run on a dev box of any available dev flavor, because it only runs on mainframes that are bigger than our whole building, or it only runs with a dongle we can't afford to buy.
And now we come down to the hardest case. It's the hardest case, because it means you're going to have to be serious about wanting upstream-centric apps.
You're going to have to get your upstream team to write a fake. In the other immortal words from that same plumber, "This is gonna run ya."

BUT!! Don't freak out quite yet. We have to determine the meaning of the word "fake".
The panic -- I feel it, too, I'm not gonna make fun of you here -- is that the upstream in question is a monster of combinatoric complexity that does *dozens* of things, with reads AND writes on machines in boxcar warehouses all over the world. Simulating all that would kill us.
Soooooo, what if we simulated only part of it? What if we only simulated the part of it the downstream cares about? What if we gave up or let go or ran screaming away from anything like a real simulation?
The key is to understand the downstream's needs. If the upstream's monstrous complexity is in full use by the downstream, why on earth are we going upstream-centric? Why not just stick with the monolith we know and love? No, the downstream's only using part of it.
What can we throw away?

1) The full dataset.
2) Our own upstream: remote reads and writes.
3) Generality for all possible downstreams: one downstream = one fake.
4) Tons and tons of validation.
5) Noise fields the downstream doesn't use.
6) Endpoints the downstream doesn't use.
7) Operations the downstream doesn't use.
8) Fields whose values are opaque to the downstream.

The list goes on and on, varying by your actual domain.
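After all that throwing away, a downstream-shaped fake can be startlingly small. Here's a sketch, with everything invented for illustration: the resource ("orders"), the fields, and the assumption that this particular downstream only ever calls two operations. In-memory, no remote reads or writes, no validation it doesn't need.

```python
# A downstream-shaped fake: only the operations and fields THIS downstream
# uses, held in memory. Resource name and fields are invented for illustration.

class FakeUpstream:
    def __init__(self, seed=None):
        # no remote dataset -- just the rich-not-big seed
        self._orders = dict(seed or {})

    def get_order(self, order_id: int) -> dict:
        # prod does auth, audit, and fifty validations here; the fake doesn't
        return self._orders[order_id]

    def list_open_orders(self) -> list:
        # the only query this downstream ever issues
        return [o for o in self._orders.values() if o["status"] == "open"]
```

That's the whole "monster of combinatoric complexity", as seen by one downstream. Wrap it in whatever transport the real upstream speaks (HTTP, queue, whatever) and ship it as the local-runnable.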
And here's the thing. Once we've taken this step, of shipping that local-runnable fake, we can *add* to that fake in ways that are insanely useful to downstream teams. Things the app-in-prod would *never* allow. One example will suffice...
http://.../myapp/api/developer/1/reset-the-damned-data-so-i-can-automate-my-tests

I'm betting you can guess what it does. Do we want it in prod? UNDER NO CONCEIVABLE CIRCUMSTANCES. But put that in your fake, and your downstream teams will paint Russian Orthodox icons of you.
Foreshadowing: I have built once, and want to build again as open source, a single app that will make all this ridiculously easy for downstream and upstream alike. More on this later; I was very excited about it the first time, and will be even more excited to share it.
So, wrapping up, the message I'm aiming at is this: we can and should make every upstream we write local-runnable for our downstream teams. This is the single biggest step we can take in making parallel development possible in our service-centric architectures.
It's cheaper than it looks, folks, and it sidesteps a very large number of problems that prevent changing code, which is the central operation of professional software development.
Have a lovely rest-of-Sunday!
