The ultimate making app for a shipping multi-service system is actually a one-machine monolith with a UI. If your team is experiencing the most common pains from working in a large SOA environment, the productivity payback will be enormous.
It's important for me to take a second to remind you that there's much more to this world than geekery. Please keep working for change all around you, including, especially, outside the monitor.

Stay safe, stay strong, stay angry, stay kind.

Black Lives Matter.
We've talked a lot about the idea of having a shipping app for our customers and a making app for us. We can use the same source base to make multiple binaries. We target customer needs with one of those, and we target developer needs with the rest.
The economics of this approach are straightforward: as long as a making app's cost (the time spent working on something we don't ship) is less than its benefit (improved productivity on the shipping app), we get a net gain in productivity.
"A one-machine monolith with a UI" is a lot to unpack. Let's talk about what it is, what production costs it mitigates, and, especially, how we can approach it in a stepwise change-harvesting fashion in real life.
A "one-machine monolith with a UI" is, on the outside, just a desktop app, same as your IDE, your word processor, and so on. And on the inside? Instead of connecting to the fifty remote services and databases of your shipping app, it *contains* them.
Now, be clear, we're not talking about simulating those services. We're talking about literally embedding their code inside the single binary that is the making app.
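To make "contains them" concrete, here's a minimal, self-contained Kotlin sketch. Every name in it (InventoryService, OrderService, and so on) is invented for illustration, not taken from any real codebase: the same business-logic classes the shipping system deploys as separate services become plain objects constructed inside one desktop binary.

```kotlin
// Minimal, self-contained sketch (all names hypothetical): the classes the
// shipping app deploys as separate services become plain objects in one binary.

class InMemoryInventoryStore(
    private val stock: MutableMap<String, Int> = mutableMapOf("widget" to 3)
) {
    fun count(sku: String): Int = stock.getOrDefault(sku, 0)
}

class InventoryService(private val store: InMemoryInventoryStore) {
    fun inStock(sku: String): Boolean = store.count(sku) > 0
}

class OrderService(private val inventory: InventoryService) {
    fun placeOrder(sku: String): String =
        if (inventory.inStock(sku)) "accepted: $sku" else "backordered: $sku"
}

fun main() {
    // In the shipping system these would be two deployed services talking
    // over the network; here they're just objects in one process.
    val orders = OrderService(InventoryService(InMemoryInventoryStore()))
    println(orders.placeOrder("widget"))   // stands in for a making-app UI action
}
```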
This sounds harder to do than it is, and we'll get to that. But before we go there, why might we want it? What is the possible benefit of a beast like this? Much of what we'd get is best expressed as "negative cost": it's the waste being eliminated from our current approach.
Said another way, using the *shipping* environment for our myriad *development* purposes creates a bad fit between hand and tool. That bad fit costs us. The desktop-monolith fits much better, so it costs us much less.
Negative cost: we sidestep provisioning costs for a large swath of our work. Fifty services & databases is usually fifty cloud machines. That means buying virtual hardware, and it also means having deployment and ops specialists. It means *time*.
If there's one consistent feature of every enterprise SOA org I've worked with, it's the ferocious commitment to having the fewest possible environments provisioned.
I don't know the numbers, but they must be impressive, cuz I've seen teams struggle for *months* getting permission out of white-knuckled management. I swear they'd sooner buy every developer a new car than provision a new environment just for dev.
Negative cost: Nothing I can do as a developer in my one-box monolith can injure any other team's work. I can't bring down the environment. I can't spew garbage into the databases. I can't fill the logs with an infinite loop I let slip into the code.
Correspondingly, no other team's foolishness can keep me from working on the system. Those clowns who run the calendar service can't shut me down cuz they changed the URL. A gal with a backhoe can't keep me from programming by severing the backbone.
Positive benefit: the very most difficult outage causes -- implicit state and flow connections between services -- are easily found and tested for in a one-box making app.
Services constantly make decisions based on state fields in data. Consider adding a new role for one of your B2B flows. Add a new database record, yeah? Add the record, nobody's using it, now add your logic based around it. *Except*, odds are good someone *is* using it.
Services say things like if(role is X), but they also say things like if(role is not X). At the time they were written, they had closure over the available roles. You just added a role, tho, and you broke that closure with unpredictable results.
The only way to find that today is by meticulously probing complex UI scenarios across multiple services in a scarce resource, a provisioned environment.

Unless. Unless all the services are right here. Unless you don't have to use the UI to test a scenario.
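Here's a tiny Kotlin sketch of that closure break, with invented role names and two functions standing in for two of the fifty services: one branches on "role is X", the other on "role is not X", and a brand-new role falls silently into the wrong branch. In a one-box making app, catching this is an ordinary in-process check rather than a multi-service UI expedition.

```kotlin
// Hypothetical sketch of the broken-closure problem. Both functions were
// written back when the only roles were BUYER and SELLER.

// One service branches on "role is X".
fun discountFor(role: String): Int =
    if (role == "BUYER") 10 else 0          // non-buyers get no discount: still fine

// Another service branches on "role is not X", quietly assuming that
// "not a BUYER" can only ever mean SELLER.
fun inboxFor(role: String): String =
    if (role != "BUYER") "seller-inbox"     // wrong for any role added later
    else "buyer-inbox"

fun main() {
    // Someone adds a new role record. "Nobody's using it," right?
    val newRole = "AUDITOR"
    println(discountFor(newRole))   // 0 -- happens to be harmless
    println(inboxFor(newRole))      // "seller-inbox" -- silently wrong
}
```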
Positive benefit: This thing is *fast*. You're eliminating latency, outages, SSO, VPN, and passwords; you're eliminating every cloud-ish tax you normally pay when you work in the cloud. The UI has to be functional; it doesn't have to be pretty or branded.
Careful: a one-box monolith couldn't support your actual customer base. (Or, at least, I *hope* you didn't use Kubernetes to support a single user with 100 transactions an hour.) But it doesn't *have* to. It only has to support the developer at the box.
People tend to associate unresponsiveness of tools merely with the simple cost in time. In fact, unresponsive tools carry two much heavier costs: 1) multi-tasking, and 2) batching.
Nearly everyone thinks they're good at multi-tasking. When the tool is slow, they start another task while it runs. Here's the thing, tho: nearly everyone is actually *bad* at multi-tasking. Encouraging people to multi-task is begging them to lose focus & forget details.
And when a tool is less responsive, I use it less often. Instead, I use it in batches. And these batches inevitably violate the limits of human mental bandwidth. Changes A, B, C, and D, all evaluated at once, must be evaluated in a far larger context than when they were made.
So, you see, at least in theory, the potential merit of this approach is quite high. That turns us now to practice, and specifically, to the part of the practice that is "how would we get there, and don't tell me it'll take 3 years and 30 million dollars?"
1) The crudest possible form of this, a first pass: provision a single cloud machine with a lot of disk and memory, and put every service binary and database on it. Essentially, the cost is that one cloud machine and a whole lot of YAML-jiggling.
If you're dockering, you can even skip the cloud machine. You might have to beef up dev hardware, but that's chump change compared to provisioning. Just run the VM on your own box. It'll be slow, but not as slow as hitting a whole provisioned environment.
2) Now roll a custom dev UI on your desktop that connects to that monster you just made. We already talked about the kind of things you can do with such a UI, so I won't say more about that here.

geepawhill.org/2021/03/30/a-m…
3) Now take *one* service that's running in your monster and embed it in your new UI app. You can easily write that UI app so that it can be a UI for you *and* a service endpoint. I know you haven't done that, but I have, and in most modern frameworks, it's easy.
So your UI is hitting your VM for all the other services, but it's hitting *itself* for that one service. This will improve the UI's performance, but it will also enable you to make rapid changes in that one service without bouncing a whole docker instance.
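One way that dual role can look, sketched in Kotlin with the JDK's built-in HttpServer. The service, path, port, and host names are all invented for illustration: the making app answers on localhost for the one embedded service, while the UI keeps hitting the one-box VM for everything else.

```kotlin
// Hypothetical sketch: the making app is a desktop UI *and* an endpoint
// for the one service it embeds. Names, paths, and ports are illustrative.
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

// The embedded service's business logic, now living in the UI's codebase.
class CalendarService {
    fun holidaysFor(year: Int): String = """["$year-01-01", "$year-12-25"]"""
}

fun main() {
    val calendar = CalendarService()

    // Serve the embedded service on localhost, so the UI hits itself for
    // this one service instead of the provisioned environment.
    val server = HttpServer.create(InetSocketAddress(8081), 0)
    server.createContext("/calendar/holidays") { exchange ->
        val body = calendar.holidaysFor(2021).toByteArray()
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()

    // All the other services are still reached at the one-box VM,
    // e.g. http://dev-monster:8080/... -- only this one is local.
    println("calendar embedded at http://localhost:8081/calendar/holidays")
}
```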
4) Now, instead of having the UI-embedded service running as an endpoint, just call its logic directly. It's embedded now, right? Its source code *is* the UI's source code. No need for a transport layer, you're already there.
This will have required you to separate transport from business logic, of course. (In some environments, a controller is already directly callable as a plain method; in some, it isn't.) But there are compelling reasons to do this even without wanting a separate making app.
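A hedged sketch of what that separation can look like, with invented names throughout: the business logic lives behind a plain class, the shipping app wraps it in a thin transport layer, and the making app just calls it.

```kotlin
// Hypothetical sketch of transport separated from business logic. In the
// shipping app, RoleController owns the HTTP concerns and delegates to
// RoleLogic; in the making app, the UI calls RoleLogic directly.

// Pure business logic: no transport, no framework.
class RoleLogic(private val roles: MutableSet<String> = mutableSetOf("BUYER", "SELLER")) {
    fun addRole(name: String): Boolean = roles.add(name)
    fun allRoles(): Set<String> = roles.toSet()
}

// Shipping-app side: a thin transport wrapper around the same logic.
class RoleController(private val logic: RoleLogic) {
    fun handlePost(requestBody: String): Pair<Int, String> =
        if (logic.addRole(requestBody.trim())) 201 to "created"
        else 409 to "already exists"
}

fun main() {
    val logic = RoleLogic()

    // Making app: no endpoint, no serialization, no network. Just a call.
    logic.addRole("AUDITOR")
    println(logic.allRoles())   // [BUYER, SELLER, AUDITOR]
}
```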
5) At this point, you can also make the UI start doing wicked things to the dataset that the embedded service is working with. This opens a huge range of testing capability, and dramatically increases the safety of making and studying the results of change.
Wanna wipe your embedded service's database? *Bam*. Wanna inject a "golden master DB" as the start? *Bam*. You're all in the same code base, you've eliminated the protective layer of the transport, you can do any of these things.
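As a sketch of the kind of wicked things that become one-liners once the embedded service's data lives in-process (the store and its methods are invented for illustration):

```kotlin
// Hypothetical sketch: when the embedded service's data store is just an
// object inside the making app, resets and golden-master loads are trivial.
class InMemoryRoleStore {
    private val roles = mutableSetOf<String>()

    fun add(role: String) = roles.add(role)
    fun all(): Set<String> = roles.toSet()

    // *Bam*: wipe the embedded service's database.
    fun wipe() = roles.clear()

    // *Bam*: start from a known-good golden master.
    fun loadGoldenMaster(master: Collection<String>) {
        roles.clear()
        roles.addAll(master)
    }
}

fun main() {
    val store = InMemoryRoleStore()
    store.loadGoldenMaster(listOf("BUYER", "SELLER"))
    store.add("AUDITOR")
    println(store.all())   // [BUYER, SELLER, AUDITOR]
    store.wipe()
    println(store.all())   // []
}
```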
6) Pick another service, and embed *that* service in your making-UI app, too. Same stuff: the second one will go much faster than the first, cuz you'll have seen most of the mistakes you can make by then.
The only stopping points come when a candidate service is written in a language, or on a service infrastructure, that's incompatible with the making app you're building. That is a real problem, of course, and it does happen. But in enterprise SOA environments, it's far less common than you might expect.
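One hedged way to keep that service-by-service migration tidy (every name here is invented): a small switch in the making app that answers in-process for services you've already embedded and falls back to the one-box VM for the rest.

```kotlin
// Hypothetical sketch: a per-service switch inside the making app.
// Embedded services are answered in-process; the rest still go to the
// one-box VM. Flip each entry as you embed that service.
interface HolidayLookup {
    fun holidays(year: Int): List<String>
}

// Already embedded: plain in-process logic.
class LocalCalendar : HolidayLookup {
    override fun holidays(year: Int) = listOf("$year-01-01", "$year-12-25")
}

// Not yet embedded: a stand-in for an HTTP client aimed at the VM.
class RemoteCalendar(private val baseUrl: String) : HolidayLookup {
    override fun holidays(year: Int): List<String> =
        TODO("GET $baseUrl/calendar/holidays?year=$year")
}

fun calendarFor(embedded: Boolean): HolidayLookup =
    if (embedded) LocalCalendar() else RemoteCalendar("http://dev-monster:8080")

fun main() {
    println(calendarFor(embedded = true).holidays(2021))
}
```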
So.

This is all a very high-level conversation. There are lots of details, and lots of variants, depending on what you're working with now.

But in real life, it comes down to a handful of insights.
1) The shipping app is fit to the customer's needs, not the developer's needs. Developing the shipping app using only the shipping app costs us directly and indirectly, in both lump-sum and tax-like ways.
2) We can develop the shipping app using a making app that fits a developer's needs far more closely. We can do it for less money than the costs we pay for trying to do it using the shipping app.
3) Our codebase is *source*, not binary, and we can use one source to make many different binary images, including, in particular, a shipping app and any number of handy making apps.
4) The heart of our app is not in fact HTTP transport, but the business logic it implements. Transport mechanisms are stable, cheap, not written by us, and not a major source of our defects, tho using them in development is a major source of our cost.
5) The code works for us, we don't work for the code.
Truly, the sky is the limit. You can do absolutely ingenious things, and make astonishing productivity leaps, just by thinking about how to dual-purpose your source code, arranging it in one way to suit your customer's needs, and another way to suit yours, both at the same time.
Thanks for hanging in. No soundcloud, but if you want full-text/audio in your inbox, free & spam-free, subscribe.

geepawhill.org/subscribe

And please keep working for change, in the code, in yourself, in the team, the org, the trade, and yes, the world.

Black Lives Matter.
