Microservices are maybe not what you think they are, so here's a #Thread to describe them...
1/14
Microservices are distinct from Services, although they are commonly confused.
Services, as an approach to software design, have been around for many decades. Service Oriented design was popular in the 1990s and early 2000s.
2/14
The big difference is not the size, despite the name; it is the independence of each service from the other services.
Most definitions of Microservices include:
Independently Deployable
Loosely coupled
Organised around business capabilities
Owned by a small team
3/14
Notice how this definition says nothing about technology. It focuses entirely on something else: the degree to which these things are independent of one another, that is, our ability to change a Microservice without affecting other code that interacts with that service.
4/14
If you think about it, all of these properties are focused on that.
They are "Owned by a small team" so that team can make progress without collaborating with others.
5/14
They are "Organised around business capabilities" as a means of decoupling them naturally. We can change the SalesTax service independently of changing the CustomerRegistration service.
6/14
They are "Loosely coupled" so that we can make changes to one service without forcing changes on other services.
7/14
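One way to picture that loose coupling is as a narrow, stable contract between the two services named earlier in the thread. This is a hypothetical sketch in Python; the method names, tax rates and data shapes are invented for illustration:

```python
# Hypothetical sketch: two services coupled only through a tiny, stable contract.
# The service names come from the thread; everything else is illustrative.

class SalesTaxService:
    """Owns everything about tax; free to change internally."""
    def tax_for(self, amount_cents: int, region: str) -> int:
        # These rules can be rewritten at any time without touching callers,
        # as long as this method's contract stays stable.
        rate = {"UK": 0.20, "DE": 0.19}.get(region, 0.0)
        return round(amount_cents * rate)

class CustomerRegistrationService:
    """Knows nothing about tax rules, only the narrow contract above."""
    def __init__(self, tax_service: SalesTaxService):
        self._tax = tax_service  # depends on the interface, not the internals

    def register_order(self, amount_cents: int, region: str) -> dict:
        return {
            "net": amount_cents,
            "tax": self._tax.tax_for(amount_cents, region),
        }

reg = CustomerRegistrationService(SalesTaxService())
print(reg.register_order(1000, "UK"))  # {'net': 1000, 'tax': 200}
```

The point is not the classes themselves but the shape of the dependency: CustomerRegistration touches only `tax_for`, so SalesTax can change freely behind it.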
...and all of these things are mechanisms to allow us to achieve the last.
Microservices are "independently deployable"
This allows each small team to make progress independently of others, and when they have made a change, they can release it without affecting others.
8/14
Microservices are primarily designed to be an "organisational scalability" tool.
They free large orgs to make progress in many small teams, with each team working separately from the others.
9/14
So if you build your microservice, but before you release it into production you need to test it with the current versions of all the other services, it *isn't a microservice*, it's something else.
10/14
The whole idea here is to prevent this coordinated, in-step, process of change, where teams can only make progress in lock-step with other teams.
11/14
"Independently deployable" is hard, but that is the game.
12/14
If you can't deploy your microservices independently, then you probably have "Services", and those services, however they are stored in repos or built, are part of a monolithic system, because they need to be tested together before release.
13/14
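One common technique for escaping that "test everything together" trap, which the thread doesn't name, is consumer-driven contract testing: each consumer records the interactions it relies on, and the provider verifies them in its own pipeline. A minimal sketch, with an entirely invented contract format and endpoint:

```python
# Hypothetical contract-testing sketch. The technique (consumer-driven
# contracts) is real and widely used; this contract format is invented.

CONSUMER_CONTRACT = {
    "request": {"path": "/tax", "params": {"amount": 1000, "region": "UK"}},
    "response_must_include": {"tax"},  # only the fields the consumer reads
}

def provider_handle(path, params):
    """The provider's current implementation (free to change internally)."""
    if path == "/tax":
        return {"tax": round(params["amount"] * 0.20), "currency": "GBP"}
    raise KeyError(path)

def verify_contract(contract) -> bool:
    """Run in the *provider's* pipeline: does it still honour each consumer?"""
    req = contract["request"]
    response = provider_handle(req["path"], req["params"])
    return contract["response_must_include"] <= response.keys()

# If this passes, the provider can deploy without re-testing every consumer.
print(verify_contract(CONSUMER_CONTRACT))  # True
```

The provider can add fields, rewrite internals, or re-deploy at will; only breaking a recorded consumer expectation fails its pipeline.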
Microservices are the most scalable way to build software, but if you don't need to scale, they are often a less efficient way to organise your work.
This video explains my thinking a bit further...
14/14
We could consider languages, how computers work, OS commands, understanding of design techniques and tools like editors and IDEs.
We could think of the need to collaborate, to reduce dependencies & coupling, and to focus on outcomes, but there's something more important.
2/18
I think that if I am to pick one thing, one piece of advice, it is this:
I confess that I am not a big fan of "The Test Pyramid" so here's a #Thread on what I think is a better focus for your automated testing strategy...
1/14
The Test Pyramid is usually described as something like this, though there are lots of different versions.
I don't think this helps much.
2/14
Part of the problem is that it is nearly right: we'd like to invest in our test strategy so that we have lots of the tests that are easy to write and give us the fastest feedback at the lowest cost, and fewer of the more complex, more costly, slower tests.
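That trade-off can be made concrete with some back-of-envelope arithmetic. All the numbers below are invented; only the shape of the comparison matters:

```python
# Illustrative arithmetic only (per-test costs and counts are invented):
# why the *shape* of a test suite drives feedback time.

def suite_seconds(counts):
    cost = {"unit": 0.01, "integration": 1.0, "e2e": 30.0}  # seconds per test
    return sum(n * cost[kind] for kind, n in counts.items())

pyramid = {"unit": 2000, "integration": 100, "e2e": 10}
ice_cream_cone = {"unit": 100, "integration": 100, "e2e": 500}  # inverted suite

print(suite_seconds(pyramid))         # 420.0
print(suite_seconds(ice_cream_cone))  # 15101.0
```

Both suites have thousands of checks, but the first gives feedback in minutes and the second in hours, which is why the distribution matters more than the total count.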
In this tweet @christofebert says "optimising for velocity is dangerous" in software development. The article that he references is behind a paywall, so I haven't read it, but here are some thoughts, and some evidence that counters this view.
As ever, it depends on what you measure: what does "optimising for velocity" mean?
Velocity is speed + direction. In software terms, I assume that we mean that speed is the rate at which we can deliver software.
2/17
The DORA metrics call this: "Throughput", which is a measure of the efficiency with which we can deliver software.
Start by measuring your Cycle Time, from "idea" to "working software in the hands of users"
2/14
Now optimise whatever it takes to reduce your Cycle Time in a series of steps. Each reduction will highlight the next steps that stop you going faster. Work to eliminate those barriers!
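As a minimal sketch of the measurement itself, assuming you can capture a timestamp for "idea" and one for "released" (the timestamps and format here are invented):

```python
# Minimal sketch: Cycle Time as the thread defines it, from "idea" to
# "working software in the hands of users". Timestamps are assumed inputs.
from datetime import datetime

def cycle_time_days(idea_at: str, released_at: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(released_at, fmt) - datetime.strptime(idea_at, fmt)
    return delta.total_seconds() / 86400

# Track this per change; optimise whatever dominates it, then re-measure.
print(cycle_time_days("2023-03-01 09:00", "2023-03-15 17:00"))  # ~14.33 days
```

The value of the number is in the trend: each reduction exposes the next bottleneck, as the tweet above describes.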
Unit Testing or Acceptance Testing?
Actually you need both for the best test strategy.
a thread...
1/12
I think that the right answer is "Both!" because each delivers different things, and provides very different insights and advantages for the systems that we build.
A good testing strategy needs both kinds of test.
2/12
Unit tests are best produced as the output of TDD.
This creates better tests, but the real value of TDD at this level is that it applies a pressure on the design of our code. It makes us design our code from the perspective of a consumer of our code.
3/12
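The TDD loop described above can be sketched at unit level. The example domain (price formatting) is invented; the point is the order of events, with the test written first, from the consumer's point of view:

```python
# Sketch of test-first design at unit level (example domain invented).
import unittest

# Step 2 ("green"): the simplest code that makes the tests below pass.
def format_price(cents: int) -> str:
    return f"£{cents // 100}.{cents % 100:02d}"

class FormatPriceTest(unittest.TestCase):
    # Step 1 ("red"): written *before* format_price existed, so the tests
    # record how a consumer wants to call it, not how it happens to work.
    def test_formats_pounds_and_pence(self):
        self.assertEqual(format_price(1050), "£10.50")

    def test_pads_single_digit_pence(self):
        self.assertEqual(format_price(1005), "£10.05")

unittest.main(argv=["prog"], exit=False)  # runs both tests
```

Because the test came first, the function's signature was chosen by its consumer, which is the design pressure the tweet refers to.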
One of the few things that I can be absolutely definitive about is the definition of a "Deployment Pipeline", because I defined it. So here is a short thread that answers the question “What is a Deployment Pipeline?”
1/9
Defines Releasability
The Deployment Pipeline is an automated mechanism to determine the releasability of changes. It should be definitive for release: if the pipeline passes, there is no more work to do prior to release.
2/9
Goals
The aim is to falsify, not to prove. However many tests we have, we can’t prove our change is good, but a single test failure proves our software is not good enough.
3/9
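That falsification rule reduces to a very small decision. A sketch, with invented stage names, of the only verdicts a pipeline can give: "not releasable" (any failure) or "no known reason not to release" (every stage passed):

```python
# Sketch of the falsification rule (stage names invented): a pipeline never
# proves a change correct; it can only fail to find a reason to reject it.

def releasable(stage_results: dict) -> bool:
    # One failure anywhere is definitive: the change is not good enough.
    return all(stage_results.values())

print(releasable({"commit": True, "acceptance": True, "performance": True}))   # True
print(releasable({"commit": True, "acceptance": False, "performance": True}))  # False
```

Note the asymmetry: `False` is a proof of "not good enough", while `True` only means no test has falsified the change yet, which is exactly the "definitive for release" claim earlier in the thread.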