@stijnvnh @cesardelatorre @yreynhout @Indu_alagarsamy I think the naming is sloppy and creates a false dichotomy. (I imagine it exists for historical reasons.) The naming suggests that integration events somehow are not messages that convey something has happened in the domain.
@stijnvnh @cesardelatorre @yreynhout @Indu_alagarsamy I agree with the reasoning behind that, but I think the conclusion that people usually draw ("Never share domain events") is unnuanced and doesn't consider other forces. In other words, whether or not to share domain events should be a deliberate tradeoff.
@stijnvnh @cesardelatorre @yreynhout @Indu_alagarsamy You could share all the context's domain events, or some of them, or translate them, or compose them into new events. And a single context may have multiple API endpoints, so each could have a different strategy for publishing domain events.
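A minimal sketch of the "translate" and "compose" strategies, in TypeScript. The event names and fields are hypothetical, invented purely for illustration:

```typescript
// Hypothetical events; not from the thread.
type OrderPaid = {            // internal domain event, rich in detail
  type: "OrderPaid";
  orderId: string;
  paymentMethod: string;      // internal concern, not for outsiders
  lineItems: { sku: string; cents: number }[];
};

type OrderCompleted = {       // leaner event published at the boundary
  type: "OrderCompleted";
  orderId: string;
  totalCents: number;         // composed from line items; internals hidden
};

// Translate at the context boundary instead of sharing OrderPaid verbatim.
function toIntegrationEvent(e: OrderPaid): OrderCompleted {
  return {
    type: "OrderCompleted",
    orderId: e.orderId,
    totalCents: e.lineItems.reduce((sum, li) => sum + li.cents, 0),
  };
}
```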
@stijnvnh @cesardelatorre @yreynhout @Indu_alagarsamy Another problem I have with the "vs" in the Domain Events vs Integration Events framing is that events are not the only integration tool. Queries and Commands, in particular, are essential types of integration messages that offer different tradeoffs.
@stijnvnh @cesardelatorre @yreynhout @Indu_alagarsamy Roughly, the use of Events, Queries, and Commands shifts responsibilities between Bounded Contexts: who owns the data, who owns the interpretation, who knows the business rules.
@stijnvnh @cesardelatorre @yreynhout @Indu_alagarsamy A single endpoint could offer a combination of Commands, Queries, and Events; a single Bounded Context could offer different endpoints that use different strategies.
@stijnvnh @cesardelatorre @yreynhout @Indu_alagarsamy For example (in DDD terms): an Open Host that offers a small set of Queries and Commands, and a second endpoint that offers much more detailed Events.
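A hedged sketch of that split; all names and payload shapes here are invented for illustration:

```typescript
// Endpoint 1: Open Host — a small, stable set of Commands and Queries.
interface OrderingOpenHost {
  placeOrder(cmd: { customerId: string; skus: string[] }): Promise<void>;               // Command
  getOrderStatus(q: { orderId: string }): Promise<"pending" | "paid" | "shipped">;      // Query
}

// Endpoint 2: a richer event feed for consumers that want the detail.
type OrderingEvent =
  | { type: "OrderPlaced"; orderId: string; customerId: string }
  | { type: "PaymentReceived"; orderId: string; cents: number }
  | { type: "OrderShipped"; orderId: string; trackingCode: string };

interface OrderingEventFeed {
  subscribe(handler: (e: OrderingEvent) => void): () => void; // returns an unsubscribe
}
```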
The right time to fix it is right before the cost of fixing it becomes exponential. 1/
If the thing works and you don't have to add anything, don't improve it.
If you add something, and that addition of N raises the cost of improving it by N, be on high alert. 2/
If you add something and the cost rises by 2N, first improve the system to get that particular impact down to N.
If you add something of N and the cost of improving rises by N², stop the work and do system-wide improvements first. 3/
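A toy arithmetic sketch of why the N² case demands stopping, assuming "cost of the next addition" follows each rule; the numbers are illustrative only:

```typescript
// Cumulative cost of ten additions under each cost-growth rule.
const features = 10;

const linear = (n: number) => n;        // cost rises by N: be on high alert
const quadratic = (n: number) => n * n; // cost rises by N²: stop and improve first

const total = (cost: (n: number) => number) =>
  Array.from({ length: features }, (_, i) => cost(i + 1)).reduce((a, b) => a + b, 0);

console.log(total(linear));    // 55  — grows steadily
console.log(total(quadratic)); // 385 — runs away after only ten additions
```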
I’m having lots of conversations with @rebeccawb about Bounded Contexts in Domain-Driven Design. This is a small snapshot of some of the tensions involved in picking good boundaries.
🧵⬇️ (1/15)
Some context:
A Bounded Context is an “understandability boundary”, a boundary around a model and its language. You can understand the model and the language in isolation, without having to understand other Bounded Contexts. (2/15)
An Interface is the set of contracts, message types, or APIs between Bounded Contexts; it translates from one model and language to another. (3/15)
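A minimal sketch of such a translating interface; the two context models (a "Sales" and a "Shipping" context) are hypothetical:

```typescript
// Sales context: its own model and language.
type SalesCustomer = {
  customerId: string;
  fullName: string;
  segment: "retail" | "wholesale"; // meaningful only inside Sales
};

// Shipping context: a different model, a different language.
type ShippingRecipient = { recipientId: string; displayName: string };

// The Interface translates Sales' language into Shipping's, so Shipping
// can be understood in isolation and never needs to know what a "segment" is.
function toRecipient(c: SalesCustomer): ShippingRecipient {
  return { recipientId: c.customerId, displayName: c.fullName };
}
```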
The most important thing you can do when trying to learn Domain-Driven Design is still very much to read Eric's book: amzn.to/3b1Uqrx. People are not recommending this book enough, because few have actually finished it.
It has a reputation of being hard to read, which is deserved. Read a little bit every day. Or read the bold parts first, then start over to read it thoroughly.
It also has a reputation of being too academic or too theoretical. This is undeserved: it is highly pragmatic, but it approaches software design from an angle that didn't exist anywhere else before, so it introduces many concepts that seem foreign at first.
The larger the client, the more likely they hire me because they want to "get it right the first time and avoid rework", and the more likely they end up not hiring me because before they do, they want to agree on the scope of what I will do for them.
"As small as possible" (DB partitions, message size, μsvcs, Bounded Contexts, class names, method arity, ...) is almost universally bad advice in software design. Some critical logic is going to cross those boundaries and result in poorly implemented, preventable workarounds.
But, "Whenever something is wrong, something is too big" (Kohr 1957) is also true for software. Big things are more obviously bad. Small things look simple, because the wrongness hides not inside the things, but in their connections.
Things usually tend to get bigger, rarely smaller or stable. @CarloPescio calls this gravity (things with mass acquire more mass) in the Physics of Software. Our usual reaction is to advocate smallness.
The problem is not that you shipped on Friday. The problem is that you have no way of knowing if shipping will break it. Most software is massively undermodeled and undertested.
Models (1) and tests (2) are two sides of software success: 1) do I understand this software so well that I can accurately predict the impact of a change on the system's behaviour? 2) can I demonstrate confidence in its behaviour by repeatedly testing lots of scenarios?
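A minimal sketch of point 2, "testing lots of scenarios", as a table-driven test; the pricing function and the cases are hypothetical:

```typescript
// Assumed function under test: flat fee up to 1 kg, then per started kg.
function shippingCents(weightKg: number): number {
  if (weightKg <= 0) throw new Error("invalid weight");
  return weightKg <= 1 ? 500 : 500 + Math.ceil(weightKg - 1) * 100;
}

// One scenario per row; add rows until you run out of doubts.
const scenarios: { weightKg: number; expected: number }[] = [
  { weightKg: 0.5, expected: 500 },
  { weightKg: 1,   expected: 500 },
  { weightKg: 1.2, expected: 600 },
  { weightKg: 3,   expected: 700 },
];

for (const s of scenarios) {
  const actual = shippingCents(s.weightKg);
  console.assert(actual === s.expected, `weight ${s.weightKg}: got ${actual}, want ${s.expected}`);
}
```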
The irony is that both automated testing and modelling are crazy cheap compared to the perpetual burden and risk of undertested and undermodeled business-critical systems.