To modify code we must understand it. Understanding code requires effort, whether a little or a lot. We should minimize that effort, but I wouldn't generally call it waste.
What if that effort reveals that the code does nothing? 1/?
Sometimes it's as simple as identifying dead code. In other cases we follow parameters passed from one method to another, or properties of objects, and after digging through the code realize that they're never used. 2/?
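A minimal sketch of what that can look like, with hypothetical names: a parameter threaded through methods that nothing ever reads.

```csharp
// Hypothetical example: "options" is passed from method to method
// but never used, so it and everything that builds it are dead weight.
public class ReportService
{
    public string BuildReport(int customerId, ReportOptions options)
    {
        // "options" is dutifully passed along...
        return FormatReport(LoadData(customerId), options);
    }

    private string FormatReport(string data, ReportOptions options)
    {
        // ...but never read here. Deleting the parameter, and every
        // call site that constructs a ReportOptions, loses nothing.
        return $"Report: {data}";
    }

    private string LoadData(int customerId) => $"data for customer {customerId}";
}

public class ReportOptions { }
```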
Understanding code that does nothing is waste in its purest form.
Remember your first day working on a new project, struggling with the cognitive load of seeing dozens of folders full of classes and wondering how long it will be before you can work in it productively? 3/?
What if part of that load is looking at code that was never needed or wasn't removed when it was no longer needed? That load slows a developer down. It's wasteful. 4/?
Unnecessary code also self-perpetuates. It creates an increasingly dense thicket of code in which even more useless code can hide. Developers spend hours working around it, even including it in tests. 5/?
It's like clutter or an unwashed dish in a sink. It's so much easier to delete something the moment you see that it's not used than to let it pile up. Demand that every variable, parameter, method, and class provide a specific, non-hypothetical reason for its existence. 6/6
I wonder if too many of us saw Tron and we think that these pieces of code walk around inside "the system" looking and talking like us or someone we know. We're afraid that if we delete them they'll die screaming in pain and other code will be sad: "I'm being deleted! Arrrgghh!"
I've worked in a few scenarios where we had a BFF web API that talked to a back-end API on behalf of the UI.
I see some benefits to having a web API tailored to the needs of a UI, but there were a few problems that seemed to repeat: 1/18
- A lot of what was in the BFF was just redundant. We had BFF requests that mapped to similar or identical back-end requests. Sometimes the BFF and the back-end API would share requests in a library so that the same requests could be used from end to end. That seemed pointless. 2/18
- This made the application somewhat harder to follow because most activities involved two HTTP requests - one from the UI to the BFF and one from the BFF to the back-end API. 3/18
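A hypothetical sketch of that redundancy in ASP.NET Core: a BFF endpoint that does nothing but forward the same request to the back-end API (the controller and client names are illustrative).

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/customers")]
public class CustomersController : ControllerBase
{
    // Assumes a named client whose BaseAddress points at the back-end API.
    private readonly HttpClient _backEnd;

    public CustomersController(IHttpClientFactory factory) =>
        _backEnd = factory.CreateClient("BackEndApi");

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        // One UI action becomes two HTTP hops with no transformation between them.
        var response = await _backEnd.GetAsync($"api/customers/{id}");
        var body = await response.Content.ReadAsStringAsync();
        return StatusCode((int)response.StatusCode, body);
    }
}
```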
If documentation matters, then we need to be better at figuring out
- What to document
- When to document
We don't need documentation that illustrates the obvious. The greatest evil is a diagram showing that our web app talks to the database, and then listing details like database columns.
It adds nothing that we can't see from looking at the code, and keeping it in sync with the code creates work with no value.
The same goes for other contracts, like listing the fields in a message. Why? It's right there in the code.
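For instance, a message type already declares its own fields. This hypothetical contract needs no companion document listing them:

```csharp
using System;

// The contract documents itself; a separate field list would only
// duplicate this and drift out of sync with it.
public record CustomerRenamed(Guid CustomerId, string NewName, DateTime RenamedAtUtc);
```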
When I started writing code I knew I wasn't good at it. But I didn't know what good was. I asked around. I looked for it.
What I found was resistance to the very idea that we can be good at it. It surprised me then and I still can't get my head around it.
Just about everyone I worked with learned coding on the job starting with VBA and FrontPage, just like me. We got by. We automated tasks for people and they thought we were amazing. To borrow from Amadeus, everyone liked us. We liked ourselves.
Everyone knew that there was an outside world where people knew more than we did. They wrote software that didn't run in Excel. Some of them were in departments in our company.
We must be comfortable with uncertainty. We can have some vague idea of how we'll implement something, but the way we find out exactly what it's going to look like is by doing it. Until then we have uncertainty. 1/
An anti-pattern I see is that we try too hard to get rid of that uncertainty.
We try to plan all of the tasks for all of our stories at the beginning of the sprint.
We make "stories" so small that they do nothing. 2/
At best this is waste. Whether reality matches the plan or not, we do the work and then do the next thing and the next. Nobody cares that the plan was inaccurate. 3/
Hexagonal architecture makes less sense if we see our entire application in terms of CRUD, where the application gets some entity, modifies it, and then says, "Here's my updated version of that entity." 1/
If that's what we're doing then the database *is* the application and the rest of the architecture will feel useless and redundant. We're doing the same thing we always did but trying to make it look like something else. 2/
We should stop thinking about applications in terms of CRUD operations. For example, renaming a customer shouldn't mean
- Get the customer entity from the database
- Pass it to the UI
- Let the user edit the customer and submit it
- Save the entity with the new name to the database. 3/
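A hypothetical sketch of the task-based alternative: the UI sends a command that names the operation, and the domain model owns the rule.

```csharp
using System;

// Instead of "save this edited entity," the request says what the user did.
public record RenameCustomer(Guid CustomerId, string NewName);

public class CustomerService
{
    private readonly ICustomerRepository _customers;

    public CustomerService(ICustomerRepository customers) => _customers = customers;

    public void Handle(RenameCustomer command)
    {
        var customer = _customers.Get(command.CustomerId);
        customer.Rename(command.NewName); // the entity enforces its own rules
        _customers.Save(customer);
    }
}

public interface ICustomerRepository
{
    Customer Get(Guid id);
    void Save(Customer customer);
}

public class Customer
{
    public Guid Id { get; }
    public string Name { get; private set; }

    public Customer(Guid id, string name) => (Id, Name) = (id, name);

    public void Rename(string newName)
    {
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("A name is required.", nameof(newName));
        Name = newName;
    }
}
```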
I'm experimenting with some ways to put guardrails on a new .NET app to stave off entropy and chaos.
Step 1: Get the dependencies pointing the right way. There are projects for data and HTTP clients. The Services project (logic) depended on them both, so I'm reversing that. 1/
I can't prevent future people (or myself) from putting low-level details where they don't belong, but this will make it harder. That's what I mean by "guardrail." I can't force anything, but I can guide it in the right direction. 2/
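A minimal sketch of the reversed dependency, with hypothetical names: the interface lives in the Services (logic) project, and the Data project references Services to implement it, not the other way around.

```csharp
using System;

// In the Services project: the logic owns the abstraction.
namespace MyApp.Services
{
    public record Customer(Guid Id, string Name);

    public interface ICustomerRepository
    {
        Customer Get(Guid id);
    }
}

// In the Data project, which now depends on Services:
namespace MyApp.Data
{
    using MyApp.Services;

    public class SqlCustomerRepository : ICustomerRepository
    {
        public Customer Get(Guid id)
        {
            // Real database access goes here; stubbed for the sketch.
            return new Customer(id, "placeholder");
        }
    }
}
```

With the interface next to the logic, only the composition root needs a reference to the Data project.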
Step 2: Now that I'm defining repository interfaces in my higher-level code, I'm splitting them into two interfaces, one for reading and one for writing. Again, it can't prevent anyone from doing anything, but hopefully it will help support CQRS. 3/
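A hypothetical sketch of that split: query code depends only on the reader, command code only on the writer, and one class can still implement both.

```csharp
using System;

public record Customer(Guid Id, string Name);

// Readers and writers each take a dependency on only the side they need.
public interface ICustomerReader
{
    Customer Get(Guid id);
}

public interface ICustomerWriter
{
    void Save(Customer customer);
}

// A single implementation can serve both interfaces during the transition.
public class SqlCustomerRepository : ICustomerReader, ICustomerWriter
{
    public Customer Get(Guid id) => new Customer(id, "placeholder"); // stubbed
    public void Save(Customer customer) { /* write to the database */ }
}
```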