I’ve created the image below from two stills from the first video above; it shows two different ends of the abstraction scale.
The top architecture is one I’m increasingly seeing become more and more popular with devs online, while the second is often frowned upon as it’s not abstracted or ‘decoupled’ enough. The reality is that in many cases the second architecture can meet a client’s requirements just as well as, if not better than, the top one.
Do we abstract too much as developers?
Personally, I see a huge ‘abstraction fetish’ in the .NET world: lots of developers seem to want to abstract everything, including single-implementation internal classes and DTOs, to chase decoupling or reuse, or to handle ‘just in case’ or ‘what happens if’ scenarios.
Of course abstraction isn’t bad, but if you’re going to add all these layers of abstraction and indirection, know that they come at a cost, and it is not a one-time cost. After we run ‘Extract Interface’ in Visual Studio, we have paid only the upfront cost.
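To make this concrete, here’s a sketch of the two ends of the scale. It’s written in Java rather than C# purely so it can stand alone here; the shape is identical in .NET, and all the names (`OrderRepository`, `SqlOrderRepository`, `OrderService`) are hypothetical stand-ins, not anything from the post’s videos.

```java
// Hypothetical example: the same feature at two ends of the abstraction scale.

// --- Abstracted version: interface + single implementation + injection ---
interface OrderRepository {
    double totalFor(String customerId);
}

class SqlOrderRepository implements OrderRepository {
    // The only implementation that will ever exist in this app.
    public double totalFor(String customerId) {
        return 42.0; // stand-in for a real database query
    }
}

class OrderService {
    private final OrderRepository repo;

    OrderService(OrderRepository repo) { // dependency injected via constructor
        this.repo = repo;
    }

    double total(String customerId) {
        return repo.totalFor(customerId);
    }
}

// --- Direct version: just use the concrete class ---
class DirectOrderService {
    private final SqlOrderRepository repo = new SqlOrderRepository();

    double total(String customerId) {
        return repo.totalFor(customerId);
    }
}

public class Demo {
    public static void main(String[] args) {
        double abstracted = new OrderService(new SqlOrderRepository()).total("c1");
        double direct = new DirectOrderService().total("c1");
        // Both versions meet the same requirement; the first just has
        // an extra interface and wiring to read, navigate, and maintain.
        System.out.println(abstracted == direct);
    }
}
```

Neither version is wrong. The point is that the top one carries two extra files and an injection step for a class with exactly one implementation, and that cost recurs every time someone reads or changes this code.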
Ongoing cost of abstraction
Every layer added makes the architecture harder to mentally understand, reason about and communicate. Additionally, debugging can be harder, PRs can be larger, tracing bugs back through source control history can be harder (as more files are included in each change set), onboarding new developers can take longer, and so on.
Consider adding an abstraction when it solves a problem you actually have in your specific app, rather than adding one because you read about some flavour-of-the-month pattern online which ‘everyone else is using’.
IMHO it’s better to keep things simple and favour designs which provide actual value now (a simpler, easier-to-understand architecture) rather than theoretical value (‘what happens if’) or the potential of value in the future.
How often are you seeing over-architected software with tonnes of layers and abstraction?