Dependency Injection has a fancy name that makes some developers uncomfortable, but it's really just about making the code easier to test. Basically, everything the class depends upon has to be passed in during construction.
This can be done manually, but becomes a chore super fast - and will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
Frameworks typically just streamline this process, and offer some flexibility at times - for example, when you happen to have different implementations of the same thing. I find it funny that people rally against those frameworks so often.
To make things more concrete, let's say you have a method that gets the current date and has some logic around it (for example, it checks whether today is the end of the month to do something). In Java, you could do `Instant.now()` to get the date.
This will be a pain in the ass to test: you might need to cover, for example, a date when there's a DST change, or February 28th in a leap year, etc. With DI you can instead inject an `InstantSource` into your code, and in your tests you can just mock that dependency to get a predictable date in each test.
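A minimal sketch of what that can look like (Java 17+; the class and method names here are made up for illustration):

    import java.time.InstantSource;
    import java.time.LocalDate;
    import java.time.ZoneOffset;

    // The end-of-month check receives its clock as a constructor dependency
    class MonthEndJob {
        private final InstantSource clock;

        MonthEndJob(InstantSource clock) {
            this.clock = clock;
        }

        boolean isEndOfMonth() {
            LocalDate today = LocalDate.ofInstant(clock.instant(), ZoneOffset.UTC);
            return today.getDayOfMonth() == today.lengthOfMonth();
        }
    }

    // Production wiring: new MonthEndJob(InstantSource.system())
    // In a test:         new MonthEndJob(InstantSource.fixed(Instant.parse("2024-02-29T12:00:00Z")))

The point isn't any framework - it's that the date source is a parameter instead of a hard-coded call.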
You're talking from the perspective of Java, which has been designed from the ground up with dependency injection in mind.
Dependency injection is, at heart, the inversion of control pattern, which is something like oxygen to a Java dev.
In other languages, these issues are solved differently. From my perspective as someone whose day job has been roughly 60+% Java for over 10 years now... I think I agree with the central message of the article. Unless you're currently in the Java world, you're probably better off without it.
These patterns work and will, on paper, reduce complexity - but it comes at the cost of massively increased mental overhead if you actually need to address a bug that touches more than a minuscule amount of code.
/Edit: and I'd like to mention that the article actually only dislikes the frameworks, not the pattern itself
DI wasn't around when Java (or .Net) came out. DI is a fairly new thing too, relatively speaking, like after ORMs and anonymous methods. Like after Java 7 I think. Or maybe 8? Not a Java person myself.
I know in .net, it was only really the switch to .net core where it became an integral part of the frameworks. In MVC 5 you had to add a third party DI container.
So how can it have been designed for it from the ground up?
In fact, if you're saying 10 years, that's roughly when DI became popular.
You're wrong about other languages not needing it. Yes, statically typed languages need it for unit testing, but you don't seem to realize that from a practical perspective DI solves a lot of the problems around request lifetimes too. And in an architectural context, it solves much of the problem of how to stop bad developers from overly coupling their services.
Before DI, people often used static methods, so you'd have a real mess of heavily interdependent services. It can still happen now, but it's nowhere near as bad as the mess of programming in the 2000s.
DI helped reduce coupling and spaghetti code.
DI also forces you to 'declare' your dependencies, so it's easy to see when a class has got out of control.
Edit: I could keep on adding, but one final thing. DI is actually quite cumbersome to use in Java and .Net, and easier in Go, because Go has implicit interfaces. Older languages don't, and having them would really help reduce boilerplate DI code.
A lot of interfaces in Java/C# only exist to allow DI to work, and are otherwise a pointless waste of time/code.
> Dependency Injection has a fancy name that makes some developers uncomfortable, but it's really just all about making the code easier to test.
It's not just a fancy name. I'd argue it's a confusing name. The "$25 name for a 5c concept" quote is brilliant. The name makes it sound like it's some super complicated thing that is difficult to learn, which makes it harder to understand. I would say "dynamic programming" suffers the same problem. Maybe "monads".
How about we rename it? "Generic dependencies" or "Non hard-coded dependencies" or even "dependency parameters"?
It's not just about testing. When any code constructs its own object, and that object is actually an abstraction of which we have many implementations, that code becomes stupidly inflexible.
For instance, some code which prints stuff, but doesn't take the output stream as a parameter, instead hard-coding to a standard output stream.
That leaves fewer options for testing also, as a secondary problem.
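A rough sketch of the difference (hypothetical names, in Java):

    import java.io.PrintStream;

    // Inflexible: the destination is hard-coded to standard output
    class HardcodedReporter {
        void report(String message) {
            System.out.println("REPORT: " + message);
        }
    }

    // Flexible: the caller decides where output goes (console, file, test buffer...)
    class Reporter {
        private final PrintStream out;

        Reporter(PrintStream out) {
            this.out = out;
        }

        void report(String message) {
            out.println("REPORT: " + message);
        }
    }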
Or don’t use a DI framework, and DI just becomes a fancy name for "creating instances" and "passing parameters". That’s what we do in Go and there’s no way I would EVER use a DI framework again. I’d rather be unemployed than work with Spring.
> DI just becomes a fancy name for "creating instances" and "passing parameters".
I literally addressed this in the post you are replying to. I think your problem is reading comprehension, not dependency injection or framework usage.
Then again, understanding Design Patterns will be difficult if you can't even parse your way through a handful of paragraphs written in plain English.
I'll bother to repeat myself:
>> This can be done manually, but becomes a chore super fast - and will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
>> Frameworks typically just streamline this process, and offer some flexibility at times - for example, when you happen to have different implementations of the same thing. I find it funny that people rally against those frameworks so often.
There. Maybe stripping it of the surrounding context will make it easier.
> I’d rather be unemployed than work with Spring.
That's really a you problem.
No need to be aggressive, I just disagree that DI frameworks streamline anything, they just make things more opaque and hard to trace.
> will be a very annoying thing to maintain as soon as you change the constructor of something widely used in your project to accept a new parameter.
That, for example, is just not true. You add a new parameter to inject and it breaks the injection points? Yeah, that's expected, and desirable. I want to know where my changes have any impact; that's the point of typing things.
A lot of things deemed "more maintainable" really aren’t. Never has a DI framework made anything simpler.
> That for example is just not true. You add a new parameter to inject and it breaks the injection points?
Perhaps you never worked in a sufficiently large codebase?
It is very annoying when you need to add a dependency and suddenly you have to touch 50+ injection points because that thing is widely used. Been there, done that, and by God I wished I had Dagger or Spring or anything really to lend me a hand.
DI frameworks are a tool like any other. When properly used in the correct context they can be helpful.
I've written a couple large apps using Uber's FX and it was great. The reason why it worked so well was that it forced me to organize my code in such a way as to make it super easy to test. It also had a few features around startup/shutdown and the concept of "services" and "logging" that are extremely convenient in an app that runs from systemd.
All of the complexity boils down to the fact that you have to remember to register your services before you can use them. If you forget, the stack trace is pretty hard to debug. Given that you're already deep into FX, it becomes pretty natural to remember this.
That said, I'd say that if you don't care about unit tests or you are good enough about always writing code that already takes things in constructors, you probably don't need this.
Another downside of DI is how it breaks code navigation in IDEs. Without DI, I can easily navigate from an instance to where it's constructed, but with DI this becomes detached. This variable implements Foo, but which implementation is it?
Yeah, debuggability and greppability are terrible.
DI seems like some sort of job security by obscurity.
If your IDE starts to decide how you code and what kind of architecture/design you can use, I kind of feel like the IDE is becoming something more than just an IDE and probably you should try to find something else. But I mainly program in vim/nvim so maybe it's just par for the course with IDEs and I don't know what I'm talking about.
In IntelliJ at least this is a non-issue.
How?
The IDE understands the DI frameworks and can show you which class or classes will be injected.
Not an issue in C#
I haven't really done any c# for 5+ years. What has changed?
I remember trying to effectively reverse-engineer a codebase (code available but nobody knew how it worked) with a lot of DI and it was fairly painful.
Maybe it was possible back then and I just didn't know how ¯\_(ツ)_/¯
If the rules of the dependency injection framework are well understood, the IDE can build a model in the background and make it navigable. I can't speak for C#, but Spring is navigable in IntelliJ. It will tell you which implementation is used, or if one is missing.
In a Spring application there are a lot of (effective) singletons, the "which implementation of the variable that implements Foo is it" becomes also less of a question.
In any case, we use Spring on a daily basis, and what you describe is not a real issue for us.
Does ctrl+click take you to the main implementation directly? If not, it is reaaaaaallly annoying.
In every language/IDE I've ever used, ctrl-click would take you to the interface definition, then you have a second "Show implementations" step that lists the implementations (which is usually really slow), and finally you have to select the right implementation from the list.
It's technically a flaw of using generic interfaces, rather than DI. But the latter basically always implies the former.
I think so.
Also, I think it's important to differentiate between two things: dependency injection, and programming against interfaces.
Interfaces are good, and there was a while where infant DI and mocking frameworks didn't work without them, so that folks created an interface for every class and only ever used the interface in the dependent classes. But the need for interfaces has been heavily misunderstood and overstated. Most dependencies can just be classes, and that means you can in fact click right into the implementation, not because the IDE understands DI, but because it understands the language (Java).
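i.e. something like this (names invented) is perfectly fine, and "go to definition" lands straight on the class:

    // The dependency is a plain concrete class...
    class PriceCalculator {
        int priceInCents(int quantity) {
            return quantity * 250;
        }
    }

    // ...injected through the constructor, with no interface in sight.
    class OrderService {
        private final PriceCalculator calculator;

        OrderService(PriceCalculator calculator) {
            this.calculator = calculator;
        }

        int totalFor(int quantity) {
            return calculator.priceInCents(quantity);
        }
    }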
Don't hate DI for the gotten-out-of-control "programming against interfaces".
This is the point: you need an IDE with advanced features, while a text editor should be all you need to understand what the code is doing.
Why, as a professional, would you not use professional tooling? Not just for DI - there are many benefits to using an IDE. If you want to hone your skills in your own time by using a text editor, why not. But as a professional, refusing to use an IDE is a disservice to your team. (But hey, everyone's entitled to their opinion!)
Edit: upon rereading I realize your point was about reading code, not writing it, so I guess that could be a different use case...
Definitely still an issue in C#. C# devs are just comfortable with the way it is because they don't know better and are held hostage. Everything in C# world after a certain size will involve IOC/DI and the entire ecosystem of frameworks that has co-evolved with it.
The issues are still there. You can't just "go to definition" of the class being injected into yours, even if there is only one. You get the Interface you expect (because hey you have to depend on Interfaces because of something something unit-testing), and then see what implements that interface. And no, it will not just point to your single implementation, it'll find the test implementation too.
But where that "thing" gets instantiated is still a mystery and depends on config-file-configured life cycles, the bootstrapping of your application, whether the dependency gets loaded from a DLL, etc. It's black-box elephants all the way to the start of your application. And all that you see at the start is something vague like: var myApp = MyDIFramework.getInstance(MyAppClass); Your constructors, and where they get called from, are in a never-ending abyss of thick and unreadable framework code that is miles away from your actual app. Sacrificed at the altar of job creation, unit testing and evangelists' talk/resume padding.
The system is composed of classes which are nicely encapsulated, independent and obey Liskov substitution and all that. You can connect them in different arrangements and they play along nicely.
But then some classes which use other classes hard code those classes in their constructor. They then work with those specific hard-coded classes. It's like if someone crazy-glued some of our Lego blocks together.
We recognize this problem and allow the sister objects to be configurable.
Then some opinionated numbnut comes along and says, "hey, we should call this simple correction 'dependency injection'". And somehow, everyone listens.
Strangely I seem to have built all of my software without dependency injection. I must be a terrible programmer.
>Strangely I seem to have built all of my software without dependency injection
I'm going to guess that you've most likely used dependency injection without even thinking about it. It's one of those things you naturally do because it makes sense, even if you don't know it has an actual name, frameworks, and all that other stuff that often only makes it more confusing.
It just means testing can become a lot harder. I wouldn't say you are necessarily a bad programmer because you don't write a gazillion tests.
I would say you are a bad programmer for implying that DI is useless though.
You must not work in an object-oriented language, then? (Which is very possible.) Or did you mean that you have never built software with a dependency injection framework?
Can you expand on that?
Yeah, I once got a job, and when they later found out I'd never done dependency injection they said "we'd never have hired you if we knew that." Mind you, that same manager also believed no code should ever be written unless it has a test written first - real code is only ever an outcome of writing something to match what a test expects - and poof, all the fun and creativity went out of programming there in an instant.
My philosophy of programming is "there's no right way to do it, do what works for you and makes you happy, and if someone tells you you're doing it wrong, pay no attention - they're a bully and a fool".
This isn't about bullying someone into writing tests, it is about creating value that lasts over an extended period of time.
The value of tests doesn't generally come from when you first write them. It comes from when you're working on a codebase written by someone else (who has long ago quit, or been fired).
It helps me understand and be able to refactor their code. It gives me the confidence to routinely ship something to production and know that it won't break.
That only works if what you're doing actually works - not just in terms of producing code that works once, but in terms of producing code that's maintainable. I don't know for sure that you're a "terrible programmer", but you're saying all the things that the terrible programmers I've worked with tended to say.
I think I can understand the boat you're in, bro. Both of the things that you don't do, I also didn't do for quite a long time, and I didn't particularly see the value in doing them (once upon a time); but I've been on a journey to make them part of how I code, and I'm pretty sure that I'm a better coder now than I was back then.
Writing tests for nearly all my code, in particular, is these days the only way I roll - and as for TDD (i.e. write the test and let it fail first, then write the actual code and make the test pass), I do it quite often, and I guarantee you that - contrary to your opinion - it makes coding a whole new kind of fun and creative. Dependency injection I still consider myself less of a ninja at, but I've done it (and seen it done) enough times now that I get it and I see the value in it.
I think it's a bit stupid for an employer to say "we'd never have hired you if we knew you had no experience in X" (sure, this doesn't apply to all skills, but I'd say it applies to quite a few). If you're worth hiring, then you'll pick up X within a few months on the job. I'm grateful to several past employers of mine, for showing me the ropes of TDD and DI (among many other things).
Anyway, I'm not saying that the above things are "the (only) right way to do it", and please don't take my above ramblings as making a judgement on your coding prowess. I agree, do what works for you. I'm just saying that there's always more to learn, and that you should always strive to be open-minded to new skills and new approaches.
What is there to be a "ninja" about when it comes to DI? As the article explains in the beginning it just means that you initialize and pass something into whatever depends on it instead of initializing it inside that thing.
It's too complicated of a term for what it is because we generally don't say we inject arguments into a function when we call a function.
But maybe you mean patterns building on that, e.g. repository/adapter patterns.
Which is as ridiculous as a taxi driver not getting the job because they have never taken a passenger with a trombone.
More like a carpenter not getting the job because he doesn't know how to frame a house.
Every so often a developer challenges the status quo.
Why should we do it like this? Why is the D in SOLID so important when it causes pain?
This is lack of experience showing.
DI is absolutely not needed for small projects, but once you start building out larger projects the reason quickly becomes apparent.
Containers...
- Create proxies wrapping the objects - if you don't centralise construction management, this becomes difficult.
- Cross-cutting concerns will be missed and need to be wired everywhere manually (sketched below).
- Manage objects' life cycles, not just construction.
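To illustrate the cross-cutting point with a hand-rolled sketch (all names hypothetical): this is roughly what a container's proxies give you for free, and what you otherwise have to repeat by hand.

    interface PaymentService {
        void charge(String account, long amountInCents);
    }

    // Every cross-cutting concern becomes a wrapper you must remember
    // to apply at every single construction site.
    class LoggingPaymentService implements PaymentService {
        private final PaymentService delegate;

        LoggingPaymentService(PaymentService delegate) {
            this.delegate = delegate;
        }

        @Override
        public void charge(String account, long amountInCents) {
            System.out.println("charge(" + account + ", " + amountInCents + ")");
            delegate.charge(account, amountInCents);
        }
    }

    // Wiring, repeated wherever a PaymentService gets built:
    // PaymentService payments = new LoggingPaymentService(new RealPaymentService());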
It also ensures you code to the interface. Concrete classes are bad - just watch what happens when a teammate decides they want to change your implementation to suit their own use cases, rather than write a new implementation of the interface. Multiply that by 10x when it's deep in a stack.
Once you realise the DI pain is for managing this (and not just allowing you to swap implementations, as is often the poster boy), automating areas prone to manual bugs, and enforcing good practices, the reasons for using it should hopefully be obvious. :)
The D in SOLID is for dependency INVERSION not injection.
Most dependency injection that I see in the wild completely misses this distinction. Inversion can promote good engineering practices, injection can be used to help with the inversion, but you don’t need to use it.
Agreed, and I conflated the two since I've been describing SOLID in ways other devs in my team would understand for years.
Liskov substitution, for example, is an overkill way of saying: don't create an implementation that throws an UnsupportedOperationException; instead, break the interfaces up (Interface Segregation, the "I" in SOLID) and use the interface you need.
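Roughly this, with hypothetical interfaces purely for illustration:

    // One fat interface: a read-only implementation is forced to throw
    // UnsupportedOperationException from put() - that's the Liskov smell.
    interface Store {
        String get(String key);
        void put(String key, String value);
    }

    // Segregated interfaces: depend only on what you actually need.
    interface ReadableStore {
        String get(String key);
    }

    interface WritableStore {
        void put(String key, String value);
    }

    class ReportGenerator {
        private final ReadableStore store;   // never writes, so never needs put()

        ReportGenerator(ReadableStore store) {
            this.store = store;
        }

        String build() {
            return "Report for " + store.get("customer");
        }
    }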
Quoting the theory to junior devs instead just makes their eyes roll :D
Honestly, inversion kinda sucks because everybody does it wrong. Inversion only makes sense if you also create adapters, and it only makes sense to create adapters if you want to abstract away some code you don't own. If you own all the code (i.e. layered code), dependency inversion is nonsensical. Dependency injection is great in this case, but not inversion.
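For what it's worth, the one shape where inversion does earn its keep, as a rough sketch (the vendor client and all names here are made up):

    // Stand-in for a vendor SDK class we don't own (hypothetical).
    class AcmeMailClient {
        void deliver(String recipient, String payload) {
            System.out.println("delivering to " + recipient);
        }
    }

    // The port is owned by us: the rest of the codebase depends on this,
    // not on the vendor SDK.
    interface EmailSender {
        void send(String to, String subject, String body);
    }

    // The adapter wraps the code we don't own.
    class AcmeMailAdapter implements EmailSender {
        private final AcmeMailClient client;

        AcmeMailAdapter(AcmeMailClient client) {
            this.client = client;
        }

        @Override
        public void send(String to, String subject, String body) {
            client.deliver(to, subject + "\n\n" + body);
        }
    }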
Agreed. DI containers / injectors are so fundamental to writing software that will be testable, and they make it much easier to review code.
It's not just unneeded for small projects - it is actively harmful.
It's also actively unhelpful for large projects which have relatively simple logic but complex interfaces with other services (usually databases).
DI multiplies the amount of code you need - a high cost for which there must be a benefit. It only pays off in proportion to the ratio of complexity of domain logic to integration logic.
Once you have enough experience on a variety of different projects, you should hopefully start to pick up on the trade-offs inherent in using it, and see when it is a good idea and when it has a net negative cost.
DI is a very religious concept: people either hate it or love it.
I myself am in the dislike camp. I have found that mocking modules for tests (like you can with NodeJS testing frameworks) gives most of the benefits with way less development hell. However, you do need to be careful with the module boundaries (basically, structure them as you would with DI), otherwise you can end up with a very messy testing system.
The value of DI is also directly proportional to the size of the service being tested; DI went into decline as things became more micro-servicey, with network-enforced module boundaries. People just mock external services in these kinds of codebases instead of internal modules, which makes the boundaries easier.
I can see strict DI still being useful in large monolith codebases worked on by a lot of hands, if only to force people to structure their modules properly.
It always blew my mind that "dependency injection" is this big brouhaha and warrants making frameworks, when dynamic vars in Lisp basically accomplish the same task without any fanfare or glory.
There is absolutely fanfare and glory, even more than about dependency injection.
And "dynamic scope" is also a lofty-sounding term, on par with "dependency injection".
Because "big brouhaha" is what people really want.
They don't want simple and easy-to-read code; they want to seem smart.
[flagged]
Because in statically typed languages they require a bit more scaffolding to get working.
And it is a bit magic, and then when you need something a bit odd, it suddenly becomes fiddly to get working.
An example is when you need a delayed job server to have the user context of different users depending on who triggered the job.
They're pretty good in 95% of cases when you understand them, but a bit confusing magic when you don't.
> when you need a delayed job server to have the user context of different users depending who triggered the job
I feel this is just a facet of the same confusion that leads to creating beautiful declarative systems, which end up being used purely imperatively because it's the only way to use them to do something useful in the real world; or, the "config file format lifecycle" phenomenon, where config files naturally tend to become ugly, half-assed Turing-complete programming languages.
People design systems too simple and constrained for the job, then notice too late and have to hack around it, and then you get stuff like this.
Yeah, I get where you're coming from.
For the standard web page lifecycle it's fine, but for instances like this it really does become fiddly.
But often it's still possible - it's just that an ideological stance the framework team has taken leads to poor documentation around it.
The asp.net core team have some weird hills they die on, and some incredibly poor designs that stem from an over-adherence to trendy patterns. It often feels like they don't understand why those patterns exist.
This results in them hardly documenting how to use the DI outside of their 'ideal' flow.
They also try to push devs to use DI for injecting config, which no other language does and is just unnecessarily complicated. It's ended up with a system no one really understands, whereas the old System.Configuration, while clunky, at least automatically rebooted the app when you edited the config - which is the 95% use case most devs would want.
Love this article. Spring is a cancer in Java; it's one of the reasons the language isn't fashionable.
It's still miles better than what was there before in the Java ecosystem.
Cancer? It's poisoned blood and slayer of puppies, hopes and dreams. It's the lord of hell.
As a (mainly) Python dev, I'm aware that there are DI frameworks out there, but personally I haven't to date used any of them.
My favourite little hack for simple framework-less DI in Python these days looks something like this:
    import time
    from unittest.mock import MagicMock

    # The code that we want to call
    def do_foo(sleep_func=None):
        _sleep_func = sleep_func if sleep_func is not None else time.sleep
        for _ in range(10):
            _sleep_func(1)

    # Calling it in non-test code
    # (we want it to actually take 10 seconds to run)
    def main():
        do_foo()

    # Calling it in test code
    # (we want it to take mere milliseconds to run, but nevertheless we
    # want to test that it sleeps 10 times!)
    def test_do_foo():
        mock_sleep_func = MagicMock()
        do_foo(sleep_func=mock_sleep_func)
        assert mock_sleep_func.call_count == 10
Mark Seemann has written extensively about the subject.
He's a tremendous source of knowledge in that regard.
https://blog.ploeh.dk/2017/01/27/from-dependency-injection-t...
His AutoFixture NuGet package for C# takes away so much pain from unit test maintenance. It does have a learning curve.
I highly agree. I especially believe that manual DI should always be the starting point. Eventually one can evaluate if there really is a need for a framework. It's already dangerous if I have to change the code significantly just to satisfy the framework.
Isn't that true for every framework/library out there to some extent?
You probably don't need functional programming. Here is how to do it with a for-loop.
You don't see many articles written like that, because it would be kind of obvious that the author hasn't bothered to understand the approach that he is criticizing.
Yet when it comes to OO concepts people from "superior" platforms like Go or the FP crowd just cannot let go of airing their ignorance.
Just leave OO alone unless you are genuinely interested in the approach.
DI is fine if it is fully typed, objects are explicitly instantiated by the user, and the DI only does thread-safe dependency resolution.
> But that reflection-driven magic is also where the pain starts. As your graph grows, it gets harder to tell which constructor feeds which one. Some constructors take one parameter, some take three. There's no single place you can glance at to understand the wiring. It's all figured out inside the container at runtime.
That's the whole point. Dependency inversion allows you to write parts of the code in isolation, without worrying about all the dependencies of each component you create and what creates what where.
If your code is small enough that you can keep all the dependencies in your head at the same time and it doesn't slow you down much to pass them all around all the time - DI isn't worth it.
If it becomes an issue - DI starts to shine. There are other solutions as well, obviously (mostly in the form of object-orientified global variables - for example, you keep everything in a GameWorld object and pass it everywhere).
Common sense is in short supply these days. It's a shame we need blog posts like these to outline how much you lose when you go with the "magic" approach. Devs just seem to be allergic to simple but verbose code.
I agree. I had to do what the article says in Node for a project, for $reasons, but secretly I loved not using a framework and having the construction explicit. I've also seen bugs because tests may set up DI differently to prod.
Looking forward to someone writing the Spring equivalent of this on the JVM.
Why? It would be nearly identical, just changing the names of the frameworks.
[flagged]
Or maybe they just don't have oversized egos.
The key to using a framework effectively, whether it's Spring in Java or SAP for your business, is to accept that the framework knows better than you - especially when it objectively does not - and when there's a difference between how you or your business think of things, vs. how the framework frames them, it's your thoughts and your business that must change. Otherwise, you're fighting the framework, and that's worse than just not using it.
Language and/or library issue. DI helps code be easier to follow, more decoupled, and readable with less boilerplate, AND makes testing much easier.
If you are on node/ts look at effect-ts.
Can you show a project that effectively uses effect-ts? The docs are a tsunami of information that looks like it's trying to make a whole new language out of TS. If someone else had to review my code, I doubt they'd know what was going on.
One thing that can motivate a dependency container is a complex chain of constructors.
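For example, something like this (invented names), where the manual wiring has to be threaded through layer by layer:

    // Hypothetical chain: every layer exists only to be handed to the next one up.
    class Config {}
    class ConnectionPool { ConnectionPool(Config c) {} }
    class OrderRepository { OrderRepository(ConnectionPool p) {} }
    class PricingService { PricingService(OrderRepository r) {} }
    class OrderService { OrderService(OrderRepository r, PricingService p) {} }
    class OrderHandler { OrderHandler(OrderService s) {} }

    class Wiring {
        public static void main(String[] args) {
            Config config = new Config();
            ConnectionPool pool = new ConnectionPool(config);
            OrderRepository repository = new OrderRepository(pool);
            PricingService pricing = new PricingService(repository);
            OrderService orders = new OrderService(repository, pricing);
            OrderHandler handler = new OrderHandler(orders);
            // Give OrderRepository one more constructor parameter and this whole
            // block (and every other wiring site in the codebase) has to change.
        }
    }

Whether a container is worth it then mostly depends on how often that chain changes.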
IoC is nice (or DI as a concept in particular), but DI frameworks/libraries sometimes are a mess.
I've had my fair share of Java and Spring Boot projects and it breaks in all sorts of stupid ways there, even things like the same exact code and runtime environment working in a container that's built locally, but not working when the "same" container is built on a CI server: https://blog.kronis.dev/blog/it-works-on-my-docker
Literally a case where Spring Boot DI just throws a hissy fit that you cannot easily track down. I had to mess around with the @Lazy annotation (despite the configuration to permit that being explicitly turned on too) in over 100 places to resolve the issue. Plus, when you try to inject a list of all classes that implement an interface with @Lazy, it doesn't seem like their order is guaranteed either, so your DefaultValidator needs to be tacked onto that list manually at the end.
Sorry about the Java/Spring rant.
It very much feels like the proper place for most DI is at compile time (like Dagger does for Java, which seems closer to wire), not at runtime - or just keep IoC without a DI framework/library and have your code look a bit more like this:
    @Override
    public void run(final BackendConfiguration configuration,
                    final Environment environment) throws IOException, TimeoutException {
        // Initialize our data stores
        mariaDBManager = new MariaDBManager(configuration, environment);
        redisManager = new RedisManager(configuration);
        rabbitMQManager = new RabbitMQManager(configuration);
        // Initialize our generic services
        keyValueService = new KeyValueService(redisManager);
        sessionService = new SessionService(keyValueService, configuration);
        queueService = new QueueService(rabbitMQManager);
        // Initialize services needed by resources
        accountService = new AccountService(mariaDBManager);
        accountBalanceService = new AccountBalanceService(mariaDBManager);
        auctionService = new AuctionService(mariaDBManager);
        auctionLotService = new AuctionLotService(mariaDBManager);
        auctionLotBidService = new AuctionLotBidService(mariaDBManager);
        // Initialize background processes based on feature configuration
        if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isProcessBids()) {
            bidListener = new BidListener(queueService, auctionLotBidService, auctionLotService, accountBalanceService);
            try {
                bidListener.start();
                logger.info("BidListener started");
            } catch (IOException e) {
                logger.error("Error starting BidListener: {}", e.getMessage(), e);
            }
        }
        // Register resources based on feature configuration
        if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isAccounts()) {
            environment.jersey().register(new AccountResource(accountService, accountBalanceService, sessionService, configuration));
        }
        if (configuration.getApplicationConfiguration().getFeaturesConfiguration().isBids()) {
            environment.jersey().register(new AuctionResource(
                auctionService, auctionLotService, auctionLotBidService, sessionService, queueService));
        }
        ...
    }
Just a snippet of code from a Java Dropwizard example project, and not all of its contents either, but it should show that it's nothing impossibly difficult. The same principles apply to other languages and tech stacks, plus the above is unequivocally easier to put a breakpoint in and debug, vs. some dynamic annotation or convention-based mess.

Overall, I agree with the article, even across multiple languages.
DI frameworks add confusion and use unnecessary memory up front.
Don't hate a paradigm because you only experienced one bad implementation of it.
In IntelliJ, with the Spring Framework, you get thorough tooling: you can inspect beans and their dependencies, you even get a visual bean graph, you can write mocks and test dependencies without even needing interfaces anymore, and if a dependency is missing, you will receive an IDE warning before runtime.
I do not understand why people are so excited about a language and its frameworks where the wheel is still actively being reinvented in a worse way.