I’m a Dev doing mostly C# and JS. I’ve heard some of the complaints on HN mentioning DI and really don’t get it. I honestly feel like it makes my code cleaner and more testable. For example, I don’t have to care where my IFooService comes from or how best to instantiate and dispose, I just request one in the constructor. For a framework to be flexible, you need _some_ plugin capability. Asp.net seems to do pretty good at giving you nice defaults but letting you swap out with DI. All my dependencies are laid out nicely in one place (Startup.cs) and then used throughout. DI gives you that and I’m glad that it’s baked in now as opposed to having to pick a 3rd party. Maybe I’m biased, though. Help me understand why NOT to use DI? Is it just moving people’s cheese or are there other reasons?
If you don't want to write any unit tests (and there are myriad reasons not to: you're prototyping, throwing together a small once-and-done for a client, etc.), then it's just a massive waste of time and gets in the way.
I really feel that you (and Microsoft) have completely lost sight of the reason why DI was ever even necessary and have become blind to just how much extra code it generates. The only reason was to inject mocks, and people hand-wrung for ages over it because the code to do so infects all the other code. Then people started to try and back justify it with IoC, but it's never really been a convincing argument.
It's also something that can be very mysterious if you're not working with it regularly, and when it fails, it fails at run-time, sometimes quite inscrutably.
From my perspective it's just boilerplate magic that if you actually need it, fair enough, but if you don't is a persistent, nagging, pain in the ass.
"The only reason was to inject mocks". Must disagree...the reason for DI is to support the Dependency Inversion Principle : "High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces)." Won't even go into the host of side-benefits that are gained from adhering to the DIP.
In C# the idiomatic way to achieve this is via constructor injection, which is made much easier to manage in ASP.Net with a DI library that works with the framework for you (either the built in one or AutoFac / SimpleInjector, etc.). But you don't have to use a library; many times I'm writing a simple-ish console app and just use good old "Poor Man's DI" where I manually construct my object graphs at startup.
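To make "Poor Man's DI" concrete, here is a minimal sketch of manually constructing an object graph at startup; IGreeter, ConsoleGreeter, and App are hypothetical types invented for illustration:

```csharp
// "Poor Man's DI": the object graph is wired by hand at the entry point.
// IGreeter / ConsoleGreeter / App are hypothetical types for illustration.
public interface IGreeter
{
    string Greet(string name);
}

public class ConsoleGreeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

public class App
{
    private readonly IGreeter _greeter;

    // The dependency arrives through the constructor; App never news it up.
    public App(IGreeter greeter) => _greeter = greeter;

    public string Run() => _greeter.Greet("world");
}

public static class Program
{
    public static void Main()
    {
        // Composition root: the only place that knows the concrete types.
        var app = new App(new ConsoleGreeter());
        System.Console.WriteLine(app.Run());
    }
}
```

No library, no registration, no magic: the entire "container" is one `new` expression at the composition root.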
This is the common misconception/defense of DI that drives me so cRaZy I've jumped to different languages. DI containers do NOTHING, NOTHING to make Dependency Inversion easier, nor does use of a DI container guarantee (as dozens of projects I've worked on prove) that DIP is correctly adhered to.
DIP is achieved precisely when the components (be they packages, projects, classes, types) are organized with the low-level modules depending on the high-level modules rather than vice versa. In English, this usually means the I/O and any heavy framework code is kept out of the program logic.
What DI does do is confuse this matter. Class relationships which are mere IMPLEMENTATION DETAILS of a module are elevated alongside the actual component architecture of the program. The object graph becomes obscured. Program execution order becomes nondeterministic. The call stack is completely ruined, undoing decades of enlightenment since Dijkstra's "Go To Statement Considered Harmful." And as all my experience with the pattern shows, the DI container causes programmers to not even evaluate or understand whether their dependencies are even inverted, because all classes and their relationships just become one amorphous blob floating aimlessly inside the container.
Some fair points but I would disagree on the fundamental definition of the DIP. In my book it absolutely does not mean that low-level modules depend on high-level modules. That's just high coupling turned upside down. You have to have shared abstractions as the "loose coupling" between modules and nothing more, to say you are adhering to the DIP.
Simply, this means that no implementation module, high or low, depends on any other implementation module, ever. It only ever depends on abstractions which are shared between modules.
I don't see how using a DI container can change the program execution order, unless one is misusing it terribly - after all, its sole purpose is to provide the correct concrete implementation of a dependency to an object at the time of its construction, the execution order from the perspective of the program is 100% preserved. Sure, maybe the container itself creates my graph in a non-deterministic way, but why would I care? If my program depends on this that's just bad design imo, no amount of libraries is going to save it :)
And the object graph is not obscured; in fact it is clarified, because you look at a class constructor and can immediately see what its invariants are! And I have yet to come across a DI library that wouldn't immediately halt and catch fire if you introduced a circular dependency chain, so it's literally not possible to have these amorphous blobs (great expression though!) in any proper DI container.
> DIP is achieved precisely when the components (be they packages, projects, classes, types) are organized with the low-level modules depending on the high-level modules rather than vice versa.
+1. This simple idea is the basis of Clean Code, Hexagonal Architecture, Onion Architecture, Haskell, Functional-Core-Imperative-Shell, among other good architecture ideas.
Page 150 of "Clean Code" says this about the DIP: "In essence, the DIP says that our classes should depend upon abstractions, not concrete details". This is very similar to pretty much any canonical definition you can find anywhere else. I'm quoting it here because the person who coined the principle is the same person whose name is on the book, so I am guessing his definition is correct.
If you're not using abstractions, you are not using the DIP. What is being described here with low-level modules depending on high-level modules is categorically not the DIP, and I am starting to wonder if this is perhaps part of the frustration that the poster we're replying to has experienced with DI, since turning your coupling upside down will have the same problems as just high coupling in general, except everything is upside down now and harder to read :)
Hi chunkyfunky. I recently read Clean Code, great book.
I agree with your take on DIP.
We're using Microsoft's ServiceCollection, and with the Scrutor NuGet package we were able to easily decorate an implementation from another package to extend its functionality, adhering to the Open/Closed Principle (OCP).
DI also enables your code to be open for extension but closed for modification.
Scrutor looks great, I must take it for a spin! And that is a great point about DI - being able to extend code/behaviour by injecting a different implementation of a dependency
I'm not even a C# coder and I can't understand the complaints about this. If you want to code fast and fancy free, use Python or PHP. If you want to use enterprise patterns that will make your code clean, testable and nicely modular, then go for something like C# or Java. Why even use C# if you can't be bothered with DI because... too many lines? mindblown
Simply because not only does it not have to be like that, for years we weren't forced to use this stuff if we didn't want to.
I like C#, no, I adore C#. I have a bad memory for even the simplest method calls and the sheer power the intellisense in a statically typed language gives me to just not care means I can just code with joy.
My personal view on it has always been that someone in MS wrote MVC 1 in response to Rails on the sly. It was fantastic, a breath of fresh air into the MS constant misunderstanding of the web. Especially compared to webforms. For a while, everything was good. Then somehow 7 or 8 years ago the ASP.Net team got obsessed with ramming "best practices" down our throat and everything's gone a bit downhill from there.
You still are not forced to use it. The documentation will heavily emphasize it, but you can still manually new things up to your heart's content if you want.
This isn't even about C# at all, actually. This is about the web application framework ASP.NET Core.
And they don't even have a point in this regard, because nothing is forcing you to use the DI. You wanna make a database connection with ADO.NET and a static connection string in your Controller? Go ahead. You can do that. It's no more effort than it ever was.
You want to grow your app to run it in multiple environments and be confident you don't clobber your configuration during migrations? Use ASP.NET Core's DI.
Every app framework has some "magic convention over configuration". I personally think ASP.NET Core's "magic" is a lot less pervasive than in, say, Django or RoR. When I was first learning ASP.NET Core--coming from WebForms--there was a learning curve. You don't just throw everything into a Web.config XML bag of doom anymore, with a single, static configuration reading tool that reads just that one file. That particular magic has changed. I mean, if you want to do that, you can, just read the file yourself. But it's not wired up by default anymore. And the new way of doing things was easily learned with a couple of afternoons of reading the documentation.
Which you can do, because there is actually documentation, and a lot of it. I think part of the problem might be that people are used to other web app frameworks where the documentation is pretty lacking, so they don't even think to go looking for documentation on ASP.NET Core and think they can just jump in and figure it out. I don't think you could do that going from Django to RoR or vice versa, but people complain when they can't go from ASP.NET WebForms to ASP.NET Core without learning new ways of doing things.
I hope that's a typo because it's literally the opposite of what you're saying in #2.
It is precisely when you use a DI container that the problem is hidden, whether or not it's solved.
In most projects that use a DI container, the problem is usually not solved correctly, but the developers are oblivious and overconfident. They think, as you do, using a DI container means they did it right. False! The DI container only guarantees everything is hoisted up to the object constructor. This is NOT DEPENDENCY INVERSION. The objects can be in the constructor, but the dependency arrow can still point the wrong way. An obvious example of this I've seen a million times is an interface sitting right next to its only implementation. That's NOT Dependency Inversion.
If you don't use a DI container, the problem also may or may not be solved, but it is never obscured, it is completely visible.
It's not a typo - without DI (i.e. using the new operator freely whenever the dev feels like it) the code still has dependencies, just this time hard-coded and invisible until you read the source.
Also, one can use DI without DI containers. Actually I prefer not using DI containers when possible.
The problem is that multiple usages of the same concrete types require changes in multiple places. For small programs, the DI container itself can probably be skipped (i.e. instantiate the object graph by newing up). For larger programs, this is too cumbersome and therefore error-inducing.
Yup, unit tests, and in my mind, that's bad. There's two reasons:
#1 Because the language provides no means to arbitrarily mock fields of a class, an entire design pattern and framework needs to be created just to do so. Why can't the language just have such a feature?
#2 This actually tempts people into building worse designs, because instead of creating pure units that don't have dependencies to begin with, it facilitates the opposite: deeply nested chains of dependencies, because "the DI framework wires it up for me". If you didn't have that luxury, you'd be pushed towards pure units that don't have state dependencies, plus integration tests instead, which in my mind is much better overall.
#1 is just moving the goalpost. You jumped from "why new is bad" to "why can't language mock fields".
#2 What stops you from creating a complex system without having dependencies, thus avoiding DI? Also, I think it would make an interesting case study if you're willing to write it.
> #1 is just moving the goalpost. You jumped from "why new is bad" to "why can't language mock fields".
I don't think I did. Someone says, well if you use new, how are you going to unit test your class?
And one answer from a user of the language would be: Ya, I guess you'd need to change the way you're creating dependencies so they happen in the caller, and then you'd need to change your class so it takes the instance on its constructor, etc.
And this is the original OPs issue with C#, all the ceremony involved.
Another answer would be that the C# language designer could build a language level feature that avoids having to do that and allows mocking new inside a unit so it can be easily tested without shenanigans.
In fact, in Java-land, there is a library that lets you unit test classes that use new; it's called PowerMock. So it is feasible.
> #2 What stops you from creating a complex system without having dependencies, thus avoiding DI? Also, I think it would make an interesting case study if you're willing to write it.
I've moved to a functional language instead personally. That said, I do still like C# and Java, and think they are great languages. I'm also not against DI, or other enterprise patterns, some have legitimate uses. But I have definitely seen what the OP is complaining about, those languages are too deep in their own rabbit hole sometimes. To the point that half the developers don't know why they use DI, that's just part of the template they used to bootstrap their app. There's no thought process about, wait, what is the issue in my code structure and design that I'm facing, and what could I do to solve it. Instead it's just, I'm using all the "best practices", they are the best, and I'm using them all, all the time, so my code is the best it can be. Without any thought about what advantages they're even getting out of it.
That's why I asked about the "new". Most people don't really know the pros/cons. They only repeat what they read blindly: "New" bad bad, no use "new", never, very bad, antipattern, here link proof, must use DI instead.
And sometimes when they know of a cons, they don't think about tradeoffs against alternatives. They just say, well it has this one con, so it's very bad! An antipattern!
I think that creates a setting where you get monstrous framework and code bases, where people just throw every known pattern at it, no matter if it was called for or not.
I spent maybe 15 years using C# and DI (for most of that time), and understand it pretty well. I still dislike it.
> just they'll be hidden and tightly coupled
They're more hidden with DI than without, in my experience.
At any rate, having left the .NET stack around 5 years ago, I certainly don't miss DI. My current code has more and better tests than my C# code ever did, so DI didn't really help me there (nor did it hinder me-- it was neutral). But my current code is much more explicit, direct, and a fair bit more compact. I'm definitely happier with it.
Compare 2 classes. Class A expects all its dependencies to be passed as constructor parameters. Class B has a default no-arg constructor, but buried inside 5 of its methods are calls like Dep1 dep1 = new Dep1(); or Dep2 dep2 = new Dep2();. Now, which class has obvious and loosely coupled dependencies? Which class has hidden and tightly coupled dependencies?
To me, answer is clear - class A has obvious and loosely coupled ones. Lo and behold, class A comes from program which uses DI. Class B doesn't.
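The two classes being contrasted can be sketched like this (Dep1, Dep2, and the Sum method are hypothetical stand-ins):

```csharp
// Hypothetical Dep1/Dep2 types standing in for real dependencies.
public class Dep1 { public int Value => 1; }
public class Dep2 { public int Value => 2; }

// Class A: dependencies are visible in the constructor signature,
// and a test can substitute them freely.
public class ClassA
{
    private readonly Dep1 _dep1;
    private readonly Dep2 _dep2;

    public ClassA(Dep1 dep1, Dep2 dep2)
    {
        _dep1 = dep1;
        _dep2 = dep2;
    }

    public int Sum() => _dep1.Value + _dep2.Value;
}

// Class B: same behaviour, but the dependencies are hard-coded inside
// the method body and invisible from outside.
public class ClassB
{
    public int Sum()
    {
        var dep1 = new Dep1(); // hidden, tightly coupled
        var dep2 = new Dep2();
        return dep1.Value + dep2.Value;
    }
}
```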
> the reason for DI is to support the Dependency Inversion Principle : "High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
You can still depend on abstractions without DI. Just use an interface or abstract class in your code.
DI only concerns how you acquire an instance of the concrete class for the abstraction you depend on.
So it can be given to you by your caller (DI). You can go fetch it somewhere (ServiceLocator). You can create it through a utility (Factory). Or you can create it yourself old fashion way with new.
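The four acquisition styles side by side, as a sketch (IClock, SystemClock, and friends are hypothetical names for illustration):

```csharp
// Four ways a class can acquire an IClock implementation (hypothetical types).
public interface IClock { System.DateTime Now { get; } }
public class SystemClock : IClock { public System.DateTime Now => System.DateTime.UtcNow; }

// 1. Dependency injection: the caller hands it to you.
public class Injected
{
    public Injected(IClock clock) => Clock = clock;
    public IClock Clock { get; }
}

// 2. Service locator: you go fetch it from a registry.
public static class Locator
{
    private static readonly System.Collections.Generic.Dictionary<System.Type, object> _services = new();
    public static void Register<T>(T instance) => _services[typeof(T)] = instance;
    public static T Resolve<T>() => (T)_services[typeof(T)];
}

// 3. Factory: a utility creates it for you.
public static class ClockFactory
{
    public static IClock Create() => new SystemClock();
}

// 4. Plain new: you construct the concrete type yourself.
public class SelfMade
{
    public IClock Clock { get; } = new SystemClock();
}
```

All four depend on the IClock abstraction; they differ only in who knows about SystemClock.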
> But you don't have to use a library; many times I'm writing a simple-ish console app and just use good old "Poor Man's DI" where I manually construct my object graphs at startup
I will support that. I'd like people to be precise in their criticism, do you find the DI pattern troublesome, or some particular DI framework?
Neither, I'm a fully paid up and card-carrying DI-club member :)
I was kind of making the same point as you did, that you don't need to use any kind of DI library to achieve the DIP, but that in something like ASP.Net it is there and simple to use.
One of the issues with solving a problem via constructor injection is that any time you need to access a new object in the class, you need to inject it. Well, now you have to go change 100 unit tests that are failing because of this.
I am not sure what is a better way to solve this problem, but the original OP is right - it's far too much ceremony.
Yes, this can be a problem. That is why I have come to the conclusion that unit tests in larger projects (not libraries) are mostly a waste of time: as soon as you change the dependencies of a class, you should rewrite your unit test for that class anyway, because the class is not really the same class anymore (it works differently); you are just keeping the class name for convenience. And there is always a danger in changing a unit test: are you sure you are still testing the same thing?
I think it is ok to depend on a DI-container in your unit tests for dependencies you are not directly testing, instead of stubs or manually instantiating them, to avoid updating unrelated unit tests when some class changes it's dependency somewhere in your project.
Instead I prefer system/integration tests where you test, let's say, a complete request from start to finish. Now your code base can change as much as you like, dependencies can be added or removed, classes or packages can be completely deleted, and the tests stay the same. And now you have tests that much more closely match reality, instead of tons of mocks and stubs that can easily fool you into believing they represent a good estimate of reality.
Done this way, constructor-based DI does not become a nuisance; instead it helps you organize your project, as it should. Using a DI container makes it natural to break down your classes and automatically share the same dependencies between them.
A counterargument to my argument is that if you don't use constructor-based DI you can rewrite your class in any way you like and the tests stay the same. That is all true; however, what DI gives you is the possibility to change the behavior of a class from the outside by injecting different dependencies, which makes your code better structured and easier to reuse.
If you want to accomplish that in the non-DI case, you have to go through the trouble of configurable proxy classes, which just introduces one more layer of bureaucracy, the thing you wanted to avoid by not doing DI.
My own criticism of DI containers is that they introduce magic to the project, and I personally truly dislike code magic. However, a DI container is nothing more than an on-the-fly factory creator. In theory you could remove the DI container from a project by manually writing factories for each execution path. For me that is an acceptable level of magic, as long as the DI container doesn't do lots of other things that a factory couldn't solve.
Sometimes I find this useful. This should also help the test errors you mention.
public class MyTest
{
    private readonly IServiceProvider _serviceProvider;

    public MyTest(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public void MethodA()
    {
        var serviceA = _serviceProvider.GetService<IServiceA>();
    }

    public void MethodB()
    {
        var serviceB = _serviceProvider.GetService<IServiceB>();
    }
}
Unit tests don't do anything to validate the existence of DI. Most people don't even use DI in their tests. DI is just spaghetti code.
The main thing holding C# back is the culture around the DI and weird hostbuilder code associated with ASPNET. If you just skip that part, it's great. Unfortunately a lot of web examples, solutions, and code libraries themselves are ASPNET-focused, and more and more of the code becomes some truly bizarre, unreadable framework injection in an inside-out fluent builder.
Do you mean DI (dependencies being added through the constructor abstracted by an interface) or DI Containers (magic code that magically creates objects for you)?
> Then it's just a massive waste of time and gets in the way.
I find it very useful for several purposes that have nothing to do with mocking or testing.
> and have become blind to just how much extra code it generates
It doesn’t generate any extra code at all.
For the record, I think that built-in .NET DI is pretty bad. It would be better if they managed to avoid including that abstraction in the framework, even if I personally choose to use DI in most of my projects.
//MyController.cs (or Page, etc) and others get to use it with no ceremony:
public MyController(MyService foo){ this.foo = foo; }
vs no DI, where the config "goo" has to be repeated in every place I intend to use it (or in this case it could also be buried in MyService.cs):
public MyController(){this.foo = new MyService(new MyDbContext(ConfigurationManager.GetConnectionString("foo")));}
public void Dispose(){this.foo.Dispose();}
If I knew MyService would never be swapped out and only ever used in one place, I may opt for the no-DI route. I'll often start there then move it to DI when I want testability, reuse of config, plugability, etc..
The way to manage this without DI would be a static method someplace (Startup.cs or elsewhere) which creates the new MyService, stores it, and returns a reference. Then any controllers or pages that need the reference call the static method.
The method could return a MyService, or an IMyService if you want to be more flexible. It can always create a fresh object, manage and return a singleton object, or manage a pool of objects. For the new and pool cases, you can have another static method for 'returning' the object when the controller/page is done with it, so you can manage disposal or pool availability.
The biggest advantage I see to this approach is in debugging: there is a clear stack trace back to the source of the object, and also to its implementation if you're not using an interface. With DI you often can't tell where an object came from, and sometimes it takes some digging to even find out its exact type (unless you only have one implementation of each interface, which is often the case.)
It seems like GP meant a static method specialized to whatever resource the program needs rather than a pluggable factory. All of the issues in that post appear to arise from the fact that they expect the locator to need configuration before it will work, while my reading of GP is that their configuration is which type the method returns, no registering a class or allocator necessary. Yes, this restricts you to a series of services known at build time, but... it seems to me that you knew what your services were anyway, whether it was directly in the locator, in an xml file, or in a registration call.
Absolutely nothing in that blog post applied to what I intended, because I did not say that a generic method that looks up implementations in a registry should be used. My static method would be explicitly coded to return a particular implementation, and there would be a separate static method for each kind of service you need. (Eg: one per interface if you're using interfaces for the return types.)
No registration or configuration is needed, because the types are hard coded. If you need mocking support, a single IsTest config setting could be used in a conditional statement to determine which implementation to return.
That's very little additional code if you use a ternary expression, and it lets you opt in to the mocking mechanism if/when/where you need it, instead of permeating your entire architecture with it.
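A minimal sketch of the approach described above: one static method per service, concrete types hard-coded, with an opt-in test switch. IMyService, MyService, FakeMyService, and the IsTest flag are all hypothetical names:

```csharp
// Hand-rolled alternative to a DI container: one static method per service,
// with the concrete type hard-coded. All names here are hypothetical.
public interface IMyService { }
public class MyService : IMyService { }
public class FakeMyService : IMyService { }

public static class Services
{
    // Flipped by test setup; no registry or configuration involved.
    public static bool IsTest = false;

    private static IMyService _myService;

    // Lazily creates and caches a singleton; a debugger's stack trace
    // leads straight back to this method and the concrete type.
    public static IMyService GetMyService()
        => _myService ??= IsTest ? (IMyService)new FakeMyService() : new MyService();
}
```

Every call site does `Services.GetMyService()`, and the implementation choice lives in exactly one ternary.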
ServiceLocator has you create a runtime service registry where you can dynamically register and fetch instances to and from. And often that means it bypasses static type checking, as you might be adding and fetching services by name using a string.
This is different in that it's static. You don't add a service to it, it creates it the first time you ask for it (if singleton), or everytime you ask for it (otherwise), or pools it, etc.
I reckon there are some similarities, but most enterprise "service locator" implementations involve a lot more than this, and often are meant to give you runtime dynamism where you can load and unload services as the app is running, etc., making them way more complex for apps that don't care about this.
Actually, as written, myService never gets set, but I think I know what you mean. This is not thread-safe and assumes you want singleton. To use it you would in each controller do:
var service = Startup.getMyService();
And potentially know if the controller should dispose service.
This doesn't seem much less "in the way" or "massive" than:
Not having used .NET recently, is what they've implemented something like Spring? The description of DI everywhere and inscrutable failures at run-time remind me of that.
Not sure how Spring does it, but for example in ASP.Net Core, it's really simple. In the app startup class there is a method that's automatically called by the framework whereby it supplies a services container. In this method you "wire up" your interface->concrete mappings
public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<IMyDependency, MyDependency>();
}
Later on, when e.g. an ApiController is instantiated, once you've declared a constructor dependency on IMyDependency, it gets resolved for you automatically: whatever concrete class it's mapped to gets created and passed to the constructor.
Outside of ASP.Net (a console app, for example) you can either do this manually, or you can get a ServiceCollection object from the framework, or you can use a 3rd-party DI library, etc.
Generally speaking if you don't configure this correctly you get a pretty simple message telling you that the container could not service the request for an IMyDependency or whatever and then it's pretty simple to figure out what you missed in the wire-up.
If you understand Spring then Asp.net Core will feel right at home. I think .Net tends to be more explicit in the configuration, with DI being no exception. For instance, in Spring you can just annotate a service with @Service and the DI framework will pick that up. In Asp.net Core, you actually have to say IThing is backed by Thing as a singleton when running the application and TestThing when running tests.
Edit: Also as others have noted, constructor injection is the way to do DI.
Forget mocking, the major benefit of DI for me has always been proxies.
For our multi-tiered system, this means managing caching and logging of certain calls and events at the level of data access. And at the service level, it means dealing with things like authorisation.
Smart use of DI sets the system up in a way where it's easy to handle these types of cross-cutting concerns where appropriate, and with minimal boilerplate. It also simplifies refactoring.
What exactly is the issue with DI? How else are you going to pass dependencies? How are the errors "inscrutable" when they tell you the exact type that it can't load?
Do you just want to `new SomeClass()` instead? If so, what's stopping you from just doing that?
This was going to be my question. For your own classes and services, nothing is stopping you. You can new up classes without DI all day long. ASP.NET does layer on its things using DI, but it's boilerplate built into templates. For example, out of the box you may use the built-in UserManager to authenticate users and store their info in a db. If you want to customize that and NOT use the built-in one, you may have 10-20 lines to write in Startup.cs to inject a different UserManager or configure the built-in one differently. I've never felt it was massive or wasted my time vs the Node/Express work I've done.
OMG! I was wondering you might be proposing this. You should not be using "new" keyword anywhere in your code. If you have hard coded implementations like that in your code then how will you mock them in unit tests? I hope you'll not reply-with saying "don't write unit tests".
"OMG! I was wondering you might be proposing this. You should not be using "new" keyword anywhere in your code"
That sounds like dogmatism, and I'm going to take special note of the emotionally charged language you're using here; it's a symptom of some of the underlying issues I see in our industry (more to be said on this).
That said, I do agree with the sentiment in general, but you can (and sometimes should) use the `new` keyword in your code, especially when writing tests. In some cases, it makes testing so much easier and helps with the maintainability.
"I hope you'll not reply-with saying "don't write unit tests"
After 2 decades in this profession, the only thing I'm sure of is that there is no "silver bullet". Of course, as a general rule, it's a good idea to have test coverage of your codebase, but sometimes it's not valuable and not worth adding. So it's a "it depends" from me!
I am a big fan of DI, but you can often get a lot of useful unit testing done when you try to keep I/O mostly separate from the rest of the code. If you have ever programmed in Haskell you will know what I mean.
> If you don't want to write any unit tests, for which there are a myriad of reasons not to, whether you're prototyping, throwing together a small once and done for a client, etc.
Then it's just a massive waste of time and gets in the way.
A massive waste of time? Adding a constructor and one line to startup.cs is a massive waste of time? Just how long does it take you?
> It's also something that if you're not working regularly with it can be very mysterious and when it fails, fails at run-time, quite inscrutably sometimes.
If DI fails it’s usually just because you’ve forgotten to add that one line to the startup.cs, and the error you get will tell you that. Can you give me a concrete example of how DI is inscrutable?
To be fair, there are some very questionable design decisions in ASP.NET Core. The whole way of initialising a Host and configuring things around it is a myriad of nested factories and builders which honestly looks like a huge mess. It's hugely overcomplicated for 95% of web applications, in my opinion.
Do use DI, it's great. It helps with structuring into entity/model/service/controller and treats DI providers as a global resource. But it shouldn't require configuration, and it should instantiate lazily as a class member like a sane framework, instead of having to pass everything to constructors.
Constructor DI is extremely important though, because time and time again lazily injected dependencies via properties result in circular dependencies and sometimes missing dependencies that fail at runtime. I'm not talking about simple cycles either; every time I have seen lazy dependency injection done, it has ended up in an A -> B -> C -> D -> A scenario.
Passing required dependencies by constructors makes this extremely apparent, not just to the DI framework but to the developers themselves. With constructor injection I can immediately grasp exactly what dependencies an object depends on without even going into the full code flow to look at all the code to find all the IOC container calls.
Edit: Constructor injection is also critical for testing dependency injection. It allows you to have automated tests that construct every relevant object using the same DI setup as your live application, verifying that the DI framework can resolve dependencies for everything.
If your DI is done lazily this cannot happen unless you explicitly exercise every code path, opening yourself up to runtime errors that may be missed. Unit tests won't catch this, because the dependencies of your production site will differ from the dependencies of your test scenarios.
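With the Microsoft container, that whole-graph check can be a single test (a sketch; assumes Microsoft.Extensions.DependencyInjection 3.0+, xUnit, and a hypothetical `AddMyApplicationServices` extension that mirrors the production registrations):

```csharp
[Fact]
public void Container_can_resolve_everything()
{
    var services = new ServiceCollection();
    services.AddMyApplicationServices(); // same registrations as the live app

    // ValidateOnBuild walks every registration and fails fast if any
    // constructor dependency cannot be resolved — no need to exercise
    // each code path at runtime.
    services.BuildServiceProvider(new ServiceProviderOptions
    {
        ValidateOnBuild = true,
        ValidateScopes = true
    });
}
```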
I'd go a step further and say that if calls to the IoC container are scattered all around your code, you're not actually doing DI. You're following the service locator antipattern.
Dependency injection is just passing your dependencies in, as interfaces, especially through the constructor. Nothing more. Dependency injection frameworks are DI combined with a service locator.
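The distinction in code (a minimal sketch with hypothetical names; the static `Locator` here stands in for any IoC container used locator-style):

```csharp
using System;
using System.Collections.Generic;

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

// Hypothetical static locator, for illustration only.
public static class Locator
{
    private static readonly Dictionary<Type, object> _services = new();
    public static void Register<T>(T service) => _services[typeof(T)] = service!;
    public static T Resolve<T>() => (T)_services[typeof(T)];
}

// Service-locator style: the dependency is hidden inside the method body.
public class LocatorReportService
{
    public DateTime Run() => Locator.Resolve<IClock>().Now; // invisible from outside
}

// DI style: the dependency is part of the public contract.
public class InjectedReportService
{
    private readonly IClock _clock;
    public InjectedReportService(IClock clock) => _clock = clock; // visible to callers and tests
    public DateTime Run() => _clock.Now;
}
```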
Dependency Injection Frameworks have the same issues:
- understanding what is going on is harder than injecting objects by hand.
- you have limited control over object lifetimes (when objects are created and disposed). Singleton, scoped and transient might be enough most of the time for a web service, but as soon as you fire off tasks that can outlive the request, or use it in different circumstances like UI, it becomes a problem.
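For reference, the three lifetimes in Microsoft.Extensions.DependencyInjection, and the sharp edge with work that outlives a request, look roughly like this (a sketch; the service names are hypothetical):

```csharp
services.AddSingleton<ICache, MemoryCache>();         // one instance for the app's lifetime
services.AddScoped<IUnitOfWork, UnitOfWork>();        // one instance per request/scope
services.AddTransient<IEmailBuilder, EmailBuilder>(); // new instance on every resolution

// The sharp edge mentioned above: a background task that outlives the
// request must create its own scope, or it will capture scoped services
// that get disposed when the request ends.
using (var scope = app.Services.CreateScope())
{
    var work = scope.ServiceProvider.GetRequiredService<IUnitOfWork>();
    // ... do the long-running work against this scope's instances ...
}
```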
The core of a DI container is generally extremely simple: objects are resolved by looking up the registered type for each requested service, then constructing it after resolving its constructor arguments by the same process. It has a recursive structure that I would even call elegant.
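As a sketch of that recursive structure, a toy container fits in a few dozen lines of plain reflection (illustrative only, not production code — no lifetimes, no cycle detection):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class TinyContainer
{
    private readonly Dictionary<Type, Type> _registrations = new();

    public void Register<TService, TImpl>() where TImpl : TService
        => _registrations[typeof(TService)] = typeof(TImpl);

    public TService Resolve<TService>() => (TService)Resolve(typeof(TService));

    private object Resolve(Type service)
    {
        // Use the registered implementation if there is one; otherwise
        // assume the requested type is itself constructible.
        var impl = _registrations.TryGetValue(service, out var t) ? t : service;

        // Pick the greediest constructor and resolve each parameter by
        // the same recursive process — that is the whole trick.
        var ctor = impl.GetConstructors()
                       .OrderByDescending(c => c.GetParameters().Length)
                       .First();
        var args = ctor.GetParameters()
                       .Select(p => Resolve(p.ParameterType))
                       .ToArray();
        return ctor.Invoke(args);
    }
}

// Demo types: C depends on B, which depends on the IA abstraction.
public interface IA { }
public class A : IA { }
public class B { public B(IA a) { } }
public class C { public C(B b) { } }
```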
We are currently on a long-term process of migrating a large C++ codebase from a service locator model to just passing dependencies through constructors. New code is significantly simpler to use, understand, and test.
C# 9.0 records with init-only setters are a large step toward solving this issue. There are still security concerns with injection into serialized objects...it's a difficult problem to solve.
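For reference, the C# 9 shape being described (a sketch; `ReportGenerator` and the clock types are hypothetical):

```csharp
using System;

// Usage: init-only members can be set once, at object initialisation,
// after which the object is effectively immutable. Note that as of C# 9
// "required-ness" is still not enforced at compile time — that is the
// remaining gap mentioned above.
var generator = new ReportGenerator { Clock = new SystemClock() };
Console.WriteLine(generator.Clock.Now);
// generator.Clock = new SystemClock();  // error CS8852: init-only property

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

public record ReportGenerator
{
    public IClock Clock { get; init; }
}
```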
> In .NET, the usual convention is that you make required things constructor arguments, so that there's no room for confusion.
Except in Windows Forms, where the norm is a zero-argument constructor and setting everything via properties, because that approach works with component-based design-time tools. Come to think of it, I think WPF and its descendants do the same.
It's been a while since I touched either WinForms or WPF, but, if I recall correctly how things worked, I don't think this is actually an exception to the convention. Don't they also make everything optional? In which case, the correct way to follow the pattern would be to also leave the constructor parameterless and configure everything via properties.
You can tell the designer how a particular type has to be created if it doesn't have a parameterless ctor, but yeah, it's generally easier to do it that way.
NodeJS has had various libraries to mock dependencies for a long time. So for me, I've always seen it as an extra layer on top of something that was already possible.
Going off of the .NET 5 release thread, I assume it’s because of misunderstanding or lack of familiarity with the platform. There was a very confused top-level comment that seemed to be complaining about DI and dependencies drilling down several layers. That’s just not using DI at all. In general DI is a great pattern that encourages loose coupling and testability, so I also don’t get the complaint.
If you're talking about my comment, you couldn't be further from the truth.
I've actually worked with C# for over 15 years now and written apps with (in order of age) VBScript, webforms, web services (the MS SOAP stuff), MVC 1, MVC 2, MVC 3, MVC 4, Web API, Web API 2, the OData one I forget the name of and asp.net core. Oh and Silverlight. I've probably missed something. Oh yeah, WCF. And even WWF! Basically EVERYTHING web-related to C#. Plus enough work in PHP, Python, Ruby, Express, etc. to be able to compare different approaches. And even sometimes VB.net; I've maintained and then migrated entire code-bases from that to C#.
I'm not entirely sure what more familiarity I need?
I'd also note that there are a lot of other people agreeing with me in that thread, almost all of whom show a decent technical understanding. And it's got a lot of upvotes.
But, even if I "misunderstood" or have a "lack of familiarity", that speaks of how poor the design/documentation actually is. If you can so easily shoot yourself in the foot with it, it's not fit for purpose.
None of that is familiarity with DI, which wasn't as prominent or included in the framework itself until .NET Core. Perhaps it's just not a fit for you.
Again, none of that means familiarity with DI. Why deflect about the language? This has nothing to do with C#.
You never answered what your actual issue with DI is (the concept, implementation, containers, etc), or what your alternative would be, even though numerous people have asked. That makes your complaint seem largely unfounded or a combination of confusion and lack of experience with DI.
Not the OP, but I never used .NET core, but used DI pretty heavily for most of my 15 years writing C#. It was there before .NET Core. Maybe you mean it's improved with .NET Core?
"wasn't as prominent or included in the framework itself"
There was no built-in DI container before; you had to bring your own, like Windsor/Autofac/Ninject/etc, and DI wasn't as popular because of that — many people skipped it for smaller or less enterprise-y projects. I'd say exposing more people to it, even if the built-in DI is rather basic, helped improve many new projects that otherwise might not have chosen to do so.
I don’t know if it was your comment, but you can tell me you’re the queen of Scotland for all I care. Just look at your complaint above in this thread about DI and how no one can make sense of it. If you have so much experience you hide it well.
Funny, I've just checked those comments and again, a bunch of people agreeing with me. Are you wearing your "I can only see comments that back my biases" glasses?
To me, it's clear that some people love DI. SOME. Use it if you want. I don't want to. Stop dictating what's "good" code to experienced developers.
Yeah, I can see the thread just fine, I can also see the majority of posters and the more substantive posters disagreeing with you. Also the only person who comes close to dictating anything here is you.
And I’m still not seeing the benefit of all that experience by the way.