I'm not the parent poster, but I also like to avoid containers when I can. For instance, if there is a bug in some library or common dependency (think libssl or bash), it's easy to update it in one place rather than make sure a whole bunch of containers get updated. Also, when writing software, I find that targeting a container keeps you from thinking about portability (because it sidesteps the "it works on my machine" problem entirely) and results in a more fragile end product.
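To make the "update in one place" point concrete, here's a rough sketch (Debian/Ubuntu package names and the "myapp" image are assumptions, not from any real deployment): on the host, one package upgrade patches the shared library for every binary that links it; with containers, each image carrying its own copy has to be rebuilt or re-pulled separately.

```shell
# Host-wide fix: every dynamically linked binary picks up the patched
# library on its next start ("libssl3" is the Debian/Ubuntu package name).
sudo apt-get update && sudo apt-get install --only-upgrade libssl3

# Containerized equivalent: each image embedding its own copy of the
# library must be rebuilt against a patched base and redeployed, one by one.
docker build -t myapp:1.0.1 .        # "myapp" is a hypothetical image name
docker stop myapp && docker rm myapp
docker run -d --name myapp myapp:1.0.1
```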
If you aren't getting the binary from your distro's package manager, the "update in one place for bugfixes" argument often no longer applies. At least with a container management system the various non-distro-managed things have something akin to a standard way to version bump them, versus "go download this from that FTP, go pull this from that repo, etc."
As someone who does use containers: It depends™ on how you do things, but a lot of containers are used as a way to consume mystery meat easily. Who made that image? What's in it? Do you trust the binaries in it? How often does it get updates? Are you keeping up with the updates that are available? All of these are solvable, of course, but plenty of deployments are "just docker run randomsource/whatever:latest and never think about it again".
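One partial mitigation for the mystery-meat problem (a sketch, not a full supply-chain answer; the image name is just echoing the example above): pin images by immutable digest instead of a mutable tag, so you know exactly which bytes you're running and a version bump becomes a deliberate, reviewable action rather than whatever :latest happens to mean today.

```shell
# See which digest ":latest" currently resolves to...
docker pull randomsource/whatever:latest
docker inspect --format '{{index .RepoDigests 0}}' randomsource/whatever:latest

# ...then run by digest, so the image can't silently change underneath you.
# Updating now means consciously editing this digest.
docker run randomsource/whatever@sha256:<digest-from-above>
```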