People who know enough to consider architectures like this aren't the ones most likely to accidentally expose databases to the internet. It happens, but most often these mistakes are made by people who just don't have the experience to be wary.
I think software like Docker has a responsibility to encourage secure-by-default configurations, but unfortunately "easy" is often the default that wins mindshare.
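To make this concrete: the classic way a database gets accidentally exposed is the short form of the ports mapping, which binds to all interfaces. A sketch in Compose syntax (the service name and image tag are just for illustration):

```yaml
# Binding only to the host's loopback interface keeps the database
# reachable from the host itself but not from the wider network.
services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # explicit, safe: loopback only
      # - "5432:5432"           # the "easy" form: binds 0.0.0.0, open to the network
```

Worth noting that on Linux, Docker's published ports are handled via its own iptables rules, so a published port can be reachable even when a host firewall front-end appears to block it.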
I agree with you, but since Docker is more or less a given, how can one learn enough about networking to avoid these mistakes?
I always see best practices like this, but they don't really help in grokking what's happening and why. I'd like to know more about the networking side, but whenever I look something up it's very specific, so you don't really learn why something is bad.
How can a regular user understand how the network stack works? At least enough to get an instinct for why something would be bad.
It's difficult for me to answer how other people should learn these things, since I personally just... tried to figure things out? It's been so long since I found basic networking mystifying that I'm not sure how to explain it to someone who doesn't have the same intuition. If you have something that's very specific, maybe make a guess on how it could be generalized and then test that guess. Try to build a mental model, and test that model.
I don't like using systems that are complete black boxes, so whenever I use something, I try to gain a reasonable understanding of how it works under the hood. If a system claims to make X easy, I want to know at least what is involved in accomplishing that, even if the implementation details aren't directly relevant. I don't often need to dig into the nitty-gritty of how the Linux TCP stack works, but even a broad idea of how the TCP protocol works is pretty useful, and especially how it relates to other networking protocols.
I guess for practical networking, it helps to first focus on IP addressing and routing; i.e. how does a packet sent from your computer actually get through all the switches and routers to the destination computer? The short answer is that every node (including your computer) makes a routing decision about where to send the packet, and then it's sent forward. This happens at each "hop" until the packet arrives at the destination (or gets dropped by a firewall).
And from this simple logic, plus some fancy tools that make dynamic routing decisions in response to changes in network topology (a router went down? update the local route information and send the packet to the other router that's still up), you can build the internet in a fault-tolerant manner.
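The per-hop routing decision above is essentially a longest-prefix-match lookup against a routing table. A toy sketch in Python (the prefixes and next-hop names are made up for illustration):

```python
# Toy routing table: the same kind of decision every hop makes.
# The most specific (longest) matching prefix wins.
import ipaddress

ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "internal-router",
    ipaddress.ip_network("10.1.2.0/24"): "office-switch",    # more specific route
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",    # everything else
}

def next_hop(dst: str) -> str:
    """Pick the next hop via the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.7"))   # office-switch (the /24 beats the /8)
print(next_hop("10.9.9.9"))   # internal-router
print(next_hop("8.8.8.8"))    # default-gateway
```

You can see your own machine's version of this table with `ip route` on Linux; fault tolerance comes from routing protocols updating these entries when links go down.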
I guess you are cutting straight to the chase and overlooking the fundamentals. I took a lot from AWS's Well-Architected Framework and applied it in all my projects.
Take a look at the security pillar with extra care. For the cloud, I would suggest taking a basic practitioner exam, or at least a preparation course on a platform like Whizlabs. There you'd get a basic understanding of how networking is laid out in the cloud.
For private, on-premises projects, it really comes down to what you have at hand. In that case the Google SRE book might be a good fit: take its good practices for maintaining a data center and apply the distilled knowledge to whatever makes sense for your infrastructure.
Read the book by topic, not sequentially, coming back to the fundamentals when you feel lost; otherwise you might end up buried in technicalities that make little sense for your work.
Also take a look at the shared responsibility model. It spells out which responsibilities belong to the client and which to the cloud provider. For a private, on-premises project, you have to implement the entire responsibility stack that the cloud would otherwise handle for you.
I am not sure why you were downvoted; I agree with that. I prefer technologies that are restrictive by default, with more flexible and potentially harmful configurations hidden behind explicit, well-structured options.
Either an exception should be raised or a safe default behaviour should be adopted when such a configuration is encountered. I prefer breaking as soon as possible, because the alternative is harder to debug.
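A minimal sketch of the "break as soon as possible" option: validate a risky setting at startup and raise rather than silently proceeding. The function name and policy here are illustrative, not from any real library:

```python
# Fail-fast validation: refuse a dangerous bind address unless the
# caller opts in explicitly. Raising at startup is easier to debug
# than discovering an exposed service later.
def resolve_bind_address(addr: str, allow_public: bool = False) -> str:
    if addr == "0.0.0.0" and not allow_public:
        raise ValueError(
            "refusing to bind to all interfaces; "
            "pass allow_public=True to opt in explicitly"
        )
    return addr

print(resolve_bind_address("127.0.0.1"))  # safe default passes through
```

The harmful option still exists, but it has to be spelled out in the caller's code, which is exactly the kind of explicit, well-structured escape hatch described above.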