I can kind of understand why people moved away from this, but it's how we did it for years, decades even, and it just worked. Yes, it requires more work from you, but that's just part of the job.
For performance reasons alone, you definitely want to host as much as possible on the same domain.
In my experience from inside companies, we went from self-hosting, managed largely over SSH, to complex deployment automation and CI/CD that made it hard to pull any new resource into the build process. I get the temptation: resources linked from external domains and CDNs gave frontend teams quick access to the libraries, fonts, and tools they needed.
Thankfully things have changed for the better, and it's now much easier to include these things directly in your project.
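These days that can be as simple as copying the file out of node_modules at build time and serving it yourself. A minimal sketch, with illustrative paths:

```html
<!-- Self-hosted: jquery.min.js was copied from node_modules/jquery/dist/
     into our own static assets at build time (paths are illustrative). -->
<script src="/assets/vendor/jquery-3.7.1.min.js"></script>
```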
There was a brief period when the frontend dev world believed the most performant way for everyone to load, say, jQuery was for every site to load it from the same CDN URL. From a trustworthy provider like Google, of course.
It turned out browser domain sandboxing wasn't as good as we thought: timing how fast a shared resource loads can reveal which other sites you've visited. That side channel led browsers to partition the HTTP cache per top-level site, killing cross-site cache sharing. And of course it turns out there's really no such thing as a 'trustworthy provider', so the web dev community memory-holed that little side adventure and pivoted to npm.
Which is going GREAT by the way.
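For anyone who missed that era, the pattern looked roughly like this. Subresource Integrity arrived later as a partial answer to the 'trustworthy provider' problem, though it never fixed the cache side channel; the hash below is a placeholder, not a real value:

```html
<!-- The shared-CDN pattern: every site references the same Google-hosted
     URL, hoping the user's cache already has it from some other site. -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.7.1/jquery.min.js"
        integrity="sha384-PLACEHOLDER"
        crossorigin="anonymous"></script>
```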
The advice is still out there, of course. W3Schools says:
> One big advantage of using the hosted jQuery from Google:
> Many users already have downloaded jQuery from Google when visiting another site. As a result, it will be loaded from cache when they visit your site
Be good at a time when Google manually ranked domains, then pivot to crap once Google stopped updating the ranking. Same playbook as the site formerly known as Wikia (now Fandom).
> For performance reasons alone, you definitely want to host as much as possible on the same domain.
It used to be the opposite. Browsers limit the number of concurrent requests per domain, and a way to circumvent that was to load your resources from a.example.com, b.example.com, c.example.com, and so on (sketched below). You paid for some extra DNS resolves, but could then load many more resources in parallel.
Not as relevant anymore: HTTP/2 multiplexes many requests over a single connection, and bundling files is more common now.
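For the record, the HTTP/1.1-era sharding trick looked something like this (hostnames made up); each extra subdomain bought you another pool of roughly six concurrent connections:

```html
<!-- Domain sharding: spread assets across subdomains to dodge the
     per-host connection limit. All hostnames here are illustrative. -->
<link rel="stylesheet" href="https://a.example.com/css/site.css">
<script src="https://b.example.com/js/app.js"></script>
<img src="https://c.example.com/img/hero.jpg" alt="">
```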
Years ago I had terrible DNS service from my ISP, bad enough that my DSL sometimes underperformed dialup. About 1 in 20 DNS lookups would hang for many seconds, so any site that pulled content from multiple domains would inevitably stall for a long time while loading. Minimizing DNS lookups was necessary to get decent performance back then.
Using external tools can make it quite a lot harder to do differential analysis to triage the source of a bug.
The psychology of debugging is more important than most people allow. Known unknowns introduce the possibility that some Other is responsible for our current predicament, rather than one of the three people who touched the code since the problem appeared (though I've also seen this when that number of people is exactly one).
The judge and jury in your head will refuse to look at painful truths as long as there is reasonable doubt, and so being able to scapegoat a third party is a depressingly common gambit. People will attempt to put off paying the piper even if doing so means pissing off the piper in the process. That bill can come due multiple times.
Maybe people have been serving those megabytes of JS frameworks from some single-threaded Python web server (in dev/debug mode, to boot) and wondered why they could only hit 30 req/s or so.