What AWS gives you is the ability to spin up dozens, if not thousands, of hosts with a single click.
If you run your own hardware, getting equipment shipped to a datacenter and installed takes 2 to 4 weeks (and potentially much longer depending on how efficient your pipeline is).
What really needs thousands of hosts nowadays, even with millions of users? Computers are plenty fast now, and leveraging that speed is no harder if you choose the right stack.
And even if you are building with microservices, a standard server can run dozens of them on a single machine at once. They are mostly doing network calls with minimal compute. Better still, if they share a host, the calls go over loopback and the physical network never gets involved.
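To make that concrete, here's a rough Python sketch (the service names and ports are made up for illustration) of several "microservices" co-located on one host and calling each other over loopback, so nothing ever touches the NIC:

    import json
    import threading
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def make_handler(payload):
        # Each "service" just answers GET with a small JSON blob.
        class Handler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = json.dumps(payload).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            def log_message(self, *args):
                pass  # keep the demo quiet
        return Handler

    # Three hypothetical services, one port each; a real box can host dozens.
    services = {8001: {"service": "users"},
                8002: {"service": "orders"},
                8003: {"service": "billing"}}
    for port, payload in services.items():
        srv = HTTPServer(("127.0.0.1", port), make_handler(payload))
        threading.Thread(target=srv.serve_forever, daemon=True).start()

    # One service calling its neighbours: pure loopback, no network hop.
    for port in services:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}") as resp:
            print(resp.read().decode())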
If you want to, there are simple tools to hook a handful of machines together as a cluster and/or to spawn extra, slightly costlier VMs within minutes in case of failure or a spike in usage. And that's only if a short outage is really a world-ending event, which it isn't for almost every software system or business. These capabilities haven't been exclusive to the major cloud providers for years.
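For the "spawn a spare VM" part, here's a hedged sketch against the Hetzner Cloud API (the server name, type, and image below are assumptions; adjust them for your account). Nothing about it is AWS-specific:

    import json
    import os
    import time
    import urllib.request

    HCLOUD_TOKEN = os.environ["HCLOUD_TOKEN"]  # API token from the Hetzner console

    def spawn_replacement(name="standby-1", server_type="cx22", image="ubuntu-22.04"):
        # POST /v1/servers creates a new cloud VM, typically up within a minute or two.
        req = urllib.request.Request(
            "https://api.hetzner.cloud/v1/servers",
            data=json.dumps({"name": name, "server_type": server_type,
                             "image": image}).encode(),
            headers={"Authorization": f"Bearer {HCLOUD_TOKEN}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["server"]["public_net"]["ipv4"]["ip"]

    def watchdog(primary_url, interval=30):
        # Naive health check: if the primary stops answering, bring up a spare.
        while True:
            try:
                urllib.request.urlopen(primary_url, timeout=5)
            except OSError:
                print("primary down, spare coming up at", spawn_replacement())
                break
            time.sleep(interval)

The same handful of lines works against most providers' APIs; the point is that "react to a failure by adding capacity" is a small script, not a platform.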
Of course, we are generalizing a lot at this point; I'd be happy to discuss specific cases.
You don't even have to own your own hardware: you can provision a leased dedicated server from many different providers in an hour or three, and still pay far less than for comparable hardware from AWS.
I suspect that if you broke projects on AWS down by the numbers, the vast majority don't need it.
There are other benefits to using AWS (and drawbacks), but "easy scaling" isn't just premature optimisation: if you build something to do something it's never going to do, that's not optimisation, it's simply waste.
I'm not familiar with Hetzner personally, but maybe they mean the uplink? I've found that some smaller providers advertise 10Gbit, but you rarely get close to that speed in reality.