The Internet itself is inherently unreliable. Your users traverse multiple hops (many networks) before reaching your servers at any provider, as does any external monitoring you put in place. Brief issues along those paths can and will occur. While the Internet as a whole is designed to route around problems, that rerouting is rarely instantaneous, even if it can be quite fast.
This is not to say that internal network issues won't happen at any provider on occasion as well, because they certainly will. No network is perfect, least of all the massive system of interconnected networks known as the Internet, and you need to accept and plan for this. To believe or expect otherwise is simply unrealistic.
To say that a database has to fit entirely in memory to achieve good performance is a ridiculous proposition, and simply shows you have zero actual knowledge of modern database server internals or administration. Countless sites happily serve oodles of pageviews per day with actual memory usage far below the disk space used by their databases. Hint: they're not swapping, either.
In general, if you really believe what you're saying, you either (1) have a very poorly designed application, (2) have a very poorly designed database environment, or (3) are describing a specialized application that doesn't reflect the majority of real-world environments. It could, of course, be a combination of these. I didn't even start on utilizing caching in applications, because it's clear there are other hurdles to overcome first.
I don't think he is saying that. He is saying that in a non-dedicated environment, you share the same spindles with other tenants who may have different I/O access patterns than your application. Careful choice of indexes, good data locality for fast reads, making sure writes are sequential - all that goes out the window if some other application is causing the disk to seek all over the place.
While Linode's billing is monthly by default, it's also prorated. This means that when you remove a Linode from your account, the account is issued a prorated service credit for the unused time in the billing period. Service credit is always used for further services before charging the card on file.
In other words, you can spin up multiple Linodes, remove them the next day, and all you're really paying for is the day you had them deployed. There's a document on Linode billing located here: http://library.linode.com/linode-platform/billing/
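The proration math is simple enough to sketch. Here's a minimal illustration of the idea, assuming a hypothetical $20/month plan and a 30-day billing period (the numbers are made up; see the billing document above for Linode's actual rates and rules):

```python
# Hypothetical figures for illustration only; real rates and
# billing-period lengths come from Linode's billing docs.
MONTHLY_RATE = 20.00   # assumed monthly price of the plan
DAYS_IN_PERIOD = 30    # assumed length of the billing period

def prorated_charge(days_deployed):
    """What you effectively pay for a Linode removed partway
    through the billing period."""
    return round(MONTHLY_RATE * days_deployed / DAYS_IN_PERIOD, 2)

# Spin one up and remove it the next day: you pay for one day,
# and the rest comes back as a service credit.
charge = prorated_charge(1)
credit = round(MONTHLY_RATE - charge, 2)  # credit for the unused 29 days
```

Under these assumed numbers, a one-day deployment costs about $0.67, with the remainder returned as account credit that gets applied to future service before the card is charged.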
Linode doesn't "burst" CPU usage. Processor time is shared fairly among Linodes on a host, and you can use any time that isn't used by others. It's worth noting that each Linode has access to four cores, so you can go up to 400% CPU utilization. For a good idea of how Linode's CPU performance routinely exceeds that of competitors, try this review: http://journal.uggedal.com/vps-performance-comparison
Semantics. The end result is the same: there is almost always a large amount of spare CPU cycles on each physical box, which can be utilized by instances to "burst" above their allocated capacity.
It's more than semantics. The term "burst" carries the implication that you only get something for a short time. That's really not a good reflection of reality in this situation.
Words have meanings and carry implications. By your logic, we might as well use the term "banana" instead. The point is that the concept doesn't reflect reality, and thus your point has no merit.