It would be a much better read without the attitudes declared through statements like "The horrifying existence of abominations like Hulu and the iTunes Music Store..."
I get the technical details, but the attitude is obnoxious at best and diluting the point at worst.
People seem to like Hulu, Netflix, Amazon, eBay, YouTube, etc.
They haven't stopped the internet from the 1990s from existing; it's still there and available. Most people just seem to like this other, corporate internet a lot more.
It seems to me that many of these arguments aren't about whether it's possible for people to have their non-corporate, idealized internet (it is). They're about not allowing others to also have the internet they want (Hulu, iTunes, Facebook, etc.), because ultimately people aren't visiting the idealized, non-corporate internet and aren't going to -- the majority simply prefer the other one.
Ultimately, it seems to be a disappointment rooted in the behaviors and wants of the majority, though it's rarely framed as such.
People don't always choose the things that make them happy, especially collectively. If we step back a bit from the internet, this point is obvious: the people of the US, for example, collectively chose to have the Civil War and then abolish slavery, when abolishing slavery without the war would have been a much better choice; and alcoholics who die of exposure in the street would probably have preferred, in retrospect, to never start drinking. Some of them will tell you that explicitly even before dying.
So it's not really disappointment. It's advocacy: there is a better way, a brighter future. And I have a selfish interest in telling people about it, because despite what you say, it's not a choice I can make entirely on my own.
> They haven't stopped the internet from the 1990s from existing; it's still there and available.
In the 1990s, I could email my mother from the mail server in my house; I could walk to the local bookstore and browse the books on the shelves; I could chat online with my friends without giving Microsoft, AOL, Yahoo, Google, and Facebook minute-by-minute updates on when I'm at my computer; I could read and post to Usenet, which had useful discussions on it, without broadcasting my reading habits to whoever was wiretapping (because my ISP had a local news server); and I could run years-old software on my computer without becoming a victim of the latest worm.
So the two internets you're talking about do not exist independently of each other. They interact in a lot of ways. Sometimes the interference is constructive (internet access is available in a lot more places now, and bandwidth is cheaper), and sometimes it's destructive (Usenet is dead.)
I don't think the problem is really even the wants of the majority. I think the problem is that the majority of people are getting tricked into choosing things they don't want, both because they aren't aware of the implications of their choices and because there are prisoner's-dilemma games going on. (Recruit all your friends to Farmville and your farm will be bigger!)
Part of the reason for stating my values so clearly is that the primary audience for this post is people who share those values. I don't want people who share those values to misattribute my conclusions to a conflict of values.
As for people who don't think Hulu and the iTunes Music Store are horrifying abominations, they probably already think ADSL is fine and dandy, so there's no point in tailoring the article to them.
This is a sensible post, but note that there are interesting and important applications for overlay networks that are not "peer to peer" (i.e., that do not use last-mile bandwidth for transit).
The alternative to using last-mile transit is for everyone to have a virtual server somewhere.
Even this seems to be a bit off since 90% of all people only ever consume content and can't be bothered with anything like creating their own content.
It would be cheaper to buy big servers with enormous amounts of RAM and use virtualization than to have thousands of Raspberry Pis. Unless you mean each person has a Raspberry Pi on their home network, in which case you're still using expensive last-mile bandwidth.
I have been following this math /very closely/ - and I am fairly certain that if someone wanted to build a board like this in quantity (with more RAM, of course), and if I could get my hands on a suitable power supply so that one PSU serves every twenty of these servers, it would likely come out pretty close to the cost of a big box.
The problem is that all the boards I can get for a reasonable price, in the quantities I can afford, have a bunch of crap on them I don't need (video and the like), which pushes the cost higher than I'd like. I've been talking with a friend of mine who does ARM stuff about maybe seeking venture funding, as I think we could get into reasonable quantities in the low-six-figure range.
But, like I said, as far as I can tell the cost difference right now is 'close'. Even if I could get a custom board with lots of RAM and no video crap, for most things it wouldn't be dramatically better than virtualizing a massive Supermicro box; and considering that the design time would mean the thing was kind of old by the time we were done, there probably isn't a lot of profit to be had. But if the trends continue, this may change; and I can tell you right now that customers are willing to pay a premium for dedicated hardware, and for good reason.
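Roughly, the back-of-envelope math behind that "close" claim looks like this; every number below is an illustrative assumption of mine, not a quote from any vendor:

    # Cost per dedicated 2GiB "server": small ARM boards vs. slices of a big box.
    # All prices and specs are illustrative assumptions, not vendor quotes.
    arm_board_cost = 100            # assumed custom board: 2GiB RAM, no video
    psu_cost = 80                   # assumed PSU shared by every 20 boards
    arm_per_guest = arm_board_cost + psu_cost / 20.0

    big_box_cost = 6000.0           # assumed 2U box with 128GiB RAM and disks
    guests_per_box = 64             # 2GiB guests, no overcommit
    big_per_guest = big_box_cost / guests_per_box

    print("per-guest hardware cost, ARM board: $%.2f" % arm_per_guest)
    print("per-guest hardware cost, big box:   $%.2f" % big_per_guest)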
(Yes, you have the management problem of thousands of servers, but most of the things that don't have PXE have some way to boot from serial, and I can deal with that. Having managed thousands of virtuals and tens of thousands of hardware nodes, I'm not convinced that virtuals are all that much easier to manage, once you have a good system in place.)
What you're not taking into account is that thousands of small servers will likely fail more often than dozens of big servers. I think you'll find that this, plus the fact that you'd have to maintain much more hardware (the servers themselves and the switches), tips the balance.
I've managed clusters of 60K-plus large servers. I mean, I didn't manage them by myself; my primary job was automating hardware failure detection, but I took my turn on the pager. My personal focus was seeing to it that bad hardware didn't go back into production; as someone who shared the pager, that was... offensive. But really, hardware failure is less of an issue than people make it out to be. I mean, sure, it happens. But not nearly as often as software issues that bring down the box, unless you count bad hard drives; those happen all the time.
Even with 60K servers, I quickly learned that if I found a lot of servers that were bad in similar ways, I needed to be careful before sending them back to the vendor. More than once I embarrassed myself by blaming software problems on the hardware. For the first 3-5 years of its life, if the computer was properly assembled, very likely the only things that will need replacement are the disks.
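To put rough numbers on that tradeoff: the annualized failure rates below are illustrative assumptions (real rates vary a lot by vendor and environment), but they show why disks dominate either way:

    # Expected failures per year: a fleet of small boards vs. a few big boxes.
    # The AFR (annualized failure rate) figures are assumptions for illustration.
    small_boards, small_afr = 1000, 0.03   # fanless boards, no moving parts
    big_boxes,    big_afr   = 25,   0.08   # PSUs, fans, many more components
    disks,        disk_afr  = 2000, 0.04   # two disks per small board

    print("small-board failures/year: %.0f" % (small_boards * small_afr))
    print("big-box failures/year:     %.0f" % (big_boxes * big_afr))
    print("disk failures/year:        %.0f" % (disks * disk_afr))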
My thought would be to treat the things like hard drives. Two options: for the lowest tier of support, if your system fails, I leave it and provision a new one, leaving the dead ones in the rack until it's time to upgrade. The second option is to put the boards on some sort of hot-swap power backplane, and when one fails, pop it out and replace it like a hard drive.
The question would be "how expensive is it to make the hot-swap system, and does the manufacturer give me a warranty that makes it worth my time to swap 'em out and send 'em back."
I suspect that if I can get 'em to give me a discount for skipping the warranty, my best bet would be to set it up so I could hot-swap whatever I used for storage; if anything but storage failed, I'd just leave the system in place.
Sure, maybe I'd want to offer 'premium' support with mirrored storage, where I'd swap your storage if it fails; and yeah, in that case expenses would be higher than for a giant server with fewer disks. (By fewer disks, I mean fewer disks per 256MiB guest. Obviously it has more disks in total.) I/O performance is probably also going to be worse on the giant server with fewer disks, so I can charge more for the 'premium' dedicated hardware with dedicated I/O.
I mean, you get into the 'all troubleshooting must be automated' range much faster with those than with physical hardware, but somewhere between a couple hundred and a couple thousand hardware nodes, you are going to get there either way.
OK, well I'm certainly more convinced now by your idea, but I feel that there are too many unknowns to be able to say confidently that it would actually be a viable alternative.
Oh yeah, it's definitely still in the "maybe" zone; and right now things still lean towards VPSs being enough more efficient that it doesn't make sense to take little ARM servers seriously. I'm just saying that, as far as I can tell, things are moving towards the point where tiny ARM servers could, maybe, be competitive with VPSs.
The big problem stopping me from betting my money on little ARM servers is that nobody makes commodity-priced ARM boards that use the latest ARM CPUs, have reasonable amounts of RAM, and don't come with a bunch of stuff, like video, that I don't want to pay for.
I mean, even if I had that, it's still pretty borderline and quite possible that I'm wrong and it still wouldn't make sense; but it's close enough that I'd be willing to bet a large (by my standards) chunk of my money on it.
* It seems likely that there exists an ARM IC that would suit your purposes; you'd just need to commission a PCB, and if you're planning on having tens of thousands of them, that would be no great cost.
* Having more memory will increase the board area and cost significantly.
* You might be better off PXE booting and using a SAN rather than having local disks.
* If you wanted to test the waters, you could try stuffing dozens of Raspberry Pis into a 4U case with a switch and co-locating. If this took off, you could develop your own custom hardware.
Well, a SAN has other problems. The primary advantage of a dedicated server over a VPS is I/O, and with SANs, well, you know how with software you say "good, fast, cheap; pick two"? With a SAN, the hard part really is the software. In this case, "good" means "reliable", and you have to be really careful to get two; usually you only get one.
I have a lot of admiration for Amazon's engineering, but even they can't come up with a reliable, performant, and cost-effective SAN solution; and they are able to charge a lot more per gigabyte and per IOPS than I can.
Now, it's a tradeoff, I mean, local hard drives will /massively/ increase the failure rates, and moderately increase the cost of the individual servers.
I've got someone close to me who is an ARM person, has done PCB layouts, and has participated in making custom SBCs. According to this person, the nice, fast dual-core chips you see in cellphones, the ones we'd like to use, are often unavailable in quantities of around 1000, which is where I'd want to start. (Even that, 1000 boards at a target cost of $100 each, is going to require investment. I can cover maybe 1/5th to 1/10th of that out of my own money, but there is zero chance of custom hardware being economical at the volume I can pay for myself. My understanding is that you start to see reasonable pricing at around 1000 units.)
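The investment arithmetic is simple enough; the unit count and target price are the ones above, and the split is just my own guess at what I could cover:

    # First-run arithmetic for a custom board.
    units = 1000               # roughly where per-unit pricing gets reasonable
    target_unit_cost = 100     # target cost per board, USD
    run_cost = units * target_unit_cost

    self_low, self_high = run_cost / 10.0, run_cost / 5.0   # what I could cover
    print("first run:       $%d" % run_cost)
    print("I can cover:     $%d - $%d" % (self_low, self_high))
    print("outside capital: $%d - $%d" % (run_cost - self_high, run_cost - self_low))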
Memory, yeah, will be a big cost. But it's also really, really important; until you have enough RAM, nothing else matters.
OK then, it seems unlikely that you'll be able to build custom hardware without an investment, and it also seems unlikely you'll be able to get investment without a proof of concept.
How about, as a proof of concept, Raspberry Pis, each with a 16GB SD card? I know there are obvious problems: Ethernet speed, non-volatile storage speed, and only 256MB of memory, but do you think it would work as a proof of concept?
Depends on what you are testing. The big problem is that the 256MiB-RAM server market sells to different people than the 2048MiB-RAM server market; different people buy them for different reasons. I mean, there is some overlap, but the 256 people? Mostly hobbyists, plus some really cheap people doing development or test. The 2048 people? People who are well off doing test and development, or cheap people doing production. 2048 is the top end of my people and the bottom end of 'serious people'. I wanted to sell dedicated servers as something higher-end than my VPSs, and you'd need at least 2048MiB of RAM for that. Preferably double that or more, but we take what we can get.
On the other hand, if I just wanted to demonstrate that I could build a system to manage those small servers, then yeah, your suggestion is great. The management would be about the same. I'd be selling to a different market, though, so it wouldn't work as a market test; but I could show investors that I could make the things go.
I am interested in the 256MiB market; the hobbyists are my core customers, the people I know how to sell to and serve best. But really, having a stand-alone server is much less compelling, technically speaking, for those people than for the 2048 customers (and an SD card is less compelling than a pair of 2.5" drives). Really, I'm pretty uneasy about anything with unmirrored storage: while data loss has a lower dollar value for hobbyists than for commercial customers, they are usually less prepared, with backups and the like, to deal with hardware failures. Amazon can sell disks that go away; I can't.
Still, a bunch of 256MiB Raspberry Pis, each with two SD cards, priced in the half-sawbuck range, would probably sell pretty well to the demographic I know. Of course, that's good profit before you count the storage, and it assumes the things can handle Power over Ethernet or some other cheap PSU. In hosting you usually expect the hardware to pay for itself after 4-5 months if you don't count other costs.
For those people, the Ethernet speed (or really the SD card speed) won't be a huge problem. For ten bucks a month, what do you want?
Now, the other thing to think about is that VPS prices are about to drop again (I mean, resources for the same money will increase). I probably won't drop mine until signups slow or Linode makes a move, but you can get 8GiB registered ECC DDR3 modules for under a hundred bucks. That is silly cheap. You are now paying off your servers in four months even at frigging Dell prices (and nobody at my end of the market pays anything like the Dell tax). If my $8 plan (which right now has 256MiB of RAM) doubles to 512MiB? Suddenly you go from paying two bucks more for your own hardware to paying two bucks more and getting half the RAM. I mean, I'd still get takers, but not nearly as many.
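Here's a minimal sketch of why the RAM price matters so much; the module count, box size, and plan price are illustrative assumptions, and this only accounts for the RAM, not the rest of the server:

    # Why cheap registered ECC DDR3 lets a VPS plan double its RAM.
    # Module count, box size, and plan price are illustrative assumptions.
    modules, module_gib, module_cost = 16, 8, 100   # 128GiB of RAM for $1600
    guest_mib, monthly_price = 512, 8               # the $8 plan after doubling

    guests = modules * module_gib * 1024 // guest_mib
    ram_cost = modules * module_cost
    monthly_revenue = guests * monthly_price

    print("512MiB guests per box: %d" % guests)
    print("months for revenue to cover the RAM alone: %.1f"
          % (ram_cost / float(monthly_revenue)))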
And really, I don't know. I know 'my people' would be willing to pay me money for ARM servers; I really have no idea how that would translate to the behavior of 'serious people', so maybe it's best for me to play around at the 256 level first?
Man, I sure wish these things had ECC. That's important.
Drat, and they also don't have PoE; that's going to add to the cost and management hassle. (I mean, PoE is nice for this, because the switch is also a rebooter: if the thing gets really hung, just cut power to the port.) Without PoE, I will have to wire up a power supply /and/ a way to hard-reboot the servers, which I think I could do in one go with PoE.
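For reference, that "switch as rebooter" trick amounts to toggling PoE on the port. A minimal sketch, assuming a switch that implements the standard POWER-ETHERNET-MIB (RFC 3621) and the net-snmp command-line tools; the hostname and community string are placeholders:

    # Power-cycle a hung board by toggling PoE on its switch port.
    # Assumes the switch exposes pethPsePortAdminEnable (POWER-ETHERNET-MIB, RFC 3621).
    import subprocess, time

    SWITCH = "switch1.example.com"        # placeholder switch hostname
    COMMUNITY = "private"                 # placeholder SNMP write community
    OID = "1.3.6.1.2.1.105.1.1.1.3.1.%d"  # pethPsePortAdminEnable.group.port

    def set_port_power(port, on):
        value = "1" if on else "2"        # TruthValue: true(1)=on, false(2)=off
        subprocess.check_call(["snmpset", "-v2c", "-c", COMMUNITY, SWITCH,
                               OID % port, "i", value])

    def power_cycle(port, off_seconds=5):
        set_port_power(port, False)
        time.sleep(off_seconds)
        set_port_power(port, True)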
I've met so many people who've got this silly fantasy that there's something 'democratic' about hosting a web server at home. It's fine for a toy web site, but if you want your site to be fast and reliable, it's much more sensible to keep your server in a data center... in most cases, cheaper too.
I think Google's decision not to buy Skype is great. Skype uses supernodes, a decentralized address book, and other trickery to keep most of the load off their servers/NAT proxies. Whatever advantage Skype got from decentralization when it started in 2003 is long gone. Bandwidth is ridiculously cheap, and voice calls don't really take more than 16kbps with modern codecs.
If two Skype clients can't connect to each other directly due to NAT, they proxy the connection via supernode. Any Skype client with a public IP address can be a supernode, so they might use your bandwidth and computer to route calls.
Skype's brand and userbase alone are not worth $8.5B. Their P2P decentralization technology is worth ~$0.
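Some rough numbers behind "bandwidth is ridiculously cheap": the 16kbps figure is from above, while the call count and transit price are assumptions for illustration:

    # Rough cost of relaying calls centrally instead of via supernodes.
    # 16kbps per direction is the codec figure above; the rest are assumptions.
    codec_kbps = 16
    concurrent_calls = 1000000
    relay_factor = 2                 # a relay forwards both directions of a call

    mbps_needed = concurrent_calls * codec_kbps * relay_factor / 1000.0
    transit_per_mbps_month = 1.0     # assumed bulk transit price, USD

    print("bandwidth to relay %d calls: %.0f Gbps"
          % (concurrent_calls, mbps_needed / 1000))
    print("transit bill: ~$%d/month" % (mbps_needed * transit_per_mbps_month))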
The middle half or so of my post is devoted to explaining why SDSL is a bad idea, which I think justifies the more general title. I take it you disagree? Why?
If the last-mile bandwidth is provided by something that flexibly distributes bandwidth between "upstream" and "downstream" (thus inverting the meaning of the terms on a millisecond's notice!), or by fiber optics, or by shared media like a neighborhood Ethernet or cable modems, or by hop-by-hop forwarding in something other than a star topology, the reasoning doesn't apply. It specifically applies to DSL.
With physical connections, one high bandwidth connection will always be cheaper than many low bandwidth connections. So long as your p2p network is working over the internet, the reasoning still holds. Using expensive, last-mile bandwidth any more than necessary is not a good idea.
If you have a shared neighborhood Ethernet or geographically-based hop-by-hop forwarding, it's cheaper to get a file from your neighbor than from a data center, because it avoids using expensive last-mile bandwidth. If you have an inherently-symmetric medium like fiber, traffic in the less congested direction is free, and data-center space isn't.
So peer-to-peer works better than warehouse-scale computing in those cases.
The dynamic-allocation case is a little more complicated, and I'm less certain of my reasoning there. It's true that peer-to-peer architectures will still use more last-mile bandwidth overall in that case than data-center architectures, as much as twice as much, and so they'll still be less efficient. The difference is that you can make that tradeoff on a moment-by-moment basis, instead of choosing to permanently cut your bandwidth in half when you sign up for the service, before you know anything about the internet. That's a much lower bar.
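A concrete accounting of that "as much as twice as much", using an assumed 100MB file and ignoring protocol overhead:

    # Last-mile bytes consumed to deliver one 100MB file to one downloader,
    # on a star-topology last mile like DSL. Illustrative accounting only.
    file_mb = 100

    # Data-center model: the file crosses one last mile, the downloader's.
    datacenter_last_mile_mb = file_mb

    # Peer-to-peer model: it crosses the uploading peer's last mile (upstream)
    # and then the downloading peer's last mile (downstream).
    p2p_last_mile_mb = 2 * file_mb

    print("data-center model:  %d MB of last-mile traffic" % datacenter_last_mile_mb)
    print("peer-to-peer model: %d MB of last-mile traffic" % p2p_last_mile_mb)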
It is amazing to see that the current HN audience is unable to notice that consumers are switching from the PC+ADSL scheme to a laptop with an HSDPA USB stick or some Android/Apple device, and that global GSM carriers have already won that market with cheap data-only plans.
But it's OK; the typical US citizen hardly knows that other countries exist. ^_^
BTW, I'm currently in Sweden, where HSDPA coverage even in rural areas is the de facto standard. The same trends are going on everywhere in the world (Asia and Europe), except, of course, in the US. ^_^