
Traditional bare metal involved complicated packaging and release processes. You would literally have a diagram of "these are the physical servers running this service" (LB, web server, etc.) and be able to point to them. You could overprovision workloads to tolerate failure, but you couldn't simply shuffle a service from one machine to another.

When I used to work in a similar environment we would develop code, then we would give it to a QA team. They would test it and give it to an Ops team. The Ops team would schedule a maintenance window and roll out the new code on each server. This happened maybe once a quarter because testing and releasing was a week-long process.

Racking new servers and provisioning them also required some manual labour. We had a process that used PXE to image the machines, but it was still toil. Virtualization was a big benefit because you could at least create and blow away VMs without having to re-image a whole server from scratch.
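
To give a flavour of that toil: a lot of it was templating boot configs per machine so each box would network-boot into an installer. A minimal sketch in Python of the kind of glue script involved (the TFTP path, MAC addresses, and kickstart URL here are hypothetical, and PXELINUX is assumed as the boot loader):

  #!/usr/bin/env python3
  # Sketch: write one PXELINUX config per host so each machine
  # network-boots into a kickstart install. All values are hypothetical.
  from pathlib import Path

  TFTP_ROOT = Path("/var/lib/tftpboot/pxelinux.cfg")     # hypothetical path
  KICKSTART_URL = "http://provision.example.com/ks.cfg"  # hypothetical URL

  HOSTS = {
      "web01": "aa:bb:cc:dd:ee:01",
      "web02": "aa:bb:cc:dd:ee:02",
  }

  TFTP_ROOT.mkdir(parents=True, exist_ok=True)
  for name, mac in HOSTS.items():
      # PXELINUX looks for a config file named 01-<mac-with-dashes>
      cfg = TFTP_ROOT / ("01-" + mac.replace(":", "-"))
      cfg.write_text(
          "DEFAULT install\n"
          "LABEL install\n"
          "  KERNEL vmlinuz\n"
          f"  APPEND initrd=initrd.img ks={KICKSTART_URL}\n"
      )
      print(f"wrote PXE config for {name}: {cfg}")

And even with that automated, someone still had to rack the box, cable it, and record the MAC before any of it ran.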

"Running your own cloud" implies that developers can treat instances like cattle and interact via an API. But it also means there's a standard set of tooling for fleet management. None of this stuff is entirely new but in small or mid-sized orgs it was out of reach 10 years ago.


