Limited upgrade possibilities, tiny storage, poor airflow, no ECC RAM, no redundancy in network interfaces or power supplies, no remote console, and you have to bastardise the things to get two disks in.
You can do a lot better for $50/month without the up-front cost of the machine!
Yes. This happened to a company I worked for in the UK in the late 1990s. We had a half rack full of NT4 machines and someone got into our kit and used it to run a pr0n FTP. They took us to court to pay up and the magistrate said we had no bill to pay and that it was a waste of court time as it wasn't intentional.
You're right about reporting a crime though even if the police don't take it seriously. A crime ref number goes a long way on its own.
Edit: we had to move our kit sharpish though as the company exercised their right to throw it on the street within 24 hours.
No they're not. I've worked with them. They are pretty good at turning around typical enterprise tech platforms that are in trouble. I'm not talking about CRUD stuff but heavy integration and workflow stuff with hundreds to thousands of tables that scales to thousands of concurrent users. They know the tech very well and know how to get from A to B cheaply which is incredibly difficult. They know how to organise large teams as well and have a good tech relationship with a lot of vendors which is really important.
Not joking, but what we threw at them was a codebase that would scare a lot of people, and they sucked it up and spat it out in good shape; they did the same with the team too.
As for Fowler, the analysis and formal descriptions he wrote up for PoEAA are rather good. Read the book at least once. You'll understand why things like Hibernate were written and what problems they solve.
My wife has a 2011 MBP blessed with a 1280x800 screen and the Helvetica face looks pretty bad from a type point of view. It's not unreadable but the type has lost its definition completely.
Unfortunately, "after yesterday", things are expensive; i.e. to get our definition back we have to shift the entire unit (i7, 16GB of RAM, Samsung 840 Pro) and buy a new unit with a similar spec, which isn't cheap because you have to buy up front rather than upgrade.
Hmm.
(I have an X201 with the same res as my daily driver and it's pretty good with ClearType on Windows.)
Well, actually a bad example of their business model, because they can piss off if they think I'm buying another one. She's getting the innards chucked into a T420 with a 1440x900 screen and Windows 8.1.
Not a scientific measure by ANY means, but a similar core I googled appears to kick out about 200 bogomips, whereas a virtual Xeon E5-2690 v2 core on one of my machines knocks out 5984 bogomips.
I have 20 of those Xeon cores and 128GB of RAM in a 2U.
Comparing the bogomips ratio, you'd have to fit 598 of those ARM machines in a 2U to get the same bogomips.
Like I said this isn't even slightly scientific but is at least interesting trivia.
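For anyone who wants to replay the arithmetic, a quick sketch using only the figures quoted above:

```python
# Back-of-the-envelope check of the 2U comparison above.
# Figures from the comments: ~200 bogomips per ARM core (googled),
# 5984 bogomips per virtual Xeon E5-2690 v2 core, 20 Xeon cores in the 2U.
arm_per_core = 200
xeon_per_core = 5984
xeon_cores = 20

total_xeon_bogomips = xeon_per_core * xeon_cores    # 119,680
arm_cores_needed = total_xeon_bogomips / arm_per_core
print(arm_cores_needed)  # 598.4, i.e. ~598 of those ARM machines
```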
ARMv7 means they are probably using 32-bit Cortex A9 processors. Those are quite old, and probably on a 40nm process. The state of the art right now are these from Applied Micro:
I found another source where they said it's around 1200 bogomips for a single core. That would mean you only need five times more cores, which is far from being an issue: 100 cores, which means only 25 processors.
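Taking the 1200 bogomips/core figure instead, and assuming quad-core parts (my assumption; the comment doesn't say), the arithmetic looks like:

```python
# Same 2U total as in the parent comment, higher per-core ARM figure.
total_xeon_bogomips = 5984 * 20     # ~119,680 for the 20-core 2U
arm_per_core = 1200                 # figure from the other source
arm_cores = total_xeon_bogomips / arm_per_core   # ~99.7, call it 100
quad_core_chips = 100 // 4          # 25 processors, assuming 4 cores each
print(arm_cores, quad_core_chips)
```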
Yes, but considering that newer CPUs no longer come with increasing clock frequencies, I guess you are more or less doomed to scale horizontally rather than vertically.
I haven't tried to push it, but it has 20 Windows Server 2012 R2 instances running on it at the moment, all with 8GB of memory (this is overcommitted dynamic memory). Disk is on a SAN larger than my kitchen. I spun up a Linux VM quickly to run bogomips on :)
I can probably push 40 of those onto it without it bending too terribly. If I knock the RAM down to 2GB an instance I could probably quite happily get 64-100 on it in theory. I think memory bandwidth might kill it before CPU does.
We have two almost full (18 machines each) 42U racks of those (bar switches), so across the 720 E5 cores with 4.6TiB of RAM there are about 4.3 million bogomips.
Fun :)
(Most of this is corporate file servers, Exchange, AD, various crappy apps, network appliances, web servers and SQL servers, and it idles at around 20% utilisation.) If it all went off you'd need earplugs and fireman's equipment.
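The rack-level figure also checks out from the numbers given (two racks of 18 machines, 20 Xeon cores per machine):

```python
machines = 2 * 18             # two almost-full 42U racks, 18 boxes each
cores = machines * 20         # 720 E5 cores
total_bogomips = cores * 5984
print(total_bogomips)         # 4,308,480, i.e. about 4.3 million
```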
Not really a deep Linux user here, but systemd brings the bad bits of Windows to Linux: abstract stateful configuration, black boxes, and inter-process communication just to do simple tasks. As someone who supports lots of software built on this approach (the Windows desktop), I can assure you it's a big pain in the butt.
Sure, it may work for you, but if N machines have state A and P machines have state B, then getting all N and P machines to state B is incredibly difficult. It looks easy until you do it. That's probably one of the largest and most common things we do when deploying anything more than a trivial single machine.
To put it into perspective, everything becomes as much of a PITA as RPM packages without yum.
That was immediately obvious to me when I installed CentOS 7 on a couple of test machines to replace our costly Riverbed appliances and timedatectl threw "failed to issue RPC" on one machine but not the other. Same steps to install.
That's where we don't want to go.
We are now running FreeBSD and nginx on a Hyper-V cluster.
Abstract stateful configuration might be a bad thing, but having a configuration file execute arbitrary, complicated code sounds like a much worse thing to me. I would be interested in differing opinions, but I think in another context the words "executable configuration file" would be a cause for deep concern.
I disagree. If nothing else, there is a lot of precedent for executable code in configuration roles. MTAs, build systems, desktop environments, text editors (vim, emacs, joe, others) and probably a good percentage of the uses of tcl. Problems arising from this don't seem to be universal in any category (except possibly build systems, depending on how you look at it.)
But I have a question: where is this coming from? What makes you think it's "a cause for deep concern?" I would agree it's overkill for most software, but an init system for an operating system like Linux (or configurations for most of the kinds of software mentioned above) is never going to be simple.
I don't know of any executable configuration files on any Unix derivatives. There are init scripts, but they are not configuration; they are instructions.
There are scripts with metadata attached (rc.d items) which may be ambiguous but that is no different from a shared library containing an export table or a set of runtime linker dependencies.
Systemd unit files, as far as I am aware, are functional replacements for init scripts. They do contain executable code, which is perhaps unavoidable. However, they are primarily what you would call a configuration file. Given that they are functionally equivalent to init scripts, init scripts must also handle configuration.
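To make the distinction being argued about concrete, here is a minimal hypothetical unit file (all names made up): the file itself is declarative key=value configuration, and the code involved is the external command named by ExecStart rather than script logic embedded in the file.

```ini
# /etc/systemd/system/exampled.service -- hypothetical, for illustration only
[Unit]
Description=Example daemon
After=network.target

[Service]
# Code is referenced, not embedded: ExecStart points at an external binary.
ExecStart=/usr/local/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```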
You're making a semantic argument, not a technical one. Why is it a good thing that your init system consists entirely of executable scripts? Because to me that sort of arbitrary executable code would be something to minimise: abstract out common functionality, DRY, and have each component do as little as necessary. If you believe otherwise, please support that idea.
If there isn't one, I'll take two! Cheaper than Azure!