Because there isn't any x86 processor that beats a comparable ARM processor on efficiency. If anyone could make an efficient x86 processor, Atom would be it, and it's still less efficient than ARM.
The x86 ISA fundamentally takes more silicon to implement than ARM. More gates = more power.
Everything Intel sells today clobbers any currently-marketed ARM chip on computation performed per unit of energy. The race is not even close. ARM is only of interest if you are constrained by something other than compute (phones), or you don't know how to program and you are wasting most of the performance of your Xeons. The latter category contains nearly the entire enterprise software market and most other programmers as well.
Or your program is entirely constrained by I/O, so most of the power of a Xeon is wasted while you still have to pay the premium for it.
This chip is interesting not because of the CPU core in it, but because it has two presumably fast 10GbE interfaces and the possibility of a large amount of RAM in a cheap-ish chip.
There's another variable to throw into the mix: all gates are not created equal. A 28nm gate (this new processor) takes a lot more power than a 22nm gate (new Intel processors).
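To make the gate argument concrete, here's a sketch using the standard simplified CMOS dynamic-power model, P = a·C·V²·f. The 28nm-vs-22nm numbers are purely illustrative assumptions (a smaller node typically lets you shave both switched capacitance and supply voltage), not measured values for any real chip:

```python
# Simplified CMOS dynamic-power model: P = a * C * V^2 * f
# a = activity factor, C = switched capacitance (farads),
# V = supply voltage (volts), f = clock frequency (hertz).

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Dynamic switching power in watts under the a*C*V^2*f model."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

# Hypothetical figures: the 22nm gate switches 20% less capacitance
# and runs at 0.9 V instead of 1.0 V, at the same clock.
p_28nm = dynamic_power(0.1, 1.0e-9, 1.0, 1.5e9)   # 0.15 W
p_22nm = dynamic_power(0.1, 0.8e-9, 0.9, 1.5e9)   # ~0.097 W

print(p_22nm / p_28nm)  # ~0.65: same logic, ~35% less switching power
```

Even modest per-node reductions in C and V compound (V enters squared), which is why comparing gate counts across different process nodes is misleading.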
Do you have a source for any of this? x86 is much more powerful than ARM per watt, being exponentially faster at most math. I've never had anyone seriously propose that ARM is more efficient than x86 at anything when not pulling watts from a Li-ion battery.
Can you elaborate what you mean by "exponentially"?
For ARMv7 vs x86, yes, x86 just destroys ARMv7 (Cortex A15 etc.) in double (float64) performance.
While I do think x86 is still faster vs ARMv8, the gap is likely much smaller per GHz, because ARMv8 NEON now supports doubles much like SSE. Of course Haswell has wider AVX (256-bit) and the ability to issue two 256-bit-wide FMAs per cycle (16 float64 ops). Cortex A57 can handle just a quarter of that: 4 float64 FMA ops per cycle.
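The per-cycle figures above fall out of simple arithmetic: lanes per SIMD register × FMA units × 2 (an FMA counts as a multiply plus an add). A quick sketch, assuming the commonly cited configurations of 2×256-bit FMA units on Haswell and one 128-bit FMA pipe on Cortex A57:

```python
# Back-of-the-envelope peak float64 FLOPs per core per cycle.
# Unit counts are assumptions taken from the comment above, not measurements.

def peak_fp64_flops_per_cycle(simd_bits, fma_units):
    """FLOPs/cycle = float64 lanes * FMA units * 2 (FMA = mul + add)."""
    lanes = simd_bits // 64  # float64 lanes per SIMD register
    return lanes * fma_units * 2

haswell = peak_fp64_flops_per_cycle(256, 2)  # 4 lanes * 2 units * 2 = 16
a57 = peak_fp64_flops_per_cycle(128, 1)      # 2 lanes * 1 unit * 2 = 4

print(haswell, a57)  # 16 4
```

Note this is peak issue width only; sustained throughput also depends on clock speed, memory bandwidth, and how well the code keeps the FMA pipes fed.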
That said, low- to mid-level servers are not really doing much number crunching. They're all about branchy code such as business logic, encoding/decoding, etc., or waiting for I/O to complete.
So why would you care about math in a low-end server CPU if it's not being used anyway?