Hacker News | fulafel's comments

> baffled that Intel seriously thought they would've been able to persuade anyone to switch to it from x86

They did persuade SGI, DEC and HP to switch from their RISCs to it though. Which turned out to be rather good for business.


I suspect SGI and DEC / Compaq could look at a chart and see that with P6 Intel was getting very close to their RISC chips, through the power of MONEY (simplification). They weren't hitting a CISC wall, and the main moat custom RISC had left was 64 bit. Intel's 64 bit chip would inevitably become the standard chip for PCs, and therefore Intel would be able to turn its money cannon onto overpowering all 64 bit RISCs in short order. May as well get aboard the 64 bit Intel train early.

Which is nearly true: 64 bit Intel chips did (mostly) kill RISC. But not with their (and HP's) fun science project IA-64; they had to copy AMD's "what if x86, but 64 bit?" idea instead.


SGI and DEC, yes, but HP? Itanium was HP's idea all along! [1]

[1] https://en.wikipedia.org/wiki/Itanium#History


You're right of course.

I don't think Clojure belongs there. It was never as big as Kotlin, but it's got a great community and longevity, it takes backwards compatibility very seriously, and 10 year old Clojure projects seem to be aging at least as well as 10 year old Java projects.

Yes, but it would move very slowly compared to current freight ships: think an order of magnitude lower average speed. (You can compromise on the freight features to get some more speed of course, but it's still going to be slower unless you do something dramatic like flying a huge PV array as a kite or something.)

Besides PV, there's a long history of wind powered ships of course.


Vimeo link is paywalled.

Not "paywalled". They just want you to log in. This costs nothing. Not does the account registration.

The solution seems to be to have the video "rated" in some way. Or host it elsewhere.


>They just want you to log in. This costs nothing.

You pay with your data...


The whole 2.5G spec is a weird step for Ethernet speeds too. It's unfortunate it took off.

They said the same thing about 40G but hey, I've loved it for bridging the gap between my two (10G and 100G, respectively) Mikrotik switches. You can have a dozen Gigabit ports, as well as up to four true 10G devices on your aggregation switch, and neither would be bottlenecked by traffic to and from the backside. This has been a massive boon. However, when it comes to 2.5G, I struggle to find one good reason to use it; such a tiny step-up in bandwidth, and for what?

> However, when it comes to 2.5G, I struggle to find one good reason to use it; such a tiny step-up in bandwidth, and for what?

Portability and heat. You can get a small USB 2.5G adapter that produces negligible heat, but a Thunderbolt 10G adapter is large and produces a substantial amount of heat.

I use 10G at home, but the adapter I throw into my laptop bag is a tiny 2.5G adapter.


The power has come down a lot, and drops further on shorter runs: from ~15 W per port at the max 100 m cable length back in 2006 to ~2 W with today's tech and a short cable run.

There's of course fiber too...


I’m sure it depends on the model, but in my experience if you force a 10G copper transceiver to 2.5G the insane heat generation goes away. I don’t have any Thunderbolt 10G adapters, but I’m kind of surprised they’re much larger. An SFP+ transceiver is the same size as an SFP one.

I think a major reason for the size is heat dissipation, because it has to be prepared to handle the heat of a full 10G copper connection. Mine runs hot.

Most of my cables coming out of the aggregation switch are DAC and fiber, but there is one 10G copper run because my PC came with a 10G copper NIC integrated. Anyway, the difference in heat between this transceiver and the others is shockingly large.

I knew it ran hot before I deployed it, but I wasn't aware that you have to wait for it to cool down before unplugging it, or you get burnt.


A 1x PCIe 3.0 link has 8 Gbps raw speed - for 2.5 Gbps full-duplex Ethernet you'll need 6~7 Gbps of raw link to the CPU.

For 5 Gbps and higher, you'll need another PCIe lane - and SOHO motherboards are usually already pretty tight on PCIe lanes.

10GbE will require 4x3.0 lanes


> 10GbE will require 4x3.0 lanes

PCIe 3.0 is irrelevant today when it comes to devices you want on 10G. I'm pretty sure the real reason is that 2.5G can comfortably run on the cable you used for 1G[1], while 10G gets silly hot or requires transceivers and user understanding of a hundred 2-3 letter acronyms.

Combine it with ISP speeds lagging behind. 2.5G, while it feels odd to some, makes total sense in the consumer market.

[1]: at short distances; I had to replace one run with shielded cable to get 2.5G, but it had PoE, so that might have contributed to noise.


PCIe is full duplex. And there's no requirement for Ethernet ports to be able to run at full tilt. Even with a 1x PCIe 3.0 link, a 10G port will be much, much better than a 2.5G one.

(But PCIe 3.0 of course is from 2010 and isn't too relevant today - 4.0, 5.0, 6.0 and 7.0 have 16/32/64/128 Gbps per lane respectively)
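
A rough back-of-envelope sketch of the lane arithmetic (a minimal sketch, assuming the commonly quoted raw per-lane rates, subtracting only 128b/130b line coding and ignoring PCIe packet/protocol overhead):

  # Usable PCIe bandwidth per lane, per direction, vs. Ethernet line rate.
  # PCIe is full duplex, so each direction gets the full per-lane rate.
  PCIE_RAW_GBPS_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}
  ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line coding (PCIe 3.0-5.0)

  def lanes_needed(eth_gbps: float, gen: str) -> int:
      """Minimum lanes so one PCIe direction covers the Ethernet line rate."""
      usable_per_lane = PCIE_RAW_GBPS_PER_LANE[gen] * ENCODING_EFFICIENCY
      lanes = 1
      while usable_per_lane * lanes < eth_gbps:
          lanes += 1
      return lanes

  for eth in (2.5, 5, 10):
      print(f"{eth} GbE at line rate: {lanes_needed(eth, '3.0')}x PCIe 3.0 lane(s)")
  # -> 2.5 GbE: 1 lane, 5 GbE: 1 lane, 10 GbE: 2 lanes for full line rate;
  #    a 10G NIC on a single 3.0 lane still moves far more than 2.5G, just not at full tilt.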


Are motherboards commonly using PCIe 3.0 for onboard peripherals these days? I wouldn’t expect it to save them much money, but my PCIe knowledge is constrained to the application layer - I know next to nothing about the PHY or associated costs.

This has got to be it!

40G on Mikrotik is just channel bonding of 4 10G links at layer 2. It’s not like the vast majority of 100G that does layer 1 bonding. I really don’t know why they did it other than to have a bigger number on the spec sheet - I can’t imagine they save any money having a weird MAC setup almost nobody else uses on a few low-volume models.

Does the DNS auto-registration from DHCP work well with v6 as well in dnsmasq?

No, the local name → IP resolution will work for IPv4 only

Think back to the x86 32->64 bit transition, but much worse, since ARM is more niche and there are more arch differences.

You need all your 85 3rd party middlewares and dependencies (and transitive dependencies) to support the new architecture. The last 10% of which is going to be especially painful. And your platform native APIs. And your compilers. And you want to keep the codebase still working for the mainstream architecture so you add lots of new configuration combos / alternative code paths everywhere, and multiply your testing burden. And you will get mystery bugs which are hard to attribute to any single change since getting the game to run at all already required a zillion different changes around the codebase. And probably other stuff I didn't think of.

So that's for one game. Now convince everyone who has published a game on Steam to take on such a project, nearly all of whom have ages ago moved on and probably don't have the original programmers on staff anymore. Of course it should also be profitable for the developer and publisher in each case (and more profitable & interesting than whatever else they could be doing with their time).


The instruction set has marginal impact. But many power efficient chips happen to be using the ARM instruction set today.

I think that's still highly debatable. Intel and AMD claim the instruction set makes no difference... but of course they would. And if that's really the case, where are the power efficient x86 chips?

Possibly the truth is that everyone is talking past each other. Certainly in the Moore's Law days "marginal impact" would have meant maybe less than 20%, because differences smaller than that pretty much didn't matter. And there's no way the ISA makes 20% difference.

But today I'd say "marginal impact" is less than 5%, which is way more debatable.


> And if that's really the case, where are the power efficient x86 chips?

Where are the power inefficient x86 chips? If you normalize for production process and put the chips under synthetic load, ARM and x86 usually end up in a similar ballpark of efficiency. ARM is typically less efficient for wide SIMD/vector workloads, but more efficient at idle.

AMD and Intel aren't smartphone manufacturers. Their cash cows aren't in manufacturing mobile chipsets, and neither of them have sweetheart deals on ARM IP with Softbank like Apple does. For the markets they address, it's not unlikely that ARM would be both unprofitable and more power-hungry.


Jim Keller goes into some detail about what difference the ISA makes in general in this clip https://youtu.be/yTMRGERZrQE?si=u-dEXwxp0MWPQumy

Spoiler: it's not much, because most of the actual execution time is spent in a handful of basic ops.

Branch prediction is where the magic happens today.


>Spoiler: it's not much, because most of the actual execution time is spent in a handful of basic ops.

Yet, on a CISC ISA, you still have to support everything else, which is essentially cruft.


Does that matter? I lean towards the yes-the-ISA-matters camp, but I'm also under the impression that most silicon is dark.

Jim Keller has to say that.

The stage is yours if you choose to refute him.

Intel spent years trying to get manufacturers to use their x86 chips in phones, but manufacturers turned them down, because the power efficiency was never good enough.

Well, they were targeting Android, the apps were emulating ARM on x86, and they were going against a strong incumbent. Accounts of this failure on the web seem to bring up other failings as the main problems.

E.g. this review of the AZ210 phone from 2012 seems to think the battery life was good: https://www.trustedreviews.com/reviews/orange-san-diego

"Battery life during our test period seemed to be pretty good and perhaps slightly better than many dual-core Android phone’s we’ve tested."


> the apps were emulating ARM on x86

They weren't (except some games maybe). Most apps were written in Java and JITed.


Well, apps tuned for performance and apps using native code have more than a little overlap. Even back then there were a lot of apps besides games that used native code for the hot code paths. But games of course are huge by themselves, and besides performance you need to have good power efficiency in running them.

Here's some more details: https://www.theregister.com/2014/05/02/arm_test_results_atta...

(note it's a two-part article; the "next page" link is in small print)


You're basically reiterating exactly what I just said. Intel had no interest in licensing ARM's IP; they'd have made more money selling their fab space for Cortex designs at that point.

Yes, it cost Intel their smartphone contracts, but those weren't high-margin sales in the first place. Conversely, ARM's capricious licensing meant that we wouldn't see truly high-performance ARM cores until M1 and Neoverse hit the market.


> Intel had no interest in licensing ARM's IP, they'd have made more money selling their fab space for Cortex designs at that point.

Maybe, but the fact remains that they spent years trying to make an Atom that could fit the performance/watt that smartphone makers needed to be competitive, and they couldn't do it, which pretty strongly suggests it's fundamentally difficult. Even if they now try to sour-grapes that they just weren't really trying, I don't believe them.


I think we're talking past each other here. I already mentioned this in my original comment:

  ARM is typically [...] more efficient at idle.
From Intel's perspective, the decision to invest in x86 was purely fiscal. With the benefit of hindsight, it's also pretty obvious that licensing ARM would not have saved the company. Intel was still hamstrung by DUV fabs. It made no sense to abandon their high-margin datacenter market to chase low-margin SoCs.

Just like laptop dGPUs.

What it says first is: "SQLite is for phones and mobile apps (and the occasional airliner)! For web servers use a proper database like Postgres!"

Though I'd say it's for a broader set of applications than that (embedded apps, desktop apps, low-concurrency server apps etc).

Phone and mobile app installations of course outnumber web app deployments, and it doesn't say what you paraphrased about servers.
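
As a minimal sketch of the "low-concurrency server apps" case (the table, paths and pragmas below are illustrative assumptions, not anything the SQLite docs prescribe; WAL mode is one common way to let readers proceed alongside a single writer):

  import sqlite3

  # Hypothetical single-process web backend storing page hits in SQLite.
  conn = sqlite3.connect("app.db")
  conn.execute("PRAGMA journal_mode=WAL")   # readers don't block the single writer
  conn.execute("PRAGMA busy_timeout=5000")  # wait briefly instead of failing on contention
  conn.execute("""CREATE TABLE IF NOT EXISTS hits (
      path TEXT NOT NULL,
      ts   TEXT NOT NULL DEFAULT (datetime('now'))
  )""")

  def record_hit(path: str) -> None:
      with conn:  # implicit transaction, committed on success
          conn.execute("INSERT INTO hits (path) VALUES (?)", (path,))

  def hit_count(path: str) -> int:
      (n,) = conn.execute("SELECT COUNT(*) FROM hits WHERE path = ?", (path,)).fetchone()
      return n

  record_hit("/index")
  print(hit_count("/index"))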

