
https://oxide.computer/ has a simple mission: build hyperscale racks with open firmware/drivers, and cut wasted components like the HDMI ports that Dell/HP still throw on racks by default. ARM/RISCV/x86/GPU/TPU/storage ... those are just configuration options.

Companies like Lightedge will take on the HVAC and physical security. The future is chiplets and computational memory: https://www.researchgate.net/profile/Michael_Stumm/publicati...



> ARM/RISCV/x86/GPU/TPU/storage ... those are just configuration options.

But ISA still matters. Switching from Intel to AMD likely isn't much of a switch, but switching to a different ISA in the datacenter will continue to be an uphill battle.

As the article notes, the uphill battle will take longer for RISC-V, and if Nvidia follows the strategy laid out, it hopes to profit in the intervening gap.


> but switching to a different ISA in the datacenter will continue to be an uphill battle.

It really depends on what kind of stuff the datacenter runs.

I know of datacenters where almost all the code running is Java. For those kinds of datacenters, so long as the JDK supports ARM (which it does), I can't see why it would be such an uphill battle.

The uphill battle is really for sites that run lots of closed-source COTS software, especially software written in C/C++/etc, where moving to ARM needs support from the vendors, and the vendors might hesitate because of the amount of work involved. There are sites where close to everything is either open source or developed in-house in managed languages (Java, .NET, Python, Ruby, PHP, JavaScript, etc), and those sites are likely to find it a lot easier.


Even running an interpreted language, we've found that there are just a hundred small papercuts when it comes to switching over to an ARM distro. Missing native modules, third-party deps, even some shell scripts mysteriously not working properly. Over time this will hopefully ease, and having all developers on Macs using ARM will help massively, but right now there's just enough friction to make it not quite worth it.


> even some shell scripts mysteriously not working properly

I'm wondering how this could happen.

I'm guessing the script is using uname to detect the platform, and gets confused by Linux on ARM.

I used to see a lot of shell scripts which detected Linux vs Solaris vs AIX vs HP-UX and did different things on each, especially due to differences in what commands and options are available. Given those commercial Unices are now shadows of their former selves, you don't see that so much any more.

But I still see it in scripts that have to run on both macOS and Linux. I've even written a few of those scripts recently.

This shouldn't be an issue, though, if you just do `uname -s` – you should get e.g. `Linux` on both ARM and x86. Maybe some people, for whatever reason, are checking `uname -p` or `uname -m` instead or as well, or even trying to parse the output of `uname -a` – not a very good practice.
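A minimal sketch of the safe pattern, assuming a POSIX shell: branch on `uname -s` for the OS, and only consult `uname -m` when you genuinely need the architecture.

```shell
#!/bin/sh
# `uname -s` reports the kernel name, which is identical on ARM and
# x86 (e.g. "Linux" on both, "Darwin" on macOS) -- safe for OS checks.
case "$(uname -s)" in
    Linux)  os=linux ;;
    Darwin) os=macos ;;
    *)      echo "unsupported OS" >&2; exit 1 ;;
esac

# `uname -m` reports the machine architecture, and it DOES differ:
# "x86_64" vs "aarch64" (Linux) or "arm64" (macOS). Only branch on it
# when the architecture actually matters (e.g. picking a binary to
# download), and handle the values you haven't tested for explicitly.
arch="$(uname -m)"
echo "os=$os arch=$arch"
```

Scripts that instead parse `uname -a` or test `uname -m` for an OS check are the ones that mysteriously break on an ARM distro.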


Ehh, once you get to a certain point you wind up poking things under /sys and /proc that are a little different per-architecture, or building snippets of C code that make syscalls which behave differently per-architecture. Not to mention that ARM distros are usually a little different, either lagging behind or a little ahead of what exists for x86 - although it's better than it used to be, especially with ARM64.
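One concrete example of such a papercut, as a sketch (the field names are from typical x86 and ARM Linux kernels; exact /proc/cpuinfo contents vary by kernel version and SoC):

```shell
#!/bin/sh
# /proc/cpuinfo is laid out differently per architecture: x86 kernels
# expose "model name" and "flags", while many ARM kernels expose
# "CPU implementer" and "Features" instead. A script that greps for
# "model name" silently comes up empty on ARM.
if grep -q 'model name' /proc/cpuinfo 2>/dev/null; then
    grep -m1 'model name' /proc/cpuinfo
elif grep -q 'CPU implementer' /proc/cpuinfo 2>/dev/null; then
    echo "ARM-style cpuinfo: no 'model name' field here"
else
    echo "no /proc/cpuinfo, or an unrecognized layout" >&2
fi
```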


You're just making excuses. This is simply not true. Get an ARM server on Packet, install your software, and you'll notice nothing.


> But ISA still matters. Switching from Intel to AMD likely isn't much of a switch, but switching to a different ISA in the datacenter will continue to be an uphill battle.

I'd expect datacenter to be far easier than in a laptop or such; datacenters run a lot more software that's either open source (and already working on multiple architectures, most of the time) or in-house (in which case the happy path is "add an extra build job that builds for the new systems"). It's not like consumer space where most users are tied to proprietary software that they couldn't port if they wanted.
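For the in-house case, the "extra build job" can be as small as a second compiler invocation. A hypothetical sketch using Go's cross-compilation support (the `myservice` path is a placeholder, not from the thread):

```shell
# GOOS/GOARCH are the Go toolchain's standard cross-compilation
# knobs; no ARM hardware is needed just to produce the binary.
GOOS=linux GOARCH=amd64 go build -o bin/myservice-amd64 ./cmd/myservice
GOOS=linux GOARCH=arm64 go build -o bin/myservice-arm64 ./cmd/myservice
```

Managed-language stacks are similar: the same jar or script runs on both, and only native extensions need a per-architecture build.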


Apple took the spearhead by promising ARM laptops. As soon as that happens, it unlocks the final puzzle piece for the ARM-based datacenter.

Running locally is essential - things are so much easier to debug, especially things deep in the dependency tree.


I almost guarantee the delay in the build-out of their Waukee, Iowa data center is so they can fill racks with server-grade Apple Silicon.


Why Apple Silicon? Things are hard primarily because of the different ISA and the need to recompile everything, some of which doesn't recompile cleanly. Apple Silicon is still ARM.


> cut the wasted components like HDMI ports

Saving a few pennies is fine, but it hardly seems like a game changer.


HDMI firmware overhead can be amortized, but the wasted space is huge - racks should be designed to hook up to HVAC/power/network, not so techs can plug in monitors.


Not many details about that Oxide.

The team size I see in the photo doesn't give me much confidence in them pulling that off.


For whatever it's worth, the talk I gave at Stanford in February[1] goes into (much) more detail -- and given some of your other comments here, you will find many aspects of the talk educational.

[1] https://www.youtube.com/watch?v=vvZA9n3e5pc



