
Finding someone lost in the ocean is a huge challenge, but wouldn't we expect a boarding team to be well prepared for this sort of thing? Don't they carry locator beacons, radios, flares, or super-bright strobes? It seems strange that they could not find them, doesn't it?


Hopefully this will just keep getting better as charging networks, batteries, and charging technology all improve. We're already seeing good steps in all these areas.


This is such an important comment. I feel like so much of our discourse as a society suffers terribly from overloaded and oversimplified terminology. It's impossible to have a useful discussion when we aren't even in sync about what we're discussing.


That's kind of handy as a simple display/browser sanity test. It's very easy to see little processor hiccups on a busy machine.


Low power is great, but running a big RAID long-term without ECC gives me the heebie-jeebies! Any good solutions for a similar system that's more robust over 5+ years?


I think the trick is to go with a generation or two old Supermicro motherboard in whatever ATX case you can scrounge up, and then use either a low power Xeon or a Pentium/Celeron. Something like the X11SAE-F or X12SCA-F (or maybe even older) is plenty, though maybe not quite as low power. I still use an X9SCA+-F with some very old Xeon for a NAS and to run some LXC containers. It idles at maybe 20-30W instead of 5, but I've never had any issues with it, and I'm sure it's paid itself off many times over.


Even better, Supermicro will pick up the phone/answer emails, even if you bought a years-old secondhand server. They have the manuals, and are more than happy to help you out.

Love my X9 and X11 boards.


If you're on a budget, a used HP Z-Series workstation supports ECC RAM. A bare-bones one is cheap, though the ECC memory can be expensive since it's not the (plentifully available) server-type RDIMMs. Not a low-power setup either :)


Embedded SoCs like the AMD V2000, which Synology and others use.

If you want to step up to serving an entire case or 4U of HDDs, you're going to need PCIe lanes, though. In that case, W680 with an i5-12600K, a single ECC UDIMM, a SAS HBA in the PCIe slot, and integrated Ethernet is probably as low-wattage as you can get. Shame the W680 platform cost is so high; AM4/Zen 2 is cheaper to the point of still being viable.

You can also get Xeon, embedded Xeon, AM5, or AM4 (without an iGPU).

There's nothing inherently wrong with running a RAID without ECC for 5 years; people do it all the time and things go fine.


Been thinking of just getting a Synology with ECC support, but what I find weird is that the CPUs they use are 5+ years old. Feels wrong to buy something like that "new".

Same with the TrueNAS Mini.


For the most part, these are computers meant to stick around through 2-4 upgrade cycles of your other machines, just doing various low-power 24/7 tasks like file serving.

You could be like "well, that's stupid, I'm going to make a balls-to-the-wall build server that also serves storage with recent components," but the build-server components will become obsolete faster than the storage components. Consolidating on one computer invites incidental complexity: you end up trying to run things like Windows games on a NAS operating system, being forced onto ECC compromises absolute performance, you want the machine by your desk but also in a closet since it has loud storage, you're liable to run out of PCIe lanes and slots, and you want open cooling for the high-performance components but a closed case for the spinning rust. It's all a bit awkward.

Much simpler to treat the NAS as an appliance that serves files and maybe runs a Plex server, some surveillance, a weather station, rudimentary monitoring, and home automation - things for which something like a V2000 is overkill. Then use bleeding-edge chips in things like cell phones and laptops, and let the two computers do different jobs. Longer product cycles between processors make long-term support cheaper to maintain and keep prices low.


I have a 3U NAS I built in 2012 or so with a two-core Sempron, running Windows and using Storage Spaces, and it still holds up just fine.


It depends on what your requirements are. I've been using a low-end Synology box for years as a home dev server, and it is more than adequate.


Serving files is not compute intensive at all.


I'm running TrueNAS on a used E3-1245 v5 ($30 on eBay) and an Asus workstation mobo with 32 GB ECC and 4 spinning drives. Not sure individually, but the NAS along with an i5-12400 compute machine, router, and switch draws 100W from the wall during baseline operation (~30 containers). I'd consider that hugely efficient compared to some older workstations I've used as home servers.
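
For a rough sense of what that draw costs over a year, here's a back-of-the-envelope sketch; the electricity rate is my assumption, so plug in your local tariff:

    # Annual cost of a 100 W always-on homelab (rate is an assumed example)
    WATTS = 100
    RATE_USD_PER_KWH = 0.15

    kwh_per_year = WATTS / 1000 * 24 * 365  # ~876 kWh
    cost_per_year = kwh_per_year * RATE_USD_PER_KWH
    print(f"{kwh_per_year:.0f} kWh/year, about ${cost_per_year:.0f}/year")
    # -> 876 kWh/year, about $131/year

At typical US rates, that 100W baseline works out to roughly $10/month.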


I've been running an E3-1230v3 for over 10 years now. With 32GB ECC, 3 SSDs, 4 HDDs, and a separate port for IPMI, I'm averaging 35 W from the wall with a light load. Just ordered a Ryzen 7900 yesterday, and I guess the power consumption will be slightly higher for that one.


Agree. Didn’t even see ECC discussed.

Apparently this board supports ECC with this chip: Supermicro X13SAE W680 LGA1700 ATX Motherboard.

Costs 550.

One option is building around that, with some PCIe 4.0 to NVMe boards hosting as many NVMe drives as needed. Not cheap, but affordable for a home build.


You need workstation chipsets to have ECC on Intel desktop CPUs.

And yes, they start at around 500.


If you go back a few generations, the C246 chipset can be had on boards costing 200, and if you pair it with an i3-9100T you get ECC as well as pretty damn low power usage.


You are limited to PCIe 3.0 speeds there, though. But good suggestion.


That's true, but if your goal is low power, that's not necessarily going to be a bottleneck - even if you dedicate all 16 PCIe lanes to NVMe storage it's going to be more than fast enough for 99% of home server needs.
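
Quick math behind that claim; the per-lane figure is approximate usable throughput after encoding overhead:

    # PCIe 3.0: ~0.985 GB/s usable per lane (8 GT/s with 128b/130b encoding)
    lane_gb_s = 0.985
    total_gb_s = 16 * lane_gb_s   # ~15.8 GB/s across all 16 lanes
    ten_gbe_gb_s = 10 / 8         # 10 GbE tops out around 1.25 GB/s
    print(f"{total_gb_s:.1f} GB/s vs {ten_gbe_gb_s:.2f} GB/s "
          f"-> {total_gb_s / ten_gbe_gb_s:.0f}x headroom")
    # -> 15.8 GB/s vs 1.25 GB/s -> 13x headroom

Even a 10 GbE link, rare in home setups, saturates long before the lanes do.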


That's why I went with an i3-9100T and an ASRock Rack workstation board, which gets me ECC support (albeit UDIMM rather than RDIMM).


This sounds similar to a build I'm planning. I cannot find the workstation mainboards at a reasonable price though. They start at like 400€ in Europe.


There's an Asus one that's available as well, the ASUS C246 PRO - it's about 250 GBP.

I did build mine 2 years ago, so the C246 motherboards are less available now; the C252 is another option, which will take you up to 11th-gen Intel.


I would never run a self-hosted NAS when a Synology/QNAP is available as a dedicated appliance for around the same price.

The hardware is much more purpose equipped to store files long term, not just the 2-3 years you get between consumer SSD replacements.

It's not to say self-hosted storage can't or shouldn't be done; it's just a question of how many recoveries and transitions you've been through, because it's not an if but a when.


> The hardware is much more purpose equipped to store files long term

What hardware would that be, specifically? The low end embedded platforms that don't even support ECC?

> how many recoveries and transitions have you been through

3 or 4, at this point, using the same 2-disk ZFS mirror upgraded from 1TB to 3TB to 10TB.


QNAP and Synology, among others, have plenty of reasonably priced ECC-enabled equipment.

I'm not sure where "low end" comes from; most people start with a Pi, an old computer, or something similar, and grow from there.

Storage as a home appliance is a good thing.


The hardware is basically the same as a self-hosted NAS; the motherboard could even be of lower quality. The software, though, is closed source, and most consumer NASes only get support for 4-5 years, which is outrageous.


You're not buying from the right brand.

Synology supports their hardware for about 10 years after release. They are the "Apple" of NAS.


I think the average number of years between NAS purchases could easily be 5-10.


I bought a QNAP about a decade ago under the same assumption, but my experiences [0] there mean I'm unlikely to buy a SOHO-level storage appliance ever again.

The tl;dr of my rant was around shortcomings in NFS permission configuration, and a failure of the iSCSI feature (the appliance crashed when you sent it data).

Further, these appliances invariably use vanilla RAM sticks, so you're exposed to gentle memory-based file corruption you probably won't notice for years.
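
For anyone in that boat, a periodic checksum sweep at least surfaces silent corruption before it spreads into backups. A minimal sketch in Python (the share path and manifest name are made up; point it at your own data):

    import hashlib, json, pathlib

    ROOT = pathlib.Path("/srv/nas/photos")      # hypothetical share to watch
    MANIFEST = pathlib.Path("checksums.json")

    def sha256(path):
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if not MANIFEST.exists():                   # first run: record a baseline
        MANIFEST.write_text(json.dumps(
            {str(p): sha256(p) for p in ROOT.rglob("*") if p.is_file()}))
    else:                                       # later runs: compare against it
        for name, digest in json.loads(MANIFEST.read_text()).items():
            if sha256(pathlib.Path(name)) != digest:
                print("CORRUPTED:", name)

(ZFS with regular scrubs does this natively, which is the other reason it keeps coming up in these threads.)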

So I'd dispute that the hardware is 'better equipped', and I'd likewise dispute that the software as shipped matches the marketing promises accompanying same.

Things have doubtless changed - I'm sure those bugs are long gone now - but unless you're looking at an ECC appliance, I'd say you're better off building your own white box.

[0] https://jeddi.org/b/brief-rant-on-trying-to-use-iscsi-on-a-q...


> but unless you're looking at an ECC appliance, I'd say you're better off building your own white box.

Synology actually allows ECC DRAM, even sells it, and lists which models accept it.

But yeah, at the price of a full-featured model with an x86 CPU, SO-DIMM RAM, and 4+ drives, you are in the territory of building your own, with a lot more control and without DSM shenanigans (in Synology's case).

EDIT: the biggest problem here is actually finding a good case, because even ATX cases now usually don't have more than 2-3 3.5" bays by default and often have no 5.25" bays at all.

https://www.synology.com/en-us/products/DDR4


A decade ago might as well have been 40 years ago.

Both QNAP and Synology at the least offer ECC-capable NASes, so much so that they are getting better established in the SMB and enterprise space. It's a good thing you've pointed out, to include ECC in your appliance search.

My preference for storage as an appliance comes from over a decade of experience building my own boxes and running servers in my own rack in a data centre, out of my own pocket.

The cloud is someone else's computer, but I want my own storage and cloud, and something I can easily recommend to others when it comes up, even when they aren't techies.

Besides, if we're going that far back in time, we could just run a SCSI RAID array with Fibre Channel connected to a power-hungry server :)


I get your point - though neither company was young a decade ago (QNAP founded 2004, Synology in 2000) - and my problems were not related to 'what technology was like in the old days'; rather, the device was advertised with a feature that did not work (and indeed failed in a fashion that reliably crashed the machine).

The other feature - NFS - I'd say is one of two core features for a NAS (the other being SMB/CIFS) and it was lacking basic granularity in its ACL.

I'll note that at the time, the 'app store' associated with that product had a hundred or more rinky-dink add-ons & plugins - a PHP admin tool, a helpdesk product, that kind of thing. I suppose some customers found those useful.

But I'd argue those were less useful features for a NAS than being able to reliably receive & send files, or conveniently secure access to those files.

So my concern is not around ephemeral tech / feature failures, but the marketing-driven design, complexity of troubleshooting where it met the opacity of the software, and the barely-there value-add / extra-cost of a stack like that (especially for people handy with screwdrivers and a CLI).


Technology lacking a feature entirely, or lacking it in the way advertised, is a massive pain. The WD hard drive fiasco was bad too.

Looking at your list, I ended up going with QNAP to be safe... for the flexibility to get anything I needed the way I wanted. The choices at the time were an ARM-based CPU or a very low-power Intel Celeron. I went with the latter, only to realize I probably would have been fine with the Synology I was eyeing as well.

The bliss of set-and-forget, with multiple types and locations of backups happening reliably, is thanks to one change in the NAS space.

Existing players were expensive, but to the degree that there is profit in large enterprise storage customers wanting a local backup in each field office, for example, the NAS industry has been able to bring enterprise-class features downmarket relatively quickly and affordably.

Your posts made me recall how much I had to research enterprise features to find their equivalents in a NAS; the search ended up squarely in the prosumer/SMB models, and ultimately in the high-throughput NASes focused on video.

I somehow ended up with 2.5 GbE long ago. In my case I could have just bought used Fibre Channel SCSI enclosures, but I wanted to be mindful of power needs, and in turn backup power. It wasn't an ideal time to buy then, but it's much better now. In fact, I can probably buy a second or third used unit of what I have and they will work together pretty well, including ECC if I want.

I try to remember to follow a rule: buy 10-15% more capacity than I need, if not one level up, to get true long-term utility value. For example, always get at least 2 more empty drive bays than you need. If you're buying a car, consider a hatchback. If you're buying a hatchback, consider a small SUV, etc. Buy a little bigger and nicer to let your life grow into it, or buy thrice.


This is where I need to be eventually. I'm hesitant because I'm worried that I won't find any competent installers in my area and I'll end up with leaks, polluted well water, etc... Any troubles with the ground loop so far?


No issues whatsoever with the ground loops. They will supposedly outlive the lifetime of the home according to the installer (~120 years).

Only issue I’ve had at all is with the thermostat, which is an Ecobee Pro. While these smart thermostats are okay, they seem to mess up some of the more basic requirements of a thermostat, compared to the older dumb models.

For context, the system went live in January, all permits cleared in November. So about a year of use.


PGPR is polyglycerol polyricinoleate -- an emulsifier made from glycerol and fatty acids. It's used to modify the flowability of molten chocolate and is typically less than half a percent of the chocolate mix; if a company used more than that, the chocolate probably wouldn't harden. It's one of the hundreds of "mostly harmless" additives you see everywhere these days, not a filler. Whether all these additives are really OK once you put them in every single thing remains to be seen.


From the article it looks like there were 5 different executives overseeing the project during an approximately 4-year time frame. This sounds like a management failure to me rather than some failure of the American workforce or ingenuity.

Interestingly, the main executive involved got in some trouble with the SEC: https://www.sec.gov/news/press-release/2023-111

"Ansell received undisclosed compensation that consisted, in part, of $280,000 in personal expenses he charged to the company."

So there may be more going on than just poor oversight...


Most large corporations in America have a weird incentive structure problem. Nobody cares if the real work is done as long as their own jobs are protected.

This obviously means that if a savvy exec is putting the heat on a VP, the heat will transparently propagate down to the lowest-level leaf-node workers. None of the middle layers ever takes responsibility. The managers will even go to the extent of firing the leaf nodes rather than admitting failure.

If the incentive structure were changed so that management gets fired first, everything will get produced automatically.


You can delegate authority, but you cannot delegate responsibility.

Companies regularly ignore this, with enormous consequences for actual productivity.


The theory seems to be that a voltage spike can mangle an ADC recalibration, resulting in large "corrections" to the accelerator position sensor reading, which in turn produces unintended acceleration that "looks" just like someone pushing down hard on the gas pedal. (A toy numeric sketch follows the links below.)

Perhaps a better article here: https://www.carscoops.com/2023/07/feds-assess-allegations-th...

A very detailed paper by the fellow mentioned in that article here: https://www.autoevolution.com/pdf/news_attachements/breaking...
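
To make the theorized failure mode concrete, here's a toy numeric model - every value in it is invented for illustration, not taken from the paper:

    ADC_MAX = 4095                       # hypothetical 12-bit pedal ADC

    def pedal_percent(raw, offset):
        # Apply the stored zero-offset "correction" from recalibration
        return 100 * max(0, raw - offset) / ADC_MAX

    good_offset = 200                    # normal at-rest reading stored as zero
    spiked_offset = 200 - 3000           # spike during recalibration corrupts it

    idle_raw = 210                       # foot off the pedal
    print(pedal_percent(idle_raw, good_offset))    # ~0.2% - idle, as expected
    print(pedal_percent(idle_raw, spiked_offset))  # ~73% - reads as hard throttle

One corrupted calibration constant, and every subsequent reading looks like a floored pedal.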


So cool! I'm going to have to go play with the PDP-11/15 in my basement to get my old-hardware jones on.

This video of building a custom Apollo electroluminescent glass panel DSKY display is also fantastic! https://www.youtube.com/watch?v=Z2o_Sp2-aBo


Yeah, a lot of vintage stuff hitting the HN front page lately. I'm loving that.


Best channel on YouTube, hands down.

