Hacker News | Netcob's comments

Probably quite a lot; being a specialist in multiple domains is getting more difficult.


It's not about not knowing about an optimization. The challenge is knowing when to apply it, so that it doesn't cause regressions for cases that can't benefit from it. It may be less risky in specialized systems: BI systems, for example, typically don't need to worry about regressing OLTP workloads. Postgres absolutely needs to be careful of that.

I believe that's one of the reasons why it took about eight years (the original patch was proposed in 2017).


I don't think anyone would be investing "billions and billions" into AI if their endgame wasn't putting an expert salesperson right in front of every human in the world. Someone who knows all about them and who can not just sell things, but make the target think it was their idea all along.


I get that transforming a bunch of facts into prose is boring.

As a reader, I can't get over the fact that I'm supposed to read a text that nobody could be bothered to write.

I wonder how often we waste energy nowadays by telling an AI to turn a one-sentence-prompt into a full e-mail, only for the other side to tell the AI to summarize it in one sentence.


I would be happy to read boring facts without the fluff.


Yeah, that's the one problem I'd have with stickers.

I'm personally not interested, but I also would never make fun of people expressing themselves.

On the other hand... mandatory fun, mandatory self-expression, and anything that takes something very personal and turns it into official or unofficial company policy makes me sick. I'm glad it's not too common here in Germany.

It's like HR forcing you to listen to punk songs because the company wants to promote a rebellious spirit as long as it's compatible with "disruption". It's also a bit like being asked "why are you so quiet" by someone who said everything worthwhile 5 minutes after getting out of bed but never stopped yapping.


If Windows keeps going in this direction, I will try again.

But in the past 20 years I tried using Linux on the desktop a couple of times.

It always ends the same way - out of the blue it refuses to boot. Of course there's usually a solution, but I just really don't like that my PC can suddenly decide that I'll be troubleshooting for the rest of the day, usually in front of some very minimal "maintenance" CLI. And that's if I have the time - I may have to use my laptop for the rest of the week, now dreading the weekend instead of welcoming it.

Right now I'd have to do a bunch of research first. Would I still be able to play all the games I play with my friends once a week? I have 3 monitors, one of them has a different DPI than the others, did they fix that by now? I've got a Stream Deck, will that be essentially useless? Is my webcam / mic supported? Do I need to learn about various audio architectures before I can ever use a mic again? Which of the dozens of apps I use every day can be made to run under Linux?

It'll probably take a 40-hour work week to get to like 90% of where I was on Windows, and then I'd consider myself lucky that I got that much to work at all. And then I'd start waiting for the first "troubleshooting day".

With all that negativity I have to also say that I adore Linux on the server. When all you need in terms of hardware is basically a CPU and any number of storage devices and all you get in terms of UI is SSH, Linux is far superior to anything else.


If you want to avoid boot issues, stay away from Arch-based distros. Their goofy pacman installer has borked my boot numerous times. I prefer Debian-based ones, or, specifically for a stable desktop with recent-enough packages, Debian Testing.


Wouldn't all boot issues caused by pacman shenanigans be solved by setting up snapper or an equivalent? Luckily I haven't experienced one so far.


Yes, I do that. Half the time it just deletes the GRUB-generated image for some reason, so the solution is often to mount the drive and rerun the GRUB update. Ridiculous that it even happens, though.
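For anyone hitting the same thing: the "mount the drive and rerun the GRUB update" dance from a live USB usually looks something like the sketch below. The device names and mount points are illustrative assumptions (an ext4 root on /dev/sda2, BIOS-mode GRUB on /dev/sda); adjust them for your actual layout before running anything.

```shell
# Boot a live USB, then mount the broken system's root partition
# (assumed here to be /dev/sda2 - check with lsblk first).
mount /dev/sda2 /mnt

# Bind the virtual filesystems so tools inside the chroot work.
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys

# Enter the installed system.
chroot /mnt

# Regenerate the GRUB config that pacman's hooks clobbered...
grub-mkconfig -o /boot/grub/grub.cfg

# ...and, if the bootloader itself is gone, reinstall it
# (BIOS example; EFI setups use --efi-directory instead).
grub-install /dev/sda
```

With snapper on btrfs you can often skip all of this by rolling back to the pre-upgrade snapshot instead, which is the less stressful fix when it's available.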


That's what I said too, and the answer was "No, just because it cannot be decrypted today does not mean it cannot be decrypted in the future. The data must be deleted"


My guess is that the reason why AI works badly for some people is the same reason why a lot of people make bad managers / product owners / team leads. Also the same reason why onboarding is atrocious in a lot of companies ("Here's your login, here's a link to the wiki that hasn't been updated since 2019, if you have any questions ask one of your very busy co-workers, they are happy to help").

You have to be very good at writing tasks while being fully aware of what the one executing them knows and doesn't know. What agents can infer about a project themselves is even more limited than their context, so it's up to you to provide it. Most of them will have no or very limited "long-term" memory.

I've had good experiences with small projects using the latest models. But letting them sift through a company repo that has been worked on by multiple developers for years and has some arcane structures and sparse documentation - good luck with that. There aren't many simple instructions to be made there. The AI can still save you an hour or two of writing unit tests if they are easy to set up and really only need very few source files as context.

But just talking to some people makes it clear how difficult the concept of implicit context is. Sometimes it's like listening to a 4-year-old telling you about their day. AI may actually be better at comprehending that sort of thing than I am.

One criticism I do have of AI in its current state is that it still doesn't ask questions often enough. One time I forgot to fill out the description of a task - but instead of seeing that as a mistake it just inferred what I wanted from the title and some other files and implemented it anyway. Correctly, too. In that sense it was the exact opposite of what OP was complaining about, but personally I'd rather have the AI assume that I'm fallible instead of confidently plowing ahead.


I fully agree with this take and think a lot of people at this point are really just being uncharitable to those using AI productively, and unwilling to admit their own faults when they fail to see this.

How can anybody who has managed or worked with inexperienced engineers, or StackOverflow developers, not see how helpful AI is for delegating the kinds of tasks with that particular flavor of content and scope? And how can anybody who is currently working with those kinds of developers not see how much it's helping them improve the quality of their work? (And yes, it's extremely frustrating to see AI used poorly, or to see people submit code for review that they didn't review or even understand themselves. But the fact that that's even possible, and that it often still works, really tells you something... And given the right feedback, most offenders do eventually understand why they ought not to do this, I think.)

Even for more experienced engineers, for the kind of "unimportant / low priority, uninteresting" work that requires a lot of context and knowledge to get done but isn't really a good use of experienced engineers' time, AI can really lower the barrier to starting and completing those tasks. Let's say my codebase doesn't have any docstrings or unit tests - I can feed it into an LLM and immediately get mediocre versions of all of that and just edit it into being good enough to merge. Or let's say I have an annoying unicode parsing bug, a problem with my regex, or something like that which I can reproduce in tests or a dev environment: a lot of the time I can just give the LLM the part of the code I suspect the bug resides within, tell it what the bug symptoms are and ask it to fix it, and validate the fix.

To be honest and charitable to those who do struggle to use AI this way, since it's most likely just a theory of mind issue (they don't understand what the AI does and doesn't know, and what context it needs to understand them and give them what they want), it could very well be influenced by being somewhere on the autism spectrum or just difficulty with social skills. Since AI is essentially a fresh wipe of the same stranger every time you start a conversation with it (unless you use "memory" features designed for consumer chat rather than coding), it never really gets to know you or understand your quirks like most people that regularly interact with those with social difficulties. So I suppose to a certain extent it requires them to "mask" or interact in a way they're unfamiliar with when dealing with computer tools.

A lot of people, for whatever reason, also seem to have decided to become emotionally/personally invested in "AI stupid", to the point that they will just flat out refuse to believe there is value in being able to type some little compiler error or stack trace into a textbox and, 80% of the time, get a custom fix in 10% of the time it would have taken to do the same thing on Google search + StackOverflow.


Same - replaced my smaller Synology with a UGREEN, put TrueNAS on it first thing, runs great. The HDD thing was only the final nail in the coffin, but before that, there were plenty of ridiculous "upgrades" that made products worse than in the previous generation. Literally removing features, or continuing to use the same outdated hardware. That's what companies do that don't think they have competition.


ASUSTOR's latest gen hardware is ridiculous. Ryzen processors, upgradeable ECC RAM, 4xHDD + 4xNVMe, 10GbE plus a PCIe slot...

You need to add an external GPU for the TrueNAS installation, but they have an official video for that. On top of that, they connected the flash that stores the original firmware to its own USB port, and you can disable it, both preventing interference and protecting the firmware from accidental erasure.

Overall, a great design.

Yes, it's not cheap, but it's almost enterprise class hardware for home, and that's a good thing.


ASUSTOR looks interesting, but none of their desktop units appear to have PCIe expansion slots, so you can't put an SFP28 card in there. It might be possible via an expensive USB4 adapter.


I misremembered that Gen3 hardware had a spare PCIe slot, my bad.

You can either forgo the NVMe slots (which look like an add-on card in [0]) and get the slot, or use one of the USB4 interfaces. OTOH, it has 2x10GbE on board; you can just media-convert it.

[0]: https://www.youtube.com/watch?v=wWgc8W-hIWM


That seems like a lot of effort - is there no ability to boot a custom thumb drive that loads something like an SSH terminal, or dummy display for VNC?


The problem is not getting TrueNAS on a disk. You can do it externally, but you need to disable the on board flash storage and change the boot order from the BIOS.

That box is "just" an I/O optimized PC which can boot without a GPU.

Older hardware with Intel processors has an iGPU on board. You can use the HDMI output on these directly.


Same, but I went with minisforum (another well known mini-pc brand): https://store.minisforum.com/products/minisforum-n5-us

Installed Unraid on it and it's been working great. So long, Synology.


Do all the models support ECC ram? If not, does the website say clearly which do?

I've been looking on and off for a smallish NAS for some use, but I'd really like it to have ECC. As it stands, I'm considering more and more compromising on the size aspect and getting some ASRock + AMD combo.


If I understood it correctly, all G3 hardware running AMD processors support ECC RAM. It's clearly labeled though.

The one I'm planning to get is at [0]. It clearly states ECC RAM.

[0]: https://www.asustor.com/en/product?p_id=86


I bought a small ASUSTOR NAS at work to check it out, and I like it; it's definitely faster than comparable Synology units. However, the camera system is quite underdeveloped compared to Synology's. Synology's Surveillance Station rocks, and ASUSTOR has a long way to go in that niche.


Thanks, good to know. I just want my files, and a couple of containers doing my backups, that's all.


For some more recent crimes against society and humanity, I'd also compare it to the Stasi. There are plenty of people alive today who lived with that.

Around 1 in 30 people was secretly informing on their neighbors. After reunification, it was presented as a dark chapter in German history that had finally come to an end. People got to look into their own "file" to see what and how much had been written about their daily activities. I was a bit young at the time, but I do remember frequent discussions on TV about how to move on from this, and how to make sure it doesn't happen again.

And now we're talking about reading everyone's private messages on a scale that would be the Stasi's wet dream.

I wonder - if the Stasi had been presented as a legitimate way to fight CSAM - would that have been okay?


The Stasi, while more recent and the more apt name to use here, are still something not everyone knows about to the same extent as the Gestapo.


In Germany they certainly do.


Indeed. The Gestapo is "a long time ago"; the Stasi is something at least half the country knows through people personally affected by it.


Stasi works better if this is a purely German question, but this is an international issue. Gestapoware is way more obvious than Stasiware for people outside Germany, while both surely resonate inside the country.


They did say that those protesting in the street are outlaws who also rape and kill little children.


Unsurprisingly the Stasi and Gestapo types always say things like that.


The trouble is that the Stasi are not seen in as negative a light as the Gestapo.


In Germany?

I'm not German, but the German people I do know don't see them positively. It could be selection bias, though.


Only some rare Soviet-nostalgia types will see them as something positive. But they are indeed not considered as bad as the Gestapo, and I tend to agree.


Fern did a phenomenal video about the Stasi: https://www.youtube.com/watch?v=Aj7HX7I8KHs


Once in a while I save ~10 minutes by using AI. About as often, I embarrass myself by having to admit that my primary source was an AI while researching some topic.

The main thing that changed is that the CTO is in more of a "move fast, break things"-mood now (minus the insane silicon valley funding) because he can quickly vibe-code a proof-of-concept, so development gets derailed more often.

