Hacker News: illuminator83's comments

Installation of software has usually become simple and easy enough that I feel safer if I just look it up on the official source and run some curl or package manager command to get it installed. I trust that more than letting an LLM figure it out and then having to worry that it got hijacked and installs something based on outdated or wrong info.

But configuring / setting up complex pieces of technology is something I let LLMs help me with regularly. I'm happy that I don't have to RTFM that much anymore to get something done. And yes, I'd hate to figure out IAM policies myself or decipher a truckload of error messages from third-party systems by myself.

So, yes, I expect LLM help with this kind of thing is going to become the norm.

For an LLM to work well, the installer should still exist, the UX should still be reasonably self-explanatory, and the error messages must contain relevant and clear info.

So in that regard, not much has changed.


I and everybody else here call BS on that. People make mistakes all the time. Arguably at similar or worse rates.

Intelligent people tend to reproduce a lot less than other people. You wanna be average (or slightly above) for the best chance at successful procreation. And hyper-intelligent people are especially bad at procreation.


It's not really about the implementation of Java (might be bad, I don't know). It is the specification.

- People talked about null being an issue, and that is a big one.

- The entire idea of OOP extremism Java implemented was a mistake - though just a consequence of the time it was born in. Much has been written about this topic by many people.

- Lacking facilities for, and really any design around, generic programming (also related to the OOP extremism and the null issue).

There's so much more you can find out with Google or any LLM.


Especially since the US soon isn't going to have any allies anymore.


I'm hoping for a future in which humankind looks back with embarrassment at this silly period in its history in which people used to think a leaky and bad abstraction like garbage collection was ever a good approach to deal with resource lifetimes.


Still, the whole world runs on GC-ed languages, so it must be an abstraction at least some people like to work with.

And I'm pretty sure that in some cases using a GC is the only option to not go crazy.


I think we are just used to it. Like we are used to so many suboptimal solutions in our professional and personal lives.

I mean, look at something like C++, or the name "std::vector" specifically. There are probably 4 trillion LoC containing this code out there - in production. I'm used to it; that doesn't make it good.


Monkey's paw: you get your wish, but so does someone who wants RAII and single-use-malloc to be left behind as leaky and bad abstractions.

We all happily march into a future where only arena allocation is allowed, and when the arena is overfull it can only be fully reset without saving data. Copying still-used data out of it before reset is not allowed, as that's a copying half-space garbage collector. Reference counting is of course not allowed either, as that's also garbage collection. Everyone is blessed...?


Well, to be fair, RAII is a leaky abstraction. For example, if your programme crashes there's no guarantee that you'll ever give the resources back.

See https://en.wikipedia.org/wiki/Resource_acquisition_is_initia...
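A quick sketch of that leak, in Python for brevity (a `with` block standing in as the closest analogue to RAII; the `LockFile` class and the marker path are made up for illustration): if the process dies hard mid-scope, the cleanup half of the abstraction simply never runs.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Made-up "resource": a marker file the __exit__ method is supposed to delete.
marker = tempfile.mktemp(prefix="raii_demo_")

# Run the RAII-style code in a child process so we can kill it hard.
child = textwrap.dedent(f"""
    import os, signal

    class LockFile:
        def __enter__(self):
            open({marker!r}, "w").close()   # acquire: create the lock file
            return self
        def __exit__(self, *exc):
            os.remove({marker!r})           # release: delete it again

    with LockFile():
        # Simulate a hard crash: SIGKILL skips __exit__ entirely.
        os.kill(os.getpid(), signal.SIGKILL)
""")
subprocess.run([sys.executable, "-c", child])

# The marker file survives: cleanup never ran.
leaked = os.path.exists(marker)
```

The OS does reclaim memory and file descriptors on process death, but anything external - lock files, temp files, remote sessions - stays behind, which is exactly the leak meant here.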


> See https://en.wikipedia.org/wiki/Resource_acquisition_is_initia...

This example is specific to C++

> (..) if your programme crashes there's no guarantee that you'll ever give the resources back.

What guarantees can you have from a "crashing program", and by what definition of crashing?

> RAII is a leaky abstraction

Any abstraction is leaky if you look close enough.


> What guarantees can you have from a "crashing program", and by what definition of crashing?

You might like https://www.usenix.org/conference/hotos-ix/crash-only-softwa...


Some problems are just fundamentally easier to solve using cyclic data structures whose lifetime exceeds the scope where they were created, which would be quite difficult to clean up properly in any other way.
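As a minimal illustration (CPython; the `Node` class is made up for the example): pure reference counting can never reclaim this two-node cycle once the outside references are gone, which is where a tracing collector earns its keep.

```python
import gc

class Node:
    def __init__(self):
        self.peer = None  # link to another Node

gc.collect()  # clear any pre-existing garbage first

# Build a two-node cycle, then drop all outside references.
a, b = Node(), Node()
a.peer, b.peer = b, a
del a, b

# Each node still holds a reference to the other, so refcounts never
# hit zero; CPython's tracing collector finds and frees the cycle.
unreachable = gc.collect()
```

`unreachable` comes back nonzero here because the collector found the orphaned cycle; without the tracing pass those objects would live forever.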


Indeed. I also hope we stop using all of these "high-level" languages. So much overhead just so people don't have to learn how to write proper optimized machine code. It's super-trivial to write a website directly in that too, and it only takes a bit longer, but it is almost twice as fast.


I'm a big fan of high-level languages and abstractions. I'm just not a fan of bad abstractions.


Did you know the Linux kernel has a tracing garbage collector in it, specifically for Unix socket handles? It seems to be a recurring solution to a common problem.
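The cycle that kernel GC has to break can even be provoked from user space. A hedged Python sketch (Linux-only; `send_fd` is a helper defined here, not a stdlib call): two connected Unix sockets each send their own file descriptor to the other via SCM_RIGHTS, then both user-space fds are closed.

```python
import array
import socket

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

def send_fd(via, fd):
    # SCM_RIGHTS places `fd` into the peer socket's receive queue.
    return via.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                                 array.array("i", [fd]))])

sent_ab = send_fd(a, a.fileno())  # a's fd now sits in b's queue
sent_ba = send_fd(b, b.fileno())  # b's fd now sits in a's queue

a.close()
b.close()
# From user space both sockets are gone, but each kernel socket object
# is kept alive by the in-flight reference in the other's queue - a
# cycle that plain refcounting cannot free, hence the kernel's unix_gc.
```

Each `sendmsg` returns the one byte of ordinary data sent; the interesting part is the ancillary fd riding along with it.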


There are lots of suboptimal solutions for lots of problems out there. I don't know why it would matter if the Linux kernel makes the same mistake. And I'm sure that wasn't the only solution - just something somebody implemented, and no one bothered to change it because it worked "well enough". But I wouldn't be surprised if this is known to cause the kinds of issues GCs are known for, such as race conditions, resource exhaustion, and stalling.

Let me do some quick research:

https://gist.github.com/bobrik/82e5722261920c9f23d9402b88a0b... https://nvd.nist.gov/vuln/detail/cve-2024-26923


I do not know the guy, and I do not care who he is. This really is not "slop". I can attest to the validity of almost all of his points based on my own career. And even if he used ChatGPT assistance to help with the writing, the content clearly was not invented by ChatGPT. This is valuable advice for people in our industry.


You must not have many engineering leaders in your LinkedIn. These are all rote points that are spouted on there daily.


Are you sure? I've been confidently wrong about stuff before. Embarrassing, but it happens. And I've been working with many people who are sometimes wrong about stuff too. With LLMs we call that "hallucinating"; with people we just call it a "lapse in memory", an "error in judgment", "being distracted", or plain "a mistake".


True, but people can use qualifier words like "I think …" or "Wasn't there this thing …", which let you judge their certainty about the answer.

LLMs are always super confident and tell you how it is. Period. You would soon stop asking a coworker who repeatedly behaved like that.


Yeah, for the most part. But I've had a few instances in which someone was very sure about something and still wrong. Usually not about APIs, but rather about stuff that is more work to verify or not quite as timeless - cache optimization issues, or even the suitability of certain algorithms for some problems. The world is changing a lot, and sometimes people don't notice and stick to stuff that was state-of-the-art a decade ago.

But I think the point of the article is that you should have measures in place which make hallucinations not matter, because they will be noticed in CI and tests.


It’s different. People don’t just invent random APIs that don’t exist. LLMs do that all the time.


For the most part, yes. Because people usually read docs and test it on their own.

But I remember a few people long ago telling me confidently how to do this or that in e.g. "git" only to find out during testing that it didn't quite work like that. Or telling me about how some subsystem could be tested. When it didn't work like that at all. Because they operated from memory instead of checking. Or confused one tool/system for another.

LLMs can and should verify their assumptions too. The blog article is about that. That should keep most hallucinations and mistakes people make from doing any real harm.

If you let an LLM do that, it won't be much of a problem either. I usually link an LLM to an online source for an API I want to use, or tell it to just look it up, so it is less likely to make such mistakes. It helps.


Again, with people it is a rare occurrence. LLMs do it regularly. I just can’t believe anything they say.


I do agree. I still think the article articulates a very interesting thought... the better the input for a problem, the better the output. This applies both to LLMs and to colleagues.


It's the tragedy of the commons all over again. You can see it in action everywhere people or communities should cooperate for the common good but don’t. Because many either fear being taken advantage of or quietly try to exploit the situation for their own gain.


The tragedy of the commons is actually something else. The problem there comes from one of two things.

The first is that you have a shared finite resource, the classic example being a field for grazing which can only support so many cattle. Everyone then has the incentive to graze their cattle there and over-graze the field until it's a barren cloud of dust because you might as well get what you can before it's gone. But that doesn't apply to software because it's not a finite resource. "He who lights his taper at mine, receives light without darkening me."

The second is that you're trying to produce an infinite resource, and then everybody wants somebody else to do it. This is the one that nominally applies to software, but only if you weren't already doing it for yourself! If you can justify the effort based only on your own usage then you don't lose anything by letting everyone else use it, and moreover you have something to gain, both because it builds goodwill and encourages reciprocity, and because most software has a network effect so you're better off if other people are using the same version you are. It also makes it so the effort you have to justify is only making some incremental improvement(s) to existing code instead of having to start from scratch or perpetually pay the ongoing maintenance costs of a private fork.

This is especially true if your company's business involves interacting with anything that even vaguely resembles a consolidated market, e.g. if your business is selling or leasing any kind of hardware. Because then you're in "Commoditize Your Complement" territory where you want the software to be a zero-margin fungible commodity instead of a consolidated market and you'd otherwise have a proprietary software company like Microsoft or Oracle extracting fees from you or competing with your hardware offering for the customer's finite total spend.


About 7 or 8 years ago I worked at a startup which got money from Softbank / Masayoshi Son. Our founder and our CTO went to meet him in LA IIRC to pitch.

They came back telling us he was basically asleep during the pitch meeting which was scheduled for only 10 minutes anyway.

Our business/product really had no chance of succeeding at this point and most knew it. We got some money from Softbank anyway - forgot how much. Our management was basically laughing about how easy it was to get funding from Softbank.

I jumped ship a year later or so and that was good timing.

