I'm sure the trip to the police station and immediate release is a real setback for these people. Unless they're breaking more serious laws, no one is paying to put these people behind bars for any length of time.

I mean, you're right in theory, but in the real world things are very different.


I don't know, all the places I've lived (https://news.ycombinator.com/item?id=46029488) manage just fine. Must be some crazy black magic rocket science they are doing over in Germany or Britain or Turkey or Singapore or Australia to keep non-payers off their public transport.

And in many places I haven't lived but only visited, too.


Seems like you're making a judgment based on your own experience, but as another commenter pointed out, that experience was wrong. There are plenty of us out there who would confirm it: people are too flawed to trust, which is exactly why humans double- and triple-check, especially under high-stakes conditions (surgery).

Heck, humans are so flawed, they'll put the things in the wrong eye socket even knowing full well exactly where they should go - something a computer literally couldn't do.


Why on earth would the fallback, when a prompt is underspecified, be to do something no human expects?


“People are too flawed to trust”? You’ve lost the plot. People are trusted to perform complex tasks every single minute of every single day, and they overwhelmingly perform those tasks with minimal errors.


Extremely talented, studied, hard-working humans perform complex tasks all the time, and never with a 100% success rate over time.

In other examples, almost every single person has had the experience of saying "turn right", then "oh, I meant left, sorry, I knew it was left, I don't know why I said right". Even the most sophisticated humans have made this error. A computer would never.

Humans are deeply flawed, and even after pre-selection they require expensive training to perform complex tasks at a never-perfect success rate.


Intelligence in my book includes error correction. Questioning possible mistakes is part of wisdom.

So the understanding that AI and HI are different entities altogether, with only a subset of communication protocols between them, will become more and more obvious, as some comments here are already implicitly suggesting.


A spreadsheet would have been the better analogy.


Which wouldn't be an analogy, because spreadsheet programs can be considered, and often are used as, databases.


It could never be anywhere near as irresponsible as the original bad security practices, though. At some point, if you wanna make money by handling people's sensitive data, you are the responsible party, not everyone else.


I don't think it's the environmentalists stopping atmospheric tampering; it's other things, like the economics of it, or some countries being very against it.

Geoengineers have talked seriously about this for a long time, but it's mostly a political issue, and then there's the question of who actually wants to pay to do the science and pay for the outcomes, when you've got no real idea who will get destroyed in the long run.

Is Europe going to help fund it, when one consequence might be that they get less rainfall? Nobody really knows, so no one really wants to pay to do it at scale, forever.


If I remember correctly, the size of the rolling window differs: more modern vehicles may allow a discrepancy of about 100 codes before ignoring the transmitter, while older models might have allowed 5 to 10.
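
As a rough sketch of how that look-ahead window works (the constants and names here are illustrative, not any particular manufacturer's scheme):

    # Toy rolling-code receiver: accept a transmission only if its counter
    # falls within a fixed look-ahead window past the last synced value.
    WINDOW = 100  # newer receivers; older ones might use 5-10

    last_counter = 41_337  # last counter value the receiver synced to

    def accept(received_counter: int) -> bool:
        global last_counter
        # Must be strictly ahead (rejects replays) but not too far ahead.
        if last_counter < received_counter <= last_counter + WINDOW:
            last_counter = received_counter  # resynchronize to the fob
            return True
        return False  # outside the window: ignore the transmitter

Pressing the fob out of range advances its counter without the receiver seeing it, which is exactly the kind of discrepancy the window is meant to absorb.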


I guess there is a difference in the fact that gift certificates are always devaluing with monetary inflation, whereas prepaid credit can be used to purchase a service that is just as energy-intensive (a cost to the provider) in 10 years as it is if used in the next month. Yet in 10 years, the cost to the provider of delivering that same energy will be higher (due, again, to monetary inflation).

There doesn't seem to be a neutral option here, because it's very hard to account for inflation without the holding party paying dividends at the exact rate to offset it.
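
To put rough numbers on it (the 3% annual inflation figure is just an assumption for illustration):

    # Back-of-the-envelope cost of honoring $100 of prepaid credit
    # 10 years from now, assuming 3% annual inflation.
    credit = 100.0
    inflation = 0.03
    years = 10

    # Nominal cost to the provider of delivering the same service later:
    nominal_cost = credit * (1 + inflation) ** years
    print(f"${nominal_cost:.2f}")  # ~$134.39, versus the $100 collected up front

The provider only breaks even if the held $100 happens to earn exactly the inflation rate in the meantime, which is the "dividends at the exact rate" problem above.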


That's true, but isn't it awkward to type? Or maybe you're disabled, so typing is difficult or impossible.

So you use a voice memo to capture your words and email them, but it seems almost as silly as calling up and chatting to an LLM, which has the added benefit of being able to confirm it has understood your request and maybe even begin actioning it.

However, this is silly talk: the real future is just gonna be your agent, who you talk to directly, who then talks to the contractors' agent, who passes the info on to them in the exact format they like.


I would sooner make the argument religion is.


If people are actually relying on LLMs for validation of ideas they come up with during mental health episodes, they have to be pretty sick to begin with, in which case, they will find validation anywhere.

If you've spent time with people with schizophrenia, for example, they will have ideas come from all sorts of places, and see all sorts of things as a sign/validation.

One moment it's that person who seemed like they might have been a demon sending a coded message, next it's the way the street lamp creates a funny shaped halo in the rain.

People shouldn't be using LLMs for help with certain issues, but let's face it, those that can't tell it's a bad idea are going to be guided through life in a strange way regardless of an LLM.

It sounds almost impossible to achieve some sort of unity across every LLM service whereby they are considered "safe" to be used by the world's mentally unwell.


> If people are actually relying on LLMs for validation of ideas they come up with during mental health episodes, they have to be pretty sick to begin with, in which case, they will find validation anywhere.

You don't think that a sick person having a sycophant machine in their pocket, one that agrees with them on everything, is separated from material reality and human needs, never gets tired, and is always available to chat, is an escalation here?

> One moment it's that person who seemed like they might have been a demon sending a coded message, next it's the way the street lamp creates a funny shaped halo in the rain.

Mental illness is progressive. Not all people in psychosis reach this level, especially if they get help. The person I know could end up like this if _people_ don't intervene. Chatbots, especially those that validate delusions, can certainly escalate the process.

> People shouldn't be using LLMs for help with certain issues, but let's face it, those that can't tell it's a bad idea are going to be guided through life in a strange way regardless of an LLM.

I find this take very cynical. People with schizophrenia can and do get better with medical attention. To consider their descent predetermined is incorrect, even irresponsible if you work on products with this type of reach.

> It sounds almost impossible to achieve some sort of unity across every LLM service whereby they are considered "safe" to be used by the world's mentally unwell.

Agreed, and I find this concerning.


What’s the point here? That ChatGPT can just do whatever with people cuz “sickers gonna sick”?

Perhaps ChatGPT could be maximized for helpfulness and usefulness, not engagement. And the thing is, o1 used to be pretty good, but they retired it to push worse models.

