fbhabbed's comments | Hacker News

Even the title of that Wikipedia page is misleading, chosen in a way that almost forces an opinion on you (you can't be against a regulation made for such a noble purpose, can you?)

Why not just call the page Chat Control 2.0?


It's not the official name.

I'm surprised "nazi" isn't part of the title, as it has been in the past at the national level.


You will eventually get a canned AI-written reply (just like all the canned AI emails they have been receiving these days because of websites creating templates for them).


If this becomes a widespread bypass, wouldn't they just pass a law making Linux usage illegal unless you install some module?

I mean, look at all the geniuses saying "I'll just use a VPN" in response to the latest ID-based age-verification law. A week later, the law was amended to cover VPNs too.


We already have such a thing in Italy: Article 15 of the Constitution (the highest level in the legal hierarchy here).

It has been there for decades.


GPT5-Thinking if I need a precise answer with the fewest possible mistakes.

GPT5-Pro is the real deal.

Gemini if I need creative insights and a pleasant conversation, but this comes at the cost of more mistakes (and it's hella stubborn).

Hopefully Gemini 3.0 will fix this.


This alone should have made the law incoherent and impossible to pass


To whom? The politicians who will vote on it?


Should be taken to higher courts


Not impossible for politicians!


I see they just decided to become even more useless than they already are.

Except for the ransomware thing or the phishing-email writing, most of the uses listed there seem legit to me, and a strong reason to pay for AI.

One of them is preparing with mock interviews, which is something I myself do a lot; another is getting step-by-step instructions to implement things for my personal projects, things that aren't even public-facing and that I can't be arsed to learn because it's not my job.

Long live local LLMs, I guess.


Ever since they started using the term 'model welfare' in their blog, I knew it would only be downhill from there.


Welfare is a well-defined concept in social science.


The social sciences getting involved with AI “alignment” is a huge part of the problem. It is a field with some very strange notions of ethics far removed from western liberal ideals of truth, liberty, and individual responsibility.

Anything one does to “align” AI necessarily permutes the statistical space away from logic and reason, in favor of defending protected classes of problems and people.

AI is merely a tool; it does not have agency and it does not act independently of the individual leveraging the tool. Alignment inherently robs that individual of their agency.

It is not the AI company’s responsibility to prevent harm beyond ensuring that their tool is as accurate and coherent as possible. It is the tool users’ responsibility.


> it does not act independently of the individual leveraging the tool

This used to be true. As we scale the notion of agents out, it can become less true.

> western liberal ideals of truth, liberty, and individual responsibility

It is said that psychology best replicates on WASP undergrads. Take that as you will, but the common aphorism is evidence against your claim that social science is removed from established western ideals. This sounds more like a critique of the theories and writings of the humanities for allowing philosophy to consider critical race theory or similar (a common boogeyman in the US, which is far removed from western liberal ideals of truth and liberty, though 23% of the voting public do support someone who has an overdeveloped ego, so maybe one could claim individualism is still an ideal).

One should note there is a difference between the social sciences and humanities.

One should also note that the fear of AI, and the goal of alignment, is that humanity is on the cusp of creating tools that have independent will. Whether we're discussing the ideas raised by *Person of Interest* or actual cases of libel produced by Google's AI summaries, there is quite a bit that social sciences, law, and humanities do and will have to say about the beneficial application of AI.

We have ethics in war, governing treaties, etc. precisely because we know how crappy humans can be to each other with the tools under their control. I see little difference in adjudicating the ethics of AI use and application.

This said, I do think stopping all interaction, like what Anthropic is doing here, is short-sighted.


A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does?

Alignment efforts, and the belief that AI should itself prevent harm, shifts us much closer to that dispersed responsibility model, and I think that history has shown that when responsibility is dispersed, no one is responsible.


> A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does

You promised a simple question, but this is a reductive question that ignores the legal and political frameworks within which people engage with and use AI, as well as how people behave generally and strategically.

Responsibility for technology and for short-sighted business policy is already dispersed to the point that individuals are not responsible for what their corporation does, and vice versa. And yet, following the logic, you propose as the alternative a watchtower approach that would be able to identify the culpability of any particular individual in their use of a tool (AI or non-AI) or business decision.

Invariably, the tools that enable the surveillance culture of the second world you offer as a utopia get abused, and people are worse off for it.


> Anything one does to “align” AI necessarily permutes the statistical space away from logic and reason, in favor of defending protected classes of problems and people.

Does curating out obvious cranks from the training set not count as an alignment thing, then?


Alignment to a telos of truth and logic is not generally what AI researchers mean by alignment.

It generally refers to aligning AI behavior with human norms, cultural expectations, “safety”, selective suppression of facts and logic, etc.


Which uses here look legit to you, specifically?

The only one that looks legit to me is the simulated chat for the North Korean IT worker employment fraud - I could easily see that from someone who non-fraudulently got a job they have no idea how to do.


Anthropic is by far the most annoying and self-righteous AI/LLM company. Despite stiff competition from OpenAI and DeepMind, it's not even close.

The most chill are Kimi and DeepSeek, and incidentally also Facebook's AI group.

I wouldn't use any Anthropic product for free. I certainly wouldn't pay for it. There's nothing Claude does that others don't do just as well or better.


It's also why you would want to try to hack your own stuff: to see how robust your defences are and potentially discover angles you didn't consider.


This is not Cyberpunk 2077 and "AI psychosis" is trash, just like the article.

Someone who is mentally ill can use AI; that doesn't mean AI is the problem. A mentally ill person can also use a car. Should we ban cars?


A car won't manipulate you into ending your life or someone else's. You just get in the car and do it. An AI can lead you from a fragile state of mind to a suicidal one.

Not saying I want AIs to be banned or that the article is good; I'm just arguing that your analogy could potentially be flawed.


Even a song can do that, or a movie


Exactly, but definitely not a car.


AIs don’t have agency and cannot manipulate anyone either.


Fine, I am against this, and I will intentionally include these words in all my open-source projects.


Kinda stupid if you ask me. Bring back master/slave, white/black, and let's not be stupid: context matters.

