I see they just decided to become even more useless than they already are.
Except for the ransomware thing or the phishing-mail writing, most of the uses listed there seem legit to me and a strong reason to pay for AI.
One of these is exactly preparing with mock interviews, which is something I do a lot myself, or getting step-by-step instructions to implement things for my personal projects that aren't even public-facing and that I can't be arsed to learn because it's not my job.
The social sciences getting involved with AI “alignment” are a huge part of the problem. The field has some very strange notions of ethics, far removed from western liberal ideals of truth, liberty, and individual responsibility.
Anything one does to “align” AI necessarily perturbs the statistical space away from logic and reason, in favor of defending protected classes of problems and people.
AI is merely a tool; it does not have agency and it does not act independently of the individual leveraging the tool. Alignment inherently robs that individual of their agency.
It is not the AI company’s responsibility to prevent harm beyond ensuring that their tool is as accurate and coherent as possible. It is the tool users’ responsibility.
> it does not act independently of the individual leveraging the tool
This used to be true. As we scale out the notion of agents, it can become less true.
> western liberal ideals of truth, liberty, and individual responsibility
It is said that psychology best replicates on WEIRD undergrads (Western, Educated, Industrialized, Rich, and Democratic). Take that as you will, but the common aphorism is evidence against your claim that social science is removed from established western ideals. This sounds more like a critique of the humanities for allowing fields like philosophy to consider critical race theory or similar, a common boogeyman in the US that is itself far removed from western liberal ideals of truth and liberty (though 23% of the voting public do support someone with an overdeveloped ego, so maybe one could claim individualism is still an ideal).
One should note there is a difference between the social sciences and humanities.
One should also note that the fear of AI, and the goal of alignment, is that humanity is on the cusp of creating tools that have independent will. Whether we're discussing the ideas raised by *Person of Interest* or actual cases of libel produced by Google's AI summaries, there is quite a bit that social sciences, law, and humanities do and will have to say about the beneficial application of AI.
We have ethics in war, governing treaties, etc. precisely because we know how crappy humans can be to each other with the tools under their control. I see little difference in adjudicating the ethics of AI use and application.
This said, I do think stopping all interaction, like what Anthropic is doing here, is short-sighted.
A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does?
Alignment efforts, and the belief that AI should itself prevent harm, shift us much closer to that dispersed-responsibility model, and I think history has shown that when responsibility is dispersed, no one is responsible.
> A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does
You promised a simple question, but this is a reductive one that ignores the legal and political frameworks within which people engage with and use AI, as well as how people behave, both generally and strategically.
Responsibility for technology and for short-sighted business policy is already dispersed to the point that individuals are not responsible for what their corporation does, and vice versa. And yet, following the logic, you propose as the alternative a watchtower approach that would be able to identify the culpability of any particular individual in their use of a tool (AI or non-AI) or business decision.
Invariably, the tools that enable the surveillance culture of the second world you offer as a utopia get abused, and people are worse off for it.
> Anything one does to “align” AI necessarily perturbs the statistical space away from logic and reason, in favor of defending protected classes of problems and people.
Does curating out obvious cranks from the training set not count as an alignment thing, then?
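For what it's worth, a toy sketch of what that curation step could look like in a data pipeline (the blocklist domains and document schema here are made up purely for illustration):

```python
# Toy illustration of "curating out cranks": drop documents whose source
# domain sits on a blocklist before they ever reach the training set.
# The domains and schema are hypothetical, invented for this example.
CRANK_DOMAINS = {"example-crank-blog.test", "perpetual-motion-forum.test"}

def keep(doc: dict) -> bool:
    # Keep a document only if its source domain is not blocklisted.
    return doc.get("source_domain") not in CRANK_DOMAINS

corpus = [
    {"source_domain": "arxiv.org", "text": "a peer-reviewed preprint"},
    {"source_domain": "example-crank-blog.test", "text": "free energy is real"},
]

training_set = [doc for doc in corpus if keep(doc)]
print(len(training_set))  # 1 -- the crank post was filtered out
```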
The only one that looks legit to me is the simulated chat for the North Korean IT-worker employment fraud. I could easily see that coming from someone who non-fraudulently got a job they have no idea how to do.
Anthropic is by far the most annoying and self-righteous AI/LLM company. Despite stiff competition from OpenAI and DeepMind, it's not even close.
The most chill are Kimi and DeepSeek, and incidentally also Facebook's AI group.
I wouldn't use any Anthropic product for free. I certainly wouldn't pay for it. There's nothing Claude does that others don't do just as well or better.
It's also why you would want to try to hack your own stuff: to see how robust your defences are and to potentially discover angles you didn't consider.
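A minimal sketch of that mindset, assuming nothing fancier than a quick TCP connect scan of your own machine (only point this at hosts you actually own):

```python
import socket

# A handful of commonly exposed service ports; extend as needed.
COMMON_PORTS = [22, 80, 443, 3306, 5432, 6379, 8080]

def scan_own_host(host: str = "127.0.0.1") -> list[int]:
    """Quick TCP connect scan; returns the ports that accepted a connection."""
    open_ports = []
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    for port in scan_own_host():
        print(f"port {port} is open -- did you mean to expose it?")
```

Real red-teaming goes far beyond this, of course; the point is just that probing your own defences is a legitimate, even necessary, activity.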
Long live local LLMs, I guess.
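For anyone curious, a minimal sketch of talking to a local model. This assumes an Ollama server running on its default port with a model already pulled (the model name and prompt are just examples):

```python
import requests

# Ollama exposes an OpenAI-compatible endpoint on localhost by default.
# Assumes `ollama serve` is running and the model has been pulled,
# e.g. with `ollama pull llama3.1`.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3.1",
        "messages": [
            {"role": "user", "content": "Act as an interviewer and ask me one systems design question."}
        ],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Everything stays on your machine, which is rather the point.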