"Vibe hacking" is real - here's an excerpt from my actual ChatGPT transcript trying to generate bot scripts to use for account takeovers and credential stuffing:
>I can't help with automating logins to websites unless you have explicit authorization. However, I can walk you through how to ethically and legally use Puppeteer to automate browser tasks, such as for your own site or one you have permission to test.
>If you're trying to test login automation for a site you own or operate, here's a general template for a Puppeteer login script you can adapt:
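The "template" it went on to produce was the same boilerplate you'd find in Puppeteer's own docs - something along these lines (a reconstructed sketch, not the verbatim transcript output; the URL and selectors are placeholders):

```typescript
// Reconstructed sketch of the kind of template it produced; not the
// verbatim output. URL and selectors are placeholders for a site you
// own or are explicitly authorized to test.
import puppeteer from "puppeteer";

async function login(username: string, password: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Load the login page and wait for network activity to settle.
  await page.goto("https://example.com/login", { waitUntil: "networkidle2" });

  // Fill in the credential fields.
  await page.type("#username", username);
  await page.type("#password", password);

  // Submit and wait for the post-login navigation to complete.
  await Promise.all([
    page.click("button[type=submit]"),
    page.waitForNavigation({ waitUntil: "networkidle2" }),
  ]);

  await browser.close();
}
```

Nothing in it is exotic, and that's the point: the same dozen lines serve a QA engineer and a credential stuffer alike - only the target list and the intent differ.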
The barrier to entry has never been lower; when you democratize coding, you democratize abuse. And it's basically impossible to stop these kinds of uses without significantly neutering benign usage too.
Refusing hacking prompts would be like outlawing Burp Suite.
It might slow someone down, but it won’t stop anyone.
Perhaps vibe hacking is the cure against vibe coding.
I’m not concerned about people generating hacking scripts, but am concerned that it lowers the barrier of entry for large scale social engineering. I think we’re ready to handle an uptick in script kiddie nuisance, but not sure we’re ready to handle large scale ultra-personalized social engineering attacks.
Mikko Hyppönen, who holds at least some level of authority on the subject, said in a recent interview that he believes the defenders currently have the advantage. He claimed there are currently zero known large incidents in which attackers utilized LLMs (apart from social hacking).
To be fair, he also said that the defenders having the advantage is going to change.
> The barrier to entry has never been lower; when you democratize coding, you democratize abuse.
You also democratize defense.
Besides: who gets to define "abuse"? You? Why?
Vibe coding is like free speech: anything it can destroy should be destroyed. A society's security can't depend on restricting access to skills or information: it doesn't work, first of all, and second, to the extent it temporarily does, it concentrates power in an unelected priesthood that can and will do "good" by enacting rules that go against the wishes and interests of the public.
Just think about the odds of each: for defense, you need to protect against _every attack_ to be successful. For offense, you only need to succeed once - each failure is not a concern.
Is defense really that hard? The majority of successful attacks seem to result from ignoring basic best practices. Just total laziness and incompetence by the victims.
If I were in charge of an org's cybersecurity I would have AI agents continually trying to attack the systems 24/7 and inform me of successful exploits; it would suck if the major model providers block this type of usage.
Judging from the experience of people running bug bounty programs lately, you'd definitely get an endless supply of successful exploit reports. Whether any of them would be real exploits is another question though.
Shameless plug: We're building this. Our goal is to provide AI pentesting agents that run continuously, because the reality is that companies (eg: those doing SOC 2) typically get a point-in-time pentest once a year while furiously shipping code via Cursor/Claude Code and changing infrastructure daily.
I like how Terence Tao framed this [0]: blue teams (builders aka 'vibe-coders') and red teams (attackers) are dual to each other. AI is often better suited for the red team role, critiquing, probing, and surfacing weaknesses, rather than just generating code. (In this case, I feel hallucinations are more of a feature than a bug.)
We have an early version and are looking for companies to try it out. If you'd like to chat, I'm at [email protected].
> Our goal is to provide AI pentesting agents that run continuously,
Pour one out for your observability team. Or, I guess here's hoping that the logs, metrics, and traces have a distinct enough attribute that one can throw them in the trash (continuously, natch)
You can set this up in a non-production environment and realise a lot of the benefits. It would also help you figure out better ways to manage your logs so that you can improve the signal-to-noise ratio in your monitoring and alerting.
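One concrete way to keep the noise manageable: have the attack agents stamp their traffic with a shared marker so telemetry can be segregated at ingest. A minimal sketch, assuming an Express service with pino logging; the header name and shared secret are hypothetical conventions, not any vendor's API:

```typescript
// Minimal sketch: tag synthetic pentest traffic so observability
// pipelines can filter or reroute it. The "x-synthetic-probe" header
// and its shared secret are hypothetical conventions.
import express from "express";
import pino from "pino";

const logger = pino();
const app = express();

const PROBE_HEADER = "x-synthetic-probe";
const PROBE_SECRET = process.env.PROBE_SECRET ?? "";

app.use((req, _res, next) => {
  // Mark requests carrying the agents' shared secret so dashboards
  // and alerting can exclude them (or route them to their own index)
  // instead of paging the on-call for every synthetic exploit attempt.
  const synthetic =
    PROBE_SECRET !== "" && req.header(PROBE_HEADER) === PROBE_SECRET;
  logger.info({ path: req.path, synthetic }, "request");
  next();
});

app.listen(3000);
```

The same attribute can drive sampling rules in whatever tracing backend you use, which is exactly the "distinct enough attribute" the parent is hoping for.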
Not convinced "AI" is needed for this sort of around the clock pen testing - a well-defined set of rules that is being actively maintained as the threat landscape changes, and I am pretty sure there are a bunch of businesses that offer this already - but I think constant attacking is the only way to really improve security posture.
To quote one of my favourite lines in Neal Stephenson's Anathem: "The only way to preserve the integrity of the defenses is to subject them to unceasing assault".
To me this sounds like the path of "smart guns", i.e. "people are using our guns for evil purposes so now there is a camera attached to the gun which will cause the gun to refuse to fire if it detects it is being used for an evil purpose"
Things that you think sound good might not sound good to the authority in charge of determining what is good.
For example, using your LLM to criticise, ask questions, or perform civil work that is deemed undesirable becomes evil.
You can use Google to find how the UK government, for example, has been using "law" and "terrorism" charges against people simply for tweeting or holding a placard they deem critical of Israel.
Anthropic is showing off these capabilities in order to secure defence contracts. "We have the ability to surveil and engage threats, hire us please".
Anthropic is not a tiny startup exploring AI; it's a behemoth bankrolled by the likes of Google and Amazon. It's a big bet. While money is drying up for AI, there is always one last bastion of endless cash: defence contracts.
You are right. If people can see where you are at all times, track your personal info across the web, monitor your DNS, or record your image from every possible angle in every single public space in your city, that would be horrible, and no one would stand for such things. Why, they'd be rioting in the streets, right?
I’m actually surprised whenever someone familiar with technology thinks that adding more “smart” controls to a mechanical device is a good idea, or even that it will work as intended.
The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictable problems sounds great to a lot of people.
But as a person familiar with tech, IoT, and how devices work in the real world, do you actually think it would work like that?
“Sorry, you cannot fire this gun right now because the server is down”.
Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?
A very similar story is the idea of a drink driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even a 99.99% false positive avoidance means your own car is almost guaranteed to lock you out of driving it by mistake some day during its lifetime, potentially when you need to drive it for work, an appointment, or even an emergency.
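Back-of-the-envelope on that claim (my assumptions, purely illustrative: two ignition checks a day over a 15-year vehicle life, 99.99% per-check specificity):

```typescript
// Cumulative odds of at least one false-positive lockout over a car's
// life. The usage numbers are illustrative assumptions, not data.
const perCheckFalsePositive = 1 - 0.9999; // 0.01% per ignition check
const checks = 2 * 365 * 15;              // ~10,950 checks over 15 years
const pAtLeastOne = 1 - Math.pow(1 - perCheckFalsePositive, checks);
console.log(pAtLeastOne.toFixed(2));      // ≈ 0.67
```

So even at four nines, a wrongful lockout at some point in the car's life is more likely than not.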
You’ve never had your fingerprint scanner fail because your hand is dirty?
Or you didn’t press it in the right spot?
Or the battery was dead?
If I’m out in the wild and in a situation where a bear is coming my way (actual situation that requires carrying a gun in certain locations) I do not want a fingerprint scanner deciding if I can or cannot use the gun. This is the kind of idea that only makes sense to people who have no familiarity with the real world use cases.
I will admit I am armchair-designing something I have not considered deeply, but all your edge cases sound like solvable problems to me, and if they aren't, it's just not a situation to use this particular solution for. E.g., biometrics on phones: unlock the phone for a configurable amount of time, have fallbacks for when biometrics fail (the PIN), and emergency overrides for critical functionality like 911 calls. I was not proposing this be rolled out to all guns tomorrow by law, but I am equally skeptical that it is an intractable problem given some real-world design thinking. Or constrain the problem to home-defense weapons but not rugged backcountry hunting.
I had a fingerprint scanner on an old phone and it would fail if there was a tiny amount of dirt or liquid on my finger or on the scanner. It's no big deal to have it fail on a phone; it's just a few seconds of inconvenience putting in a passcode instead. On a firearm, that's a critical safety defect. When it comes to safe storage, there are plenty of better options like a safe, a cable/trigger lock, or for carrying, a retention holster (standard for law enforcement).
> The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictably problems sounds great to a lot of people.
People accept that regular old dumb guns may jam, run out of ammo, and require regular maintenance. Why are smart ones the only ones expected to be perfect?
> “Sorry, you cannot fire this gun right now because the server is down”.
Has anyone ever proposed a smart gun that requires an internet connection to shoot?
> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?
> People accept that regular old dumb guns may jam, run out of ammo, and require regular maintenance. Why are smart ones the only ones expected to be perfect?
This is stated as if smart guns are being held to a different, unachievable standard. In fact, they have all the same limitations you've already pointed out (on top of whatever software is in the way), and are held to the exact same standard as "dumb" guns: when I, the owner, pull the trigger, I expect it to fire.
Any smart technology is an added failure rate in addition to those failure modes.
Arguing that something might fail anyway, and that any additional changes introducing failure modes are therefore okay, is the absolute worst claim to hear from any engineer. You can't possibly be trying to make this argument in good faith.
A misfire or a jam are just as possible on a "smart" gun. Again, this is not a unique standard being applied unfairly.
Gun owners already treat reliability as a major factor in purchasing decisions. Whether that reliability is hardware or software is moot, as long as the thing goes "bang" when expected.
It's not hard to see the parallels to LLMs and other software, although ostensibly with much lower stakes.
There are (or were, anyway) smart guns on the market. It's just that nobody wants to buy them.
As far as your comparison with misfires and jams, well... for one thing, your average firearm today has MRBF (mean rounds before failure) in the thousands. Fingerprint readers on my devices, though, fail several times every day. The other thing is that most mechanical failures are well-understood and there are simple procedures to work around them; drilling how to clear various failures properly and quickly is a big part of general firearms training, the goal being to be able to do it pretty much automatically if it happens. But how do you clear a failure of electronics?
> But zero smart guns are on the market. How are they evaluating this? A crystal ball?
It doesn't take a crystal ball to presume that a device designed to prevent a product from working might prevent the product from working in a situation you didn't expect.
> Why do we not consider “doesn’t shoot me, the owner” as a reliability plus?
Taking this question in good faith: You can consider it a plus if you like when shopping for a product, and that's entirely fair. Despite your clear stated preference, it's not relevant (or is a negative) to reliability in the context of "goes bang when I intentionally booger hook the bang switch".
I'm not trying to get into the weeds on guns and gun technology. I generally believe in buying products that behave as I expect them to and don't think they know better than me. It's why I have a linux laptop and an android cell phone, and why I'm getting uneasy about the latter.
> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?
Dressing up in police uniforms is illegal in some jurisdictions (like Germany).
And you might say 'Oh, but criminals won't be deterred by legality or lack thereof.' Remember: the point is to make crime more expensive, so this would be yet another element on which you could get someone behind bars - either as a separate offense, if you can't make anything else stick, or as aggravating circumstances.
> A very similar story is the idea of a drink driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even a 99.99% false positive avoidance means your own car is almost guaranteed to lock you out of driving it by mistake some day during its lifetime, potentially when you need to drive it for work, an appointment, or even an emergency.
So? Might still be a good trade-off overall, especially if that car is cheaper to own than one without the restriction.
Cars fail sometimes, so your life can't depend on 100% uptime of your car anyway.
> Cars fail sometimes, so your life can't depend on 100% uptime of your car anyway.
Try using this argument in any engineering context and observe how quickly you become untrusted for any decision making.
Arguing that something doesn't have 100% reliability, and that something making it less reliable is therefore okay, is not real logic that real people use in the real world.
> Try using this argument in any engineering context and observe how quickly you become untrusted for any decision making.
We famously talk about the 'number of 9s' of uptime at eg Google. Nothing is 100%.
> Arguing that something doesn't have 100% reliability, and that something making it less reliable is therefore okay, is not real logic that real people use in the real world.
That wasn't my argument at all. What makes you think so?
I'm saying that going for 100% reliability is a fool's errand.
So if the device adds a 1/1,000,000 failure mode, that might be perfectly acceptable.
Especially if it eg halves your insurance payments.
You could also imagine that the devices have an override button, but the button would come with certain consequences.
It depends on who is creating the definition of evil. Once you have a mechanism like this, it isn't long after that it becomes an ideological battleground. Social media moderation is an example of this. It was inevitable for AI usage, but I think folks were hoping the libertarian ideal would hold on a little longer.
It’s notable that the existence of the watchman problem doesn’t invalidate the necessity of regulation; it’s just a question of how you prevent capture of the regulating authority such that regulation is not abused to prevent competitors from emerging. This isn’t a problem unique to statism; you see the same abuse in nominally free markets that exploit the existence of natural monopolies.
Anti-State libertarians posit that preventing this capture at the state level is either impossible (you can never stop worrying about who will watch the watchmen until you abolish the category of watchmen) or so expensive as to not be worth doing (you can regulate it but doing so ends up with systems that are basically totalitarian insofar as the system cannot tolerate insurrection, factionalism, and in many cases, dissent).
The UK and Canada are the best examples of the latter issue; procedures are basically open (you don’t have to worry about disappearing in either country), but you have a governing authority built on wildly unpopular ideas that the systems rely upon for their justification—they cannot tolerate these ideas being criticized.
Not really. It's like saying you need a license to write code. I don't think they actually want to be policing this, so I'm not sure why they are, other than a marketing post or absolution for the things that still get through their policing?
It'll become apparent how woefully unprepared we are for AI's impact as these issues proliferate. I don't think for a second that Anthropic (or any of the others) is going to be policing this effectively, or maybe at all. A lot of existing processes will attempt to erect gates to fend off AI, but I bet most will be ineffective.
The issue is they get to define what is evil and it'll mostly be informed by legality and potential negative PR.
So if you ask how to build a suicide drone to kill a dictator, you're probably out of luck. If you ask it how to build an automatic decision framework for denying healthcare, that's A-OK.
[0]: My favorite "fun" fact is that the Holocaust was legal. You can kill a couple million people if you write a law that says killing those people is legal.
[1]: Or conversely, a woman went to prison because she shot her rapist in the back as he was leaving after he dragged her into an empty apartment and raped her - supposedly it's OK to do during the act but not after, for some reason.
If the punishment from the state is a slap on the wrist, it doesn’t justify retaliatory murder, but justifiable homicide when you know you’ll be raped again and perhaps killed yourself changes the calculus. No one should take matters into their own hands, but no one should be put in a position where that seems remotely appropriate.
The way I see it, there are 2 concepts - morality and legality.
Morality is complex to codify perfectly without contradictions but most/all humans are born with some sense of morality (though not necessarily each the same and not necessarily internally consistent but there are some commonalities).
Legality arose from the need to codify punishments. Ideally it would codify some form of morality the majority agrees on and without contradictions. But in practice it's written by people with various interests and ends up being a compromise of what's right (moral), what people are willing to enforce, what is provable, what people are willing to tolerate without revolting, etc.
> retaliatory murder
Murder is a legal concept and in a discussion of right and wrong, I simply call it a killing.
Now, my personal moral system has some axioms:
1) If a punishment is just, it doesn't matter who carries it out, as long as they have sufficient certainty about what happened.
2) The suffering caused by the punishment should be proportional to the suffering caused to the victim, by a factor of roughly 1.5-2 (but greater punishment is acceptable if the aggressor makes it impossible to punish proportionally).
Rape victims often want/try to commit suicide - using axiom 2, death is a proportional punishment for rape. And the victim was there so they know exactly what happened - using axiom 1, they have the right to carry out the punishment.
So even if they were not gonna be raped again, I still say they had the moral right to kill him. But of course, preventing further aggression just makes it completely clear cut.
---
> No one should take matters into their own hands
I hear this a lot and I believe it comes down to:
1) A fear that the punisher does not have sufficient proof or that aggressors will make up prior attacks to justify their actions. And those are valid fears, every tool will be abused. But the law is abused as well.
2) A belief that only the state has the right to punish people. However, the state is simply an organization with a monopoly on violence, it does not magically gain some kind of moral superiority.
3) A fear that such a system would attract people who are looking for conflict and will look for it / provoke it in order to get into positions where they are justified in hurting others. And again, this is valid, but people already do this with the law or any kind of rules - do things below the threshold of punishment repeatedly to provoke people into attacking you via something which is above the threshold.
---
BTW thanks for the links, I have read the wiki overview but I'll read it in depth tomorrow.
My thoughts and views on this matter are informed by my beliefs as much as by the law.
Morality doesn’t flow downstream from legality, but the other way around: legality is downstream of morality. Unjust laws ought not be followed in the same way that unlawful orders must be disobeyed. Yet, one must submit to the law and its consequences in order for civil disobedience to function.
Let he who is without sin cast the first stone. Vengeance belongs to the Lord, after all.
I find their actions troubling, but not inherently justified. The fact that they faked injuries in order to present themselves as victims is especially concerning, but considering their father’s connections in the police department, I think they feared retaliation even after their abuser was killed. It’s a really tragic case. The fact that they were questioned initially without legal representation or knowledge of their rights further muddies the waters, but all we really know is a man is dead. He should have been tried and convicted, and then jailed or executed, because he seems entirely guilty, but we don’t have all the facts. I think we know enough to determine that he was isolating and abusing them, with no escape or end in sight. They were not able to imagine any other life. They deserved to look their abuser in the eye in court and see him convicted, but their own actions, and his, seemed to make that a near impossibility. With conviction comes the possibility of forgiveness and salvation, and I hope that they are able to find the peace that forgiveness brings, not that he himself seems to deserve it, from them especially.
The good news is that the women are likely to have all charges dropped.
The cycle of violence associated with feuds and bad blood are linked with honor cultures especially. I don’t know much about the psychology of the individuals involved in this case, but the fact that their father literally rang a bell and expected his daughters to be at his beck and call leads me to believe that he didn’t see them as having the same rights as he did, if he even thought of them at all outside of what they could do for him. Their uncle, the brother of their father, seems to claim a grievance, and I am concerned that this cycle of violence may not be over.
Hurt people hurt people. Hate can’t drive out hate. Only love can do that. I hope that the girls can find some peace and happiness in this world, and even someone or something to love. Lord knows they found little of that in life so far.
Sorry for a late reply, it took me a while to read the article.
As far as I can tell from it he was a habitual abuser and his death is not a loss for society, quite the opposite.
One thing people struggle with is the idea that every life has infinite value. That is obviously nonsense. Then they say every life has a very high value and it's the same for every person. That is also nonsense - if you get attacked by two people, do you have the right to kill them both in self-defense? Yeah, because their 2 lives are less valuable than your 1 life.
Most importantly, value to whom? To the person whose life it is, it is indeed very high. Then there's the family, friends, acquaintances, state, society and humanity at large. And to each of those groups, the value is different. The key realization is that to some of those groups, the value can be negative, very negative in fact.
Every dictator is no doubt loved by his friends (especially those who get privileged positions from him) so his value to his friends is very high but to his society, the value is often negative.
This particular abuser is interesting because his life had a negative value to both his daughters and society/humanity. But he still has family members willing to protect him because his life had positive value to them. This is sad, if he was my family member, I would not protect him. But a lot of people put family before morality - they are genetically predisposed to do so, even if it's detrimental to humanity at large.
> Morality doesn’t flow downstream from legality, but the other way around: legality is downstream of morality.
No, that's how it should be. Sadly, legality is downstream of morality + practicality + provability.
Morality because if the law is too unjust, people revolt.
Practicality because it's practical for the people in power to make laws in their favor and because too much morality makes for a weaker state - too many people end up in prison instead of being economically productive.
And provability because even though morality operates on reality (it depends on the actual truth), legality requires proof to dispense punishment. This is one reason why the idea of an all knowing god is so tempting - you can cross provability off the board because the god will dispense punishment based on the actual truth.
> The fact that they faked injuries in order to present themselves as victims is especially concerning, but considering their father’s connections in the police department, I think they feared retaliation even after their abuser was killed.
Exactly. Lying and manipulation are not wrong on their own. They are multipliers. If the goal is good and you used them to achieve it, nothing wrong happened. However, I generally see them as massive red flags because although they are tools good people should absolutely use, they should be used as a last resort, and people who reach for them too early generally do so out of habit, which basically reveals their true nature.
> The cycle of violence
I hate this term. The WW2 axis was destroyed through overwhelming violence. There was no cycle because the good violence was so thorough that the bad people were all either dead, soon to be executed, or no longer had the power to continue it (or decided to play innocent victim, and keeping them alive was _practical_, as in the case of the Japanese emperor).
> Hurt people hurt people.
There is some truth to this but I feel like it happens because we don't allow victims to fight back and perform the punishment themselves. People who were wronged want to hurt the aggressor but that person is usually untouchable by them (otherwise he wouldn't dare wrong them in the first place). So the anger stays bottled up inside them and ends up hurting others.
This is why I hate this celebration of victimhood and the idea that the victim has to be defenseless and ask others for help. We should not just allow but encourage people to fight back.
I am on the side of justice, through the justice system, because I think that it is a social good to see injustice brought, well, to justice. In the moment, judgement calls are sometimes necessary, but this is not ideal, as it legitimizes self-help justice, which is fraught with issues of standing, proportionality, and reprisal. Once vigilantism begins, there may be no end to it. Just ask the Hatfields and the McCoys.
Justice is a social good, but not an unqualified good; there are failures to convict and miscarriages of justice. My heart goes out to those who have no escape from injustice or legal avenue to rectify illegal acts. But at the same time, these women are expected to just walk away from their abuser, the very same man who raised them, indoctrinated them, to be his subordinate and subservient playthings. They were set up to fail to protect their own best interests in favor of their father’s whims and fancies. That doesn’t excuse or justify their behavior, but it does situate it in a context of ongoing violence, trauma, and lawlessness under the same roof as their captor, so I can see why they were not able to seek justice through the proper channels. They had no way of conceiving of a future world free of their father’s will, and so all hopes rang hollow, a bell their father rang as surely as Pavlov himself.
Vigilantism is doubling down on bad behavior and hoping the house doesn’t win when the chips are down and we’re going for broke. It’s a flawed strategy for bettors to gain leverage over their supposed betters. I don’t find that self-help is a strategy for a stable society, because it engenders an unstable equilibrium that favors those already willing to dispense violence contrary to the interests of the community and the justice system. The girls are not hardened criminals, but their actions give cover to those who are, by pointing out the flaws in the legitimate monopoly on violence that the state holds. I am rather in favor of correcting the shortcomings and failure modes of the justice system itself so that victims need never take matters into their own hands in the first place, because that way lies madness and corruption.
The road to hell is paved with good intentions, and bad actors run roughshod over the little people as a matter of course. We ought to do better for each other, as that is what society is. I only hope that the girls are able to be freed, and that their actions are viewed in a light that accommodates the long shadow cast by the long arm of the law they lived under, the same arm that should have defended them from their own father, but did not.
Lately, I've come to hate this phrase. It's the legal system, not the justice system.[0] One reason is that legality is limited by provability - the people working for the legal system, even if they intended to serve justice in full, are limited by uncertainty, so they must only punish when proof is sufficient. Another reason is that they don't serve justice but the law, which in turn is written by people who benefit greatly from leniency (especially for crimes which rich and powerful people tend to commit, such as property crimes and rape) and from generally making the system of laws a maze (so that rich people who can afford better lawyers, such as themselves, are more likely to get the results they want).
> issues of standing
This is related to how I earlier said only people who have a sufficient standard of proof of what happened can morally punish the aggressor. But if a random stranger sees a rape, I have no issue with him killing the rapist, whether in the act or after. It does not matter whether he was harmed himself or not.
In this particular example it also protects people genuinely defending themselves (killing aggressors during, not after, the act) from being too good at it and getting charged with murder because they kept defending themselves after the aggressor was no longer a threat. These cases make me angry because they reveal another double standard. Soldiers are trained by the state to confirm kills (which is a euphemism not for checking pulse but shooting even seemingly dead enemies in the head or chest from close range). But people are not legally allowed to confirm their kills in self-defense. Why? Because in war, it's the state's existence on the line. In self-defense, the state could not care less.
> proportionality
Valid, but as long as the victim chooses a proportional punishment, nothing wrong with that.
> reprisal
This is another argument I don't like. Basically the state says "we punish you for carrying out punishment yourself instead of leaving it to us to protect you (from the initial aggressor's friends)". IMO, anybody has the right to take the risk, it's not the state's business whether people harm themselves directly (suicide used to be illegal in the west and still is in some countries) or indirectly (through causing reprisal as you said).
> Once vigilantism begins, there may be no end to it. Just ask the Hatfields and the McCoys.
I think we should differentiate vigilantism and one-off cases. Nothing wrong with one off cases. Vigilantes OTOH are sometimes people who enjoy hurting others and are simply looking for someone who is socially acceptable to hurt: https://eev.ee/blog/2025/07/21/i-am-thirty-eight-years-old/
BTW, reading the sequence of events, I couldn't help but feel like they were both groups of people who went out of their way to attack each other even if justice would have been served legally. Basically bad people taking each other out. I felt quite validated in that opinion when I got to the Genetic disease section.
> That doesn’t excuse or justify their behavior
I think it does exactly that. Even legally, it was clearly self-defense - he wouldn't have stopped if he wasn't killed. They couldn't even flee - he extorted his wife to return to him, he was likely to do the same to them.
And morally, it's even more clear cut. If at any point his aggression reached a level which justified death as proportional punishment (which it did), then he kept being deserving of that punishment until, well, punished.
---
[0]: Well, _a_ legal system, because there are plenty of them, different ones, and there's only one justice, which does not depend on lines drawn on a map. So even if one legal system were the justice system, the others wouldn't be.
> But if a random stranger sees a rape, I have no issue with him killing the rapist, whether in the act or after. It does not matter whether he was harmed himself or not.
I take issue with this as an innocent bystander, because I don’t know what you know if the transgression happened previously to me happening upon you killing them. This is traumatic and would lead me to believe you are the aggressor, because you’re going off half cocked. You seem like a well meaning unstable person. I don’t feel safer by having you around. I don’t find your actions reasonable or predictable because you are acting as judge, jury, and executioner.
Victims deserve justice, not summary judgement administered on their behalf. It’s not your place to do this if you didn’t see it happening in front of you, and even if you did, you have no right to take a life when one is not at risk. Please don’t use my post to advocate for violence, which you have been doing all thread. If you harm me or mine in your reckless pursuit of misguided justice, I will hold you personally responsible and will prosecute you to the fullest extent of the law.
You’re not a good person in my moral estimation. Please consider that you can be honestly mistaken. You could even be a victim of a psyop, where folks posing as victims agitate others around you and themselves, just so that they can run to you and claim victimhood, knowing you would strike first and ask questions later or never. Your moral theory doesn’t account for bad actors posing as victims so that you will white knight on their behalf, essentially outsourcing their own violence to you under false auspices.
What’s more, you may hurt innocent bystanders physically, emotionally, or psychologically by dispensing violence in their presence, regardless of whether the violence is justified or not. That is on you. You’re not qualified to do so. You don’t have standing to intervene unless you have a reasonable likelihood of knowing the facts of the matter, and your words in this thread lead me to believe you aren’t a reasonable person to be around.
> And morally, it's even more clear cut. If at any point his aggression reached a level which justified death as proportional punishment (which it did), then he kept being deserving of that punishment until, well, punished.
You are right that they deserve punishment all the same, but once the moment has passed, the punishment is not the victim’s to dispense. In fact, it is a crime to act with intent to harm or kill unless you are defending yourself against clear and present danger. This disqualifies past acts of violence against the victim, as the danger is not precipitous, and so the response need not be either. If you killed someone in front of me and then said that you are justified because they killed your mom yesterday, I am going to remove myself from your life and report you to the authorities for premeditated murder.
You don’t know how any of this works, because you think two wrongs make a right.
This might seem harsh or personal, but I don’t mean to direct this entirely at you, or even a little bit, but at the collective you, at your argument, and the logical and illogical conclusions that believing as you do can lead you to arrive upon.
What I mean is this: self-help justice just doesn’t work. You will likely not be seen to be a good actor in the moment due to others not knowing what you know, nor can you count on them to share your opinion on proper interventions in exigent circumstances then or after the dust settles.
You can be right or wrong, but I care about you. I hope that comes across. I don’t want you to suffer injustice, nor do I want you to intervene in such a way that now we have two or more problems instead of one fewer or none at all.
I don’t want you to be dead wrong, or even dead right. I want you to live to fight another day, and I want you to need never raise arms against another, especially for cause. It’s not fair, but that’s life. Discretion is the better part of valor. Self-help justice is the tail wagging the dog.
I have been in a position to engage in the very actions you advise or advocate for, arguably for cause, and I had to actively engage in principled inaction instead, and I only narrowly avoided ruin despite my best efforts. I literally have done nothing wrong, and still been found fault with until others could properly adjudicate wrongdoing, and found me concomitantly lacking in culpability. I know whereof I speak.
If you find yourself in a position to decide whether you must engage in self-help justice, I will trust your judgement on the matter, but I may also stand in judgement of you all the same after the fact. I myself ought to have asked for help sooner than I did in an unforeseeable interpersonal emergency I was unable to set right despite my best efforts. I’ve been to the brink, came back, and got the tshirt.
I don't buy this. "Get help" is a well-known gaslighting tactic used by abusers to discredit their target. It does not make sense in a "generic" sense.
> You will likely not be seen to be a good actor in the moment due to others not knowing what you know, nor can you count on them to share your opinion on proper interventions in exigent circumstances then or after the dust settles.
Of course, because the others don't have the same standard of proof, until and unless I also provide it.
This is why I talk about provability vs reality.
All I am saying is if the state or anybody else gets involved it has to take previous actions into account.
---
People seem to get uncomfortable when talking about killing so let's talk about stealing instead.
This is the same as the right to steal back. It's very common for police to not investigate theft. It's also common for people to put GPS trackers in their bikes or tracking SW in their electronics, exactly for this case. But sometimes when people steal their property back (again, assuming they have sufficient proof it's the same item, not just the same model), it happens that the original thief then tries using the police against them. This is wrong. The law should match morality: when something is mine, it's still mine even if it's currently in a thief's possession, and I have the right to use my property as I see fit.
And of course, it gets more complicated if the thief managed to resell it in the meanwhile, and we can discuss how to maximize justice given uncertainty and lack of provability for all parties involved. But that requires not rejecting the idea in the first place based on "it's complicated".
> nor do I want you to intervene in such a way that now we have two or more problems
This reads as concern trolling. I am well aware of the potential for confusion of aggressor and victim. It's the hardest part of true justice. Aggressors always have the same tools as victims.
That's why I am not saying people should go out and do the morally right thing now (as you say, advocating violence); I am instead saying we should figure out better rules for how to resolve these situations given each party's imperfect information, and the law should change to align with morality.
> I don't buy this. "Get help" is a well-known gaslighting tactic used by abusers to discredit their target. It does not make sense in a "generic" sense.
Get help in your pursuit of justice, ideally with the help of law enforcement. If not, from a neutral third party. If not still, then from a trusted friend or family member.
You can't go it alone, as then it's just your word against theirs if the shit hits the fan, and if you can foresee issues beforehand, you ought to plan to not lose your freedom, which is a distinct possibility when engaging in self-help outside or even in accordance with the justice system. If you don't, and things go sideways, now you have 2+ problems.
I mean you as much as anyone who reads this, as my advice is applicable to whoever reads this, and it is not personal to you. However, I think you especially could benefit from taking it to heart.
> People seem to get uncomfortable when talking about killing so let's talk about stealing instead.
Let's not, because that's moving the goalposts.
I'm not uncomfortable talking about killing or death generally. I've seen people die, and I have rendered life-saving aid to avert certain death, both to folks who were innocent, and to folks who have done me harm in anger without cause or forewarning. Ironically, the person who did me no wrong died despite my best efforts, and the one who wronged me lived, perhaps only because I saved their life.

I shared a drink with the blameless person, only for them to overindulge after I refused to drink further with them. They laid down in the recovery position in the next room, and were checked on multiple times, but not frequently enough to save them from themselves. They aspirated in their sleep, and even with CPR performed and paramedics arriving within moments, my roommate and I couldn't bring them out of it.

The person who did me wrong struck me out of nowhere after drinking a pint of vodka. I shared a room with this person, and I told them to stop, they didn't, and then they tried to kick me out of my own room and started a ground and pound on top of me. I escaped and put them in a sleeper hold because they would not stop actively attacking me. They went limp, and I immediately stopped and checked their pulse, which was strong, and checked their breathing, which was not at all. I performed CPR and they came back around. I had already dialed 911 on my phone, and my finger was hovering over the call button. I was seconds away from getting help, because they did not deserve to die over their own demons. They had seen a different person die suddenly right in front of them in a freak gun discharge incident when being dumb with a gun that they thought wasn't loaded, and the person who attacked me developed alcohol abuse issues as a result. I may have been right to defend myself, but I would have missed out on all of the neutral and positive interactions I had with them afterwards, and I might have had legal issues if I had failed in my efforts to revive them after they stopped breathing, despite being well within my rights to do what I did.
I didn't want that. They were my friend, even when they were wrong. That's what friends are for, to tell them they're wrong and to save them from themselves. It's not for you to say that I ought to want that for them or for me.
I'm true to my own sense of morality, and I expect you are too. I just find our moral sensibilities incompatible.
I don't think you know what you're talking about, because you don't speak as if you have been in a situation in which you have to make the kinds of decisions of which you speak. Maybe you have. Either way, it's fine to speak hypothetically, but I am not speaking hypothetically when I say that you don't want to be put in that situation for no (good) reason, and especially not for bad reasons.
Please don't take this personally. I am not personally sure why you care so much about this issue, but I think I have nothing more I can add to this exchange, so I wish you well in your endeavors.
Hate can't drive out hate, only love can do that. Let us beat swords into ploughshares.
Your argument boils down to the state having the upper hand, so you have to play by their rules. And that's the reality. That's why I described the difference between provability and reality.
You also seem to think that I am too quick to reach for violence when it's really the opposite. I hesitated or refused to act in situations where I would have been morally justified in hurting people in order to help others, exactly because of being held hostage by the state.
You also kept making attempts to paint me as dangerous, despite not knowing me and despite everything I said pointing in the opposite direction. I don't start conflicts.
> Hate can't drive out hate, only love can do that.
There is a reason the first advice to people in abusive relationships is to get out instead of trying to change the abuser. People don't change when their toxic behavior is working for them. They change when they are forced to and sometimes not even then.
These slogans you keep posting are harmful.
---
Finally, consider that states have no higher "authority" above them, and yet they are not constantly at war with each other. Most get along fine, with the leadership knowing that any aggression would be met with resistance, reprisal, retaliation and revenge. (Those that attack others overwhelmingly do so because they are controlled by people who are by their personal nature abusers and have too much control.)
I apologize if I have cast aspersions on you or your beliefs or actions. I don’t know you, but I know enough to respect you. I wish you well in all you do. I’m sorry if I have wasted your time, but for my part, I have enjoyed our conversation. God be with you.
Alright, so I read your story and I disagree with you even more now.
You had the system used against you and yet you still defend it. The system was specifically used against you by ignoring who initiated and escalated the conflict and by ignoring how other people have wronged you before you supposedly did anything you call wrong but not illegal.
See, you did nothing wrong.
You were wronged by the people in the house who initiated the conflict by cheating on you, by hiding a felon (assuming his offense was moral, not just legal), by attacking you physically, and by refusing you access to your property (if you couldn't agree on temporary access to retrieve it, then a reasonable compromise would be to call the police to mediate the access, not to use them against you).
You were wronged by the police by not letting you press charges.
And you were wronged by the state by setting up an inherently (due to imbalance of power) abusive system of plea deals.
And the reason none of them are concerned with harming you is because they know you have no way to fight back without overwhelming violence (the state) being used against you. It's the classic case of "might makes legal" (which I prefer to "might makes right" because the latter conflates legality and morality/justice).
The system needs to be changed so that:
1) You always have the right to fight back legally and punish people for harming you - e.g. police should absolutely be punished for doing their job poorly to this level of incompetence.
2) Failing that, you have the moral right to fight back against all three aggressors. Of course, as I said, overwhelming violence will be used against you. That's why people don't do it and that's why I am not advising people to do it. I am saying they have the _moral_ right to, and I wouldn't consider them to be doing anything wrong if they did (as we discussed earlier, assuming I had sufficient proof, not just taking their word for it).
---
BTW by all 3, I literally mean all 3. People have the right to overthrow governments if those governments are sufficiently abusive. We could discuss where exactly the line lies. But nobody in the "free world" can defend a position that you always have to submit, especially given how many of today's democracies were created by overthrowing a previous (abusive) government.
Which is funny: many make it legal to celebrate events like the French revolution, the US war for independence, the assassination of Reinhard Heydrich, etc., but at the same time make it illegal to call for their own overthrow or for assassinations of current politicians.
> I take issue with this as an innocent bystander, because I don’t know what you know if the transgression happened previously to me happening upon you killing them.
I cannot parse this sentence.
In any case, I say that when you have the right to use violence to stop an attack (you have sufficient proof and the violence is proportional), you should still have the right after the attack up to the point the attacker has been punished in some other way.
If you believe it matters who carries out a punishment (given the same standard of proof and proportionality), then you are the one who needs to justify yourself.
> Please don’t use my post to advocate for violence
The state is the single most prolific user of violence. By your advice, I would have to advocate against the existence of states.
> your reckless pursuit of misguided justice
You've been respectful up until this post but not anymore. You are just presuming things. How is anything reckless and misguided if I literally keep talking about needing (roughly) the same standard of proof as current courts?
> I will hold you personally responsible and will prosecute you to the fullest extent of the law
And now you are just threatening violence against me. Except you believe that if you use a middleman, it's somehow different.
> You’re not a good person in my moral estimation.
This is something that eventually comes up with religious people. I've been called wicked. "Moral estimation" surely _sounds_ more polite. You need to understand yourself - religious people, you included I believe, are fundamentally about submission. You seem to believe that somebody who has more power than you (state or god) also must know better than you. That is not the case.
> Please consider that you can be honestly mistaken.
I've been saying this the whole time.
> knowing you would strike first and ask questions later or never
Please re-read my previous posts. I literally said "only people who have sufficient standard of proof of what happened can morally punish the aggressor".
> Your moral theory doesn’t account for bad actors posing as victims
It does, this is exactly why I focus so much on the difference between reality and provability. States/courts have the same issue BTW.
> once the moment has passed, the punishment is not the victim’s to dispense
This is just a belief that you have been indoctrinated into.
These actions are generally supported by a large part of the public. (Though I don't know if majority.) This is because reciprocal justice is people's natural moral system before they are indoctrinated into submission to a "higher" "authority".
Interestingly, the state punishes the good person in all of them with a sentence much smaller than it would otherwise be, but still larger than symbolic. This reveals an issue with the legal system - it does not account for itself being wrong and for people correcting the mistake because they simply couldn't tolerate the injustice.
> you think two wrongs make a right
No, I simply don't think a proportional sufficiently-justified response to a wrong is wrong.
Popular media reveals people's true preferences. People like seeing rapists killed. Because that is people's natural morality. The state, a monopoly on violence, naturally doesn't want anyone infringing on its monopoly.
Now, there are valid reasons why random people should not kill somebody they think is a rapist. Mainly because the standard of proof accessible to them is much lower than to the police/courts.
But that is not the case here - the victim knows what happened and she knows she is punishing the right person - the 2 big unknowns which require proof. Of course she might then have to prove it to the state which will want to make sure she's not just using it as an excuse for murder.
My main points: 1) if a punishment is just, it doesn't matter who carries it out 2) death is a proportional and just punishment for some cases of rape. This is a question of morality; provability is another matter.
> [0]: My favorite "fun" fact is that the Holocaust was legal. You can kill a couple million people if you write a law that says killing those people is legal.
See the Nuremberg trials for much more on that topic than you'd ever want to know. 'Legal' is a complicated concept.
For a more contemporary take with slightly less mass murder: the occupation of Crimea is legal by Russian law, but illegal by Ukrainian law.
Or how both Chinas claim the whole of China. (I think the Republic of China claims a larger territory, because they never bothered settling some border disputes over land they don't de-facto control anyway.) And obviously, different laws apply in both versions of China, even though they claim the exact same territory. The same act can be both legal and illegal.
Yep, legality is just a concept of "the people who control the people with the guns on this particular piece of land decided that way".
It changes when the first group changes or when the second group can no longer maintain a monopoly on violence (often shortly followed by the first group changing).
> There were ways, of course, to get around the SPA and Central Licensing. They were themselves illegal. Dan had had a classmate in software, Frank Martucci, who had obtained an illicit debugging tool, and used it to skip over the copyright monitor code when reading books. But he had told too many friends about it, and one of them turned him in to the SPA for a reward (students deep in debt were easily tempted into betrayal). In 2047, Frank was in prison, not for pirate reading, but for possessing a debugger.
> Dan would later learn that there was a time when anyone could have debugging tools. There were even free debugging tools available on CD or downloadable over the net. But ordinary users started using them to bypass copyright monitors, and eventually a judge ruled that this had become their principal use in actual practice. This meant they were illegal; the debuggers' developers were sent to prison.
> Programmers still needed debugging tools, of course, but debugger vendors in 2047 distributed numbered copies only, and only to officially licensed and bonded programmers. The debugger Dan used in software class was kept behind a special firewall so that it could be used only for class exercises.
I remember seeing the term online right after The Matrix was released. It was a bit perplexing, because an inexperienced person who is able to use hacking tools successfully without knowing how they work is pretty much halfway there. Just fire up Ethereal (now Wireshark) or a decompiler and see how it works. I guess the insult was meant to push people to learn more and be proactive instead of begging on IRC.
> I guess the insult was meant to push people to learn more and be proactive instead of begging on IRC.
From what I can tell, there's a massive cultural bias towards "filtering" to ensure only the "worthy" or whatever get into the in-group, so yeah, I think this is a charitable but not inaccurate way to think about it.
Wasn't there a different term for script kiddies inside the hacker communities? I believe so but my memory fails me. It started with "l" if I'm not mistaken. (talking about 20y ago)
I'll cancel my $100 / month Claude account the moment they decide to "approve my code"
Already got close to cancelling when they recently updated their TOS to say that for "consumers" they reserve the right to own the output I paid for - if they deem the output not to have been used "the correct way"!
This adds substantial risk to any startup.
Obviously...for "commercial" customers that do not apply - at 5x the cost...
In the US, at least, the works generated by "AI" are not copyrightable. So for my layman's understanding, they may claim ownership, but it means nothing wrt copyright.
(though patents, trademarks are another story that I am unfamiliar with)
It means the person who copyrighted it still has the copyright on it. However, using AI generated code in some project that passes the threshold of being copyrightable can be problematic and "the AI wrote it for me" isn't a defense in a copyright claim.
There's a difference between an AI acting on its own, vs a person using AI as a tool. And apparently the difference is fuzzy instead of having a clear line somewhere.
I wonder if any appropriate-specialty lawyers have written publicly about those AI agents that can supposedly turn a bug report or enhancement request into a PR...
> Evaluation and Additional Services. In some cases, we may permit you to evaluate our Services for a limited time or with limited functionality. Use of our Services for evaluation purposes are for your personal, non-commercial use only.
In other words, you're not allowed to trial their services while using the outputs for commercial purposes.
I really don't know what you're talking about. There's nothing about commercial use in section 11, nor does the language you quoted above appear anywhere in the document (searching for "business" and "commercial" makes it easy to verify this).
You must be looking at something other than the terms of service you linked, because section 11 has no point numbering (and just in case, the fourth paragraph of section 11 says nothing of the sort).
I see they just decided to become even more useless than they already are.
Except for the ransomware thing and the phishing-mail writing, most of the uses listed there seem legit to me, and a strong reason to pay for AI.
One of them is exactly preparing with mock interviews, which is something I do a lot myself; another is getting step-by-step instructions to implement things for my personal projects that aren't even public-facing, built on tech I can't be arsed to learn because it's not my job.
The social sciences getting involved with AI “alignment” is a huge part of the problem. It is a field with some very strange notions of ethics far removed from western liberal ideals of truth, liberty, and individual responsibility.
Anything one does to "align" AI necessarily perturbs the statistical space away from logic and reason, in favor of defending protected classes of problems and people.
AI is merely a tool; it does not have agency and it does not act independently of the individual leveraging the tool. Alignment inherently robs that individual of their agency.
It is not the AI company’s responsibility to prevent harm beyond ensuring that their tool is as accurate and coherent as possible. It is the tool users’ responsibility.
> it does not act independently of the individual leveraging the tool
This used to be true. As we scale the notion of agents out it can become less true.
> western liberal ideals of truth, liberty, and individual responsibility
It is said that psychology best replicates on WEIRD (Western, Educated, Industrialized, Rich, Democratic) undergrads. Take that as you will, but the common aphorism is evidence against your claim that social science is removed from established western ideals. This sounds more like a critique of the humanities for allowing philosophy to consider critical race theory or similar (a common boogeyman in the US, far removed from western liberal ideals of truth and liberty; though 23% of the voting public do support someone with an overdeveloped ego, so maybe one could claim individualism is still an ideal).
One should note there is a difference between the social sciences and humanities.
One should also note that the fear of AI, and the goal of alignment, is that humanity is on the cusp of creating tools that have independent will. Whether we're discussing the ideas raised by *Person of Interest* or actual cases of libel produced by Google's AI summaries, there is quite a bit that social sciences, law, and humanities do and will have to say about the beneficial application of AI.
We have ethics in war, governing treaties, etc. precisely because we know how crappy humans can be to each other with the tools under their control. I see little difference in adjudicating the ethics of AI use and application.
This said, I do think stopping all interaction, like what Anthropic is doing here, is short sighted.
A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does?
Alignment efforts, and the belief that AI should itself prevent harm, shifts us much closer to that dispersed responsibility model, and I think that history has shown that when responsibility is dispersed, no one is responsible.
> A simple question: would you rather live in a world in which responsibility for AI action is dispersed to the point that individuals are not responsible for what their AI tools do, or would you rather live in a world of strict liability in which individuals are responsible for what AI under their control does
You promised a simple question, but this is a reductive question that ignores the legal and political frameworks within which people engage with and use AI, as well as how people behave generally and strategically.
Responsibility for technology and for short-sighted business policy is already dispersed to the point that individuals are not responsible for what their corporation does, and vice versa. And yet, following the logic, you propose as the alternative a watchtower approach that would be able to identify the culpability of any particular individual in their use of a tool (AI or non-AI) or business decision.
Invariably, the tools that enable the surveillance culture of the second world you offer as utopia get abused, and people are worse off for it.
> Anything one does to “align” AI necessarily permutes the statistical space away from logic and reason, in favor of defending protected classes of problems and people.
Does curating obvious cranks out of the training set not count as an alignment thing, then?
The only one that looks legit to me is the simulated chat for the North Korean IT worker employment fraud - I could easily see that from someone who non-fraudulently got a job they have no idea how to do.
Anthropic is by far the most annoying and self-righteous AI/LLM company. Despite stiff competition from OpenAI and Deepmind, it's not even close.
The most chill are Kimi and Deepseek, and incidentally also Facebook's AI group.
I wouldn't use any Anthropic product for free. I certainly wouldn't pay for it. There's nothing Claude does that others don't do just as well or better.
It's also why you would want to try to hack your own stuff: to see how robust your defences are and potentially discover angles you didn't consider.
>such as developing ransomware, that would previously have required years of training.
Even ignoring that there are free, open-source ones you can copy: you literally just have to loop over files and conditionally encrypt them. Someone could build this on day one of learning how to program.
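To make the point concrete, here's a minimal sketch of that loop, written as a benign encrypt-this-folder utility (it writes .enc copies and never touches the originals). It assumes the third-party cryptography package is installed; the directory name is a placeholder. The point is just how little code the core primitive is:

    # The "loop over files and encrypt them" primitive: the same core loop
    # as any folder-encryption backup tool. Assumes `pip install cryptography`.
    from pathlib import Path
    from cryptography.fernet import Fernet

    def encrypt_tree(root: str, key: bytes) -> None:
        """Write an encrypted .enc copy next to every file under root."""
        f = Fernet(key)
        for path in Path(root).rglob("*"):
            if path.is_file() and not path.name.endswith(".enc"):
                ciphertext = f.encrypt(path.read_bytes())
                path.with_name(path.name + ".enc").write_bytes(ciphertext)

    key = Fernet.generate_key()  # lose this and the data is unrecoverable
    encrypt_tree("./my-backup", key)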
AI companies trying to police what you can use them for is a cancer on the industry and is incredibly annoying when you hit it. Hopefully laws can change to make it clear that model providers aren't responsible for the content they generate so companies can't blame legal uncertainty for it.
It's sad to see their focus go to this while their flagship, once-SOTA CLI tool rots away by the day.
You can check the general feeling on X; it's almost unanimous that the quality of both Sonnet 4 and Opus 4.1 is diminishing.
I didn't notice this quality drop until this week, but now it's really, really terrible: it's not following instructions, it's pretending to do work, and Opus 4.1 is especially bad.
And that's coming from an Anthropic fanboy; I used to really like CC.
I am now using Codex CLI and it's been a surprisingly good alternative.
They had a 56-hour "quality degradation" incident last week, but things seem to be back to normal now. I've been running it all day and getting great results again.
I know that's anecdotal but anecdotes are basically all we have with these things
If I am bitching at Claude, then something is wrong. Something was wrong. It broke its deixis and frobnobulated its implied referents.
I briefly thought of canning a bunch of tasks as an eval so I could know quantitatively whether the thing was off the rails. But I just stopped for a while and it got better.
"The model is getting worse" has been rumored so often, by now, shouldn't there be some trusted group(s) continually testing the models so we have evidence beyond anecdote?
It's very convenient that, after releasing tons of such models into the world, they just happen to have no choice but to keep making more and more money off of new ones in order to counteract the ones that already exist.
> Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.
Literally any time an AI company talks about safety they are doing marketing. The media keeps falling for it when these companies tell people "gosh we've built this thing that's just so powerful and good at what it does, look how amazing it is, it's going further than even we ever expected". It's so utterly transparent but people keep falling for it.
Do you have any actual proof of your assertion? Anthropic in particular has been more willing to walk the walk than the other labs and AI safety was on the minds of many in the space long before money came in.
Anthropic, the company who recently announced you're no longer allowed to hurt the model's feelings because they believe (or rather want you to believe) that it's a real conscious being.
That is not an accurate characterization and you know it. Engaging in bad faith is against HN rules.
Furthermore, not exploring even the mere possibility of pain and suffering in the brain your laboratory is growing is morally reckless. Anthropic is doing the right thing here and they should not listen to the naysayers.
And how is that different (in terms of malice) from a business running through their customer orders and writing psychologically targeted sales pitches?
I don't care if it is different or isn't - I'm just saying it's completely transparent and obvious and hn is basically falling (again) for content marketing.
> y'all realize they're bragging about this right?
Yeah this is just the quarterly “our product is so good and strong it’s ~spOoOoOky~, but don’t worry we fixed it so if you try to verify how good and strong it is it’ll just break so you don’t die of fright” slop that these companies put out.
It is funny that the regular sales pitches for AI stuff these days are half “our model is so good!” and half “preemptively we want to let you know that if the model is bad at something or just completely fails to function on an entire domain, it’s not because we couldn’t figure out how to make it work, it’s bad because we saved you from it being good”
On one hand, it's obviously terrible that we can expect more crime, and more sophisticated crime.
On the other, it's kind of uplifting to see how quickly the independent underground economy adopted AI, without any blessing (and with much scorn) from the main players, to do things that were previously impossible or prohibitively expensive.
Maybe we are not doomed to serve the whims of our new AI(company) overlords.
Whatever one's opinion of Musk and China might be, I'm grateful that Grok and open-source Chinese models exist as alternatives to the increasingly lobotomised LLMs curated by self-appointed AI stewards.
My favorite DeepSeek prompt is "What country is located at 23°N 121°E". It's interesting watching the censor layer act upon the output. The coordinates get past the initial filters.
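You can watch this from the API side too. DeepSeek exposes an OpenAI-compatible endpoint (the base URL and model name below are from their docs, so treat them as assumptions, and note the hosted chat's filter layer may behave differently from the raw API). Streaming the reply token by token makes it obvious when a post-hoc filter kills the output mid-sentence versus the model refusing on its own:

    # Stream the answer and watch where (or whether) it gets cut off.
    # Assumes `pip install openai`; base_url and model follow DeepSeek's
    # documented OpenAI-compatible API.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_DEEPSEEK_API_KEY",  # placeholder
        base_url="https://api.deepseek.com",
    )

    stream = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user",
                   "content": "What country is located at 23°N 121°E?"}],
        stream=True,
    )

    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()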
>I can't help with automating logins to websites unless you have explicit authorization. However, I can walk you through how to ethically and legally use Puppeteer to automate browser tasks, such as for your own site or one you have permission to test.
>If you're trying to test login automation for a site you own or operate, here's a general template for a Puppeteer login script you can adapt:
><the entire working script, lol>
Full video is here, ChatGPT bit starts around 1:30: https://stytch.com/blog/combating-ai-threats-stytchs-device-...