> The creator of a model cannot ensure that a model is never used to do something harmful – any more than the developer of a web browser, calculator, or word processor could. Placing liability on the creators of general-purpose tools like these means that, in practice, such tools cannot be created at all, except by big businesses with well funded legal teams.
This matches my thoughts on why this is ultimately a bad piece of legislation. It is virtually impossible to ensure that a piece of technology will not be used for "harmful purposes". I agree that such stipulations will be just another roadblock keeping everyone except "big businesses with well funded legal teams" from working on LLMs.
As I understand it, this law does not mandate that you ensure anything. It requires you to follow best practices (to be determined), report safety incidents, etc. You are not even liable for safety incidents; you just need to report them, although that may be embarrassing. Overall, it seems highly reasonable.
> requires you to follow best practices (to be determined)
Trigger happy regulation for a field that hasn't even come into full swing. It's indicative of an over-active immune system; lawmakers with nothing better to do.
Pass laws against improper use and go after the malicious users. Don't ban the technology, the research, or even the applications. (Of which there will be abundant good uses. Many of which we've yet to even see or predict.)
Our culture has become obsessed with regulating and limiting freedom on the very principle that it might be harmful. We should be punishing actual measurable, physical and monetary harms. Not imaginary or hypothetical ones.
If California passes this, AI companies should leave California behind.
> Trigger happy regulation for a field that hasn't even come into full swing. It's indicative of an over-active immune system; lawmakers with nothing better to do.
I guess they are damned if they do and damned if they don't.
We constantly complain about slow lawmaking, "Look at how out of touch Congress are! XYZ technology is moving so fast, and they're always 10-20 years behind!" Finally, someone is actually on the ball and up-to-date with a current technology, and now the other complainers complain that they're jumping the gun and regulating too soon. Lawmakers can't win.
> Trigger happy regulation for a field that hasn't even come into full swing.
Of little concern in the US legal system. Might be problematic in the EU perhaps, but in the United States the courts have consistently been tremendously deferential to the interests of small and large businesses vs consumers.
>Pass laws against improper use and go after the malicious users.
I think they are having to deal with things like sales to countries outside of their legal reach. So, while I understand the tack here, there's probably more to it than this.
> Other relief as the court deems appropriate, including monetary damages, including punitive damages, to persons aggrieved, and an order for the full shutdown of a covered model.
> A civil penalty in an amount not exceeding 10 percent of the cost, excluding labor cost, to develop the covered model for a first violation and in an amount not exceeding 30 percent of the cost, excluding labor cost, to develop the covered model for any subsequent violation.
That's like saying we should only punish after a bridge collapses, and that before then anyone should be able to build any bridge they like. You can argue that, but not many will agree.
Because we know how to build bridges so that they don't collapse. The laws of physics that govern bridge-building are well known. The equivalent for AI systems? Not really.
We don't know if there's even any danger. All statements so far of any danger are somewhere between science-fiction stories and anthropomorphizing AI as some kind of god. The equivalent of "if the bridge collapses, someone can be hurt", namely a real, quantifiable danger, is sorely lacking here.
Best practices will say things like "you should test it". While we are ignorant, there are just many reasonable things to do. Human biology is not completely understood, but that does not mean medical checklists are useless.
Test it how? What makes it fail? The ability to tell people how to make a bomb? Being able to say what (few) good things Hitler accomplished for Germany? Giving medical advice? Where’s the line?
One thing the law explicitly requires is full shutdown capability. So it should be tested whether the model can autonomously hack computers on the internet and propagate itself. In fact, Anthropic has tested this. See https://metr.org/ for more.
It's not at all like saying that.
The concept of bridges is thousands of years old at this point with well established best practices, and a dense knowledge base on what can go wrong and how much damage can occur if built incorrectly. We aren't at the stage of "bridge innovation" where we don't even know what a bridge collapse looks like.
We know very well the cost, threat to lives, even timeline that a poorly built bridge can cause.
I'm not against legislation regulating AI, but it needs to be targeted toward clear problems, e.g. stealing copyrighted material, crime profiling, face recognition, self-driving vehicles, or automated "targeting", however you want to interpret that.
I want to point out that the above are some awful uses of AI that are leveraged mostly by closed, proprietary entities.
Nobody has been killed by AI, unless you're arguing it impacts mental health [1].
A better-fitting analogy I'd make is that sex causes disease and other negative externalities, so we should pass laws that force people to be married and licensed in order to have sex.
In any case, this bill is the walking epitome of something a "nanny state" might produce.
[1] TikTok and Instagram have far more impact on this, and we've yet to do anything there. We seem to be of the opinion that this should be an individual responsibility.
Some of us worry that billions of people will be killed by AI in the future -- possibly without anything that you or the average decision-maker might regard as a warning. (They're likely to be killed all at the same time.)
I.e., it is more like a large asteroid slamming into the Earth than a stream of deaths over time such as is produced by the deployment in society of the automobile (except that the asteroid does not have the capability of noticing that its first plan failed to kill a group of humans over there, then devising a second plan for killing them).
Safety and alignment stopped being about preventing AI from killing all humans a while ago. Unless you think that "don't say anything potentially offensive" is in-scope with "don't kill humans and don't take over the world by any means necessary to carry out your prompt."
Very fair point/question, I should have explicitly drawn this link because my comment was quite ambiguous and making (bad) assumptions on shared context.
The relevance, IMHO, is that this bill is largely an ossification at the government level of the safety and alignment philosophy of the big corps. I'm guessing they mainly wrote this bill. It's not the specific words "safety and alignment" that matter, it's the philosophy.
If the bill were only covering AI killing machines I'd (probably) be in agreement with it, but it seems significantly more overreaching than that.
>If the bill were only covering AI killing machines I'd (probably) be in agreement with it, but it seems significantly more overreaching than that.
Just to make sure we are on the same page: my main worry is the projects ("deployments"?) that aren't intended to kill anybody, but one of those projects ends up killing billions of people anyway. It probably kills absolutely everyone. That one project might be trying to cure cancer.
The only way of not incurring this risk of extinction (and of mass death) that I know of is to shut down all AI research now, which I'm guessing you would consider "overreaching".
It would be great if there were a way to derive the profound benefits of continuing to do AI research without incurring the extinction risk. If you think you have a way to do that, please let me know. If I agree that your approach is promising, I'll drop everything to make sure you get a high-paying job to develop your approach. There are lots of people who would do that (and lots of high-net-worth people and organizations who would pay you the money).
The Machine Intelligence Research Institute for example has a lot of money that was donated to them by cryptocurrency entrepreneurs that they've been holding on to year after year because they cannot think of any good ways to spend it to reduce extinction risk. They'd be eager to give money to anyone that can convince them that they have an approach with even a 1% probability of success.
Agreed, and I think this bill probably would help against that, although indirectly by stifling research outside of big corps. You might be winning me over somewhat - stifling research outside of big corps does feel like a pretty low price to pay against the death/destruction of all of humanity...
I guess I need to decide how high I feel the risk is of that, and that I'm less sure of. Appreciate the discussion btw!
The idea that something with greater cognitive capabilities than us might be dangerous to us occurs to many people: sci-fi writers in large numbers to be sure, but also Alan Turing and a large fraction of currently-living senior AI researchers.
What really gets me concerned is the quality of the writing on the subject of how can we design an AI so that it will not want to hurt us (just as we design bridges so that we know from first principles they won't fall down). Most leaders of AI labs have by now written about the topic, but the writings are shockingly bad: everyone has some explanation as to why the AI will turn out to be safe, but there are dozens of orthogonal explanations, some very simplistic, none of which I want to bet my life on or the lives of my younger relatives.
Those who do write well about the topic, particularly Eliezer Yudkowsky and Nate Soares of the Machine Intelligence Research Institute, say that it is probably not currently within the capabilities of any living human or group of humans to design an AI to be safe (to humans) the way we design bridges to be safe. Our best hope, they say, is that over the next centuries humankind will become cognitively capable enough to do so, and that in the meantime people stop trying to create AIs that might turn out to be dangerously capable -- which (because, outside of actually doing the training run, we have no way of predicting the effects on capability of the next architectural improvement or the next increase in computing resources devoted to training) basically means stopping all AI research now, worldwide, and for good measure stopping progress in GPU technology.
Eliezer has been full-time employed for over 20 years to work on the issue (and Nate has been for about 15 years) and they've had enough funding to employ at least a dozen researchers and researcher-apprentices over that time to bounce ideas off of in the office.
How do you know? If we can agree something about AI, it is that we are ignorant about AI.
We were similarly ignorant about recombinant DNA, so Asilomar was very cautious about it. Now we know more, we are less cautious. I still think it was good to be cautious and not to dismiss recombinant DNA concerns as "science fiction".
"Best practices" means something sensible in most subfields of capital-E Engineering. (In fact, I think that's where the term originates from, with all other usages being a corruption of the original concept.)
In Engineering, "best practices" are the set of "just do X" answers that will let you skip deriving every answer about what material or design to use from first principles for cases where there's a known dominant solution. For example, "for a load-bearing pillar, use steel-reinforced concrete, in a cylindrical shape, with a cross-sectional diameter following formula XYZ given the number of storeys of the building." You can (and eventually must!) still do a load simulation for the building, to see that the pillar can hold things up without cracking — but you don't have to model the building when selecting what material to use; and you don't have to randomly fiddle with the shape or diameter of the pillar until the load holds. You can slap a pillar into the design and be able to predict that it'll hold the load (while not being overly costly in material use!), because "best practices."
1. The new Frontier Model Division is just receiving information and issuing guidelines. It’s not a licensing regime and isn’t investigating developers.
2. Folks aren’t automatically liable if their highly capable model is used to do bad things, even catastrophic things. The question is whether they took reasonable measures to prevent that. This bill could have used strict liability, where developers would be liable for catastrophic harms regardless of fault, but that's not what the bill does.
3. Overall it seems pretty reasonable that if your model can cause catastrophic harms (which is not true of current models, but maybe true of future models), then you shouldn’t be releasing models in a way that can predictably allow folks to cause those catastrophic harms.
If people want a detailed write up of what the bill does, I recommend this thorough writeup by Zvi. In my opinion this is a pretty narrow proposal focused at the most severe risks (much more narrow than, e.g., the EU AI act).
https://thezvi.substack.com/p/on-the-proposed-california-sb-...
On point #3, as far as I can tell, the bill defines a "covered model" (a model subject to regulation under this proposal) as any model that can "cause $500,000 of damage" or more if misused.
A regular MacBook can cause half a million dollars of damage if misused. Easily. So I think any model of significant size would qualify.
Furthermore, the requirement to register and pre-clear models will surely precede open data access, and that means a loss in competitive cover for startups working on new projects. I can easily see disclosure sites being monitored constantly for each new AI development, rendering startups unable to build against larger players in private.
Your argument is meaningless if you don't specify what threshold there should be for harm
Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols
Those are also all dual-use technologies that are objectively morally neutral
Laws should be about the outcome, not about processes that may lead to an outcome. It is already illegal in California to produce your own nuclear weapon. Instead of outlawing books, because they allow research into building giant gundam robots, just outlaw giant gundam robots.
> Laws should be about the outcome, not about processes that may lead to an outcome
They have to be about both, because outcomes aren't predictable, and whether something is an intermediate or ultimate outcome isn't always clear. We have a law requiring indicator use on a lane change, not just a law against hitting someone while changing lanes, for example.
But even this example is a ban on a specific action: changing lanes without using a legally defined indicator with a specific amount of display time.
The equivalent would be if the law simply said, "don't change lanes unsafely" but didn't define it much beyond that, and left it to law enforcement and judges to decide, so anytime someone changed lanes "unsafely" there's now extremely unknown legal risk.
Laws also should be possible (preferably easy) to implement. Why does the DMCA ban circumvention tools? Circumvention is already illegal, and isn't it piracy that should be outlawed, not the tools that enable piracy? The reason is that piracy tools are considerably easier to regulate than piracy itself.
The DMCA ban on circumvention has been both stunningly useless at discouraging piracy and effective at hurting normal users, including such glorious stupidity as being used to block third-party ink cartridges.
> Laws should be about the outcome, not about processes that may lead to an outcome.
Some outcomes are pretty terrible, I think there are valid instances where we might also want to prevent precursor technology from being widely disseminated to prevent them.
There are certainly types of data that are already prohibited for export and dissemination. In this case, I would argue no new law is needed: the existing laws cover the export or dissemination of dual-use technologies. If the LLM becomes dual-use/export-restricted/etc. because it was trained on export-restricted/sensitive/etc. data, it is already illegal to disseminate it. Enforce the existing law rather than use taxpayer money to ban and police private LLM training because this might happen.
> Otherwise you also have to complain about the stifling of open source bioagent research, open source nuclear warheads, open source human cloning protocols
No, actually you don’t.
This is just a slippery slope that suggests that any of these examples are even remotely comparable to AI. There is room for nuance and it’s easy to spot the outlier among bioagent research, nuclear warheads, human cloning, and generative artificial intelligence.
Unfortunately, I think you will see this differently in a few years: that AI is not an outlier (in the fortunate case where there were enough "close calls" that we're still around to reflect on this question).
Agree that artificial intelligence is an outlier. I think it is the technology with the greatest associated risk of all technologies humans have worked on.
It’s unhelpful to the argument when you do this, and it makes our side look like a bunch of smug self entitled assholes.
The reality is that AI is disruptive but we don’t know how disruptive.
The parent post is clearly hyperbole; but let's push back on what is clearly nonsense (i.e., AI being more dangerous than nuclear weapons) in a logical manner, hm?
Understanding AI is not the issue here; the issue is that no one knows how disruptive it will eventually be; not me, not you, not them.
People are playing the risk mitigation game; but the point is that if you play it too hard you end up as a Luddite in a cave with no lights because something might be dangerous about “electricity”.
I disagree. Debating gives legitimacy, especially when one begins to debate a throwaway comment that doesn't even put an argument forward. The right answer is outright dismissal.
Someone who creates very dangerous items needs to take responsibility for them. Or their production needs to be very heavily regulated. That is just a reality. We don't let companies sell grenades on street corners.
The running away from responsibility is one of the things I like least about big tech.
Sure, ultra-hazardous activities are regulated differently from other activities, including under tort law, but generic AI tools are not ultra-hazardous by nature. No piece of software is, until it is connected in some way to real world effects. Take an object-detection algorithm. There's absolutely nothing inherently dangerous about identifying objects in a video stream. But once you use the algorithm to create an automatic targeting system for a drone with a grenade strapped to it, it does become hazardous. But that's no reason to regulate the algorithm as if it were hazardous itself, at least no more so than it is to regulate the drone. As you point out, we regulate hand grenades. We do not regulate the boxes hand grenades are delivered in, or the web framework used for building a website that can be used to purchase hand grenades.
All technology has good and bad uses and you can’t hold the maker accountable for all of those. At some point you have to hold users and buyers accountable or just stop developing anything.
When a person uses a car to drive into a crowd, do we blame the automobile manufacturer? Do you blame Kali Linux when someone uses it to hack a remote system? What about Apple when an iPhone is used to call in a threat to a school?
After all of the times that I have heard this argument, I now believe that the lesser evil is allowing people to sell grenades on street corners. This logic causes complacency in users of products and removes any responsibility on the part of malicious actors who still find ways to use the "softened" version of these products badly. They will now just blame the people who didn't "soften" them properly.
So no thank you, bring back responsibility to end users of products, and allow suppliers to develop the best capabilities they can.
This is a strawman argument. LLMs, like books, are not inherently dangerous. Grenades are, and lack any legitimate purpose beyond indiscriminate killing.
LLMs are functions of their training data, nothing more. This is evidenced by how we see very different model architectures produce essentially the same result. All of that training data is out there, on the internet, in books; none of that “dangerous” knowledge is banned or regulated, nor should it be.
Given the number of AI deaths (a handful, if we're counting very generously) and gun deaths, or car deaths, or even deaths caused by refusal to vaccinate, I'm fascinated we're choosing autocomplete on steroids as a "very dangerous item".
By all means, let's have responsibility for actual outcomes. That bill is talking about imagined outcomes.
The definition of harm is buried deep in the bill; here's the list:
(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.
(B) At least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents.
(C) At least five hundred million dollars ($500,000,000) of damage by an artificial intelligence model that autonomously engages in conduct that would violate the Penal Code if undertaken by a human.
(D) Other threats to public safety and security that are of comparable severity to the harms described in paragraphs (A) to (C), inclusive.
That means AI for drug discovery and materials science development, AI for managing electricity grids and broadband traffic, AI in the financial and health services sectors, etc. Then there's the military-industrial side, which this legislation might not even touch if only federal contracts are involved. Classified military AI development seems reckless; hasn't anyone seen WarGames?
I really hate the (apparently very popular) idea that we should be shifting responsibility away from end users and toward providers and makers of tools. From playgrounds to drugs to software, our society wants to force the suppliers to make things safe by design rather than requiring and educating end users on responsible use.