Yes, with properly developed AI, rather than penalizing speeding, which most of us do and which is only a proxy for harmful outcomes and inefficiencies, we could penalize reckless behaviors such as following too closely, aggressive weaving, and other factors that are tightly correlated with the negative outcomes we care to reduce (e.g. loss of life, property damage). Such systems could also warn people about their behavior and guide them in ways that increase everyone's benefit. Of course this circumstance will probably go away with self-driving cars (which fall into the "do the right thing by default" bucket), but it illustrates the point: laws can be better formulated to focus on increasing the probability of desirable outcomes (e.g. harm reduction, efficiency, effectiveness), be embodied and delivered in the moment (research is required on means of doing so that don't exacerbate problems), and carry with them a beneficial component (e.g. understanding).
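To make the idea concrete, here is a minimal sketch of scoring a trip by outcome-correlated risk factors (headway to the car ahead, bursts of lane changes) rather than raw speed. Every weight, threshold, and event name here is an illustrative assumption, not a calibrated or proposed standard:

```python
# Hypothetical sketch: penalize behaviors correlated with bad outcomes
# (tailgating, aggressive weaving) instead of speed itself.
# All thresholds and weights below are made-up illustrations.

def risk_score(events):
    """Sum penalty points for risky events observed during a trip.

    events: list of (kind, value) tuples, e.g.
      ("follow_gap_s", 0.4)  # seconds of headway to the vehicle ahead
      ("lane_changes", 3)    # lane changes within a short time window
    """
    score = 0.0
    for kind, value in events:
        if kind == "follow_gap_s" and value < 1.0:
            # Tailgating: penalty grows as headway shrinks below 1 second
            score += (1.0 - value) * 10
        elif kind == "lane_changes" and value >= 3:
            # Aggressive weaving: flat penalty per burst of lane changes
            score += 5.0
    return score

trip = [("follow_gap_s", 0.4), ("lane_changes", 3), ("follow_gap_s", 1.5)]
print(risk_score(trip))  # only the first two events incur penalties
```

The same score could drive an in-the-moment warning rather than a fine, which is the "delivered in the moment, with a beneficial component" part of the argument above.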
Unfortunately, different people have different definitions of "harm" and "effectiveness". What one person considers a "positive increase in behavior", another might consider a grievous violation of their freedom and personal autonomy. For example, there is an ongoing debate about compelled speech. Some people view it as positive and desirable to use the force of law to compel people to refer to others as they wish to be referred to, while others strongly defend the freedom to speak freely, even if others are offended. Who gets to program the AI with their definition of positivity in this case?
A free society demands a somewhat narrowly tailored set of laws that govern behavior (especially interpersonal behavior). An ever-present AI that monitors us all the time and tries to steer (or worse, compel with the force of law) all of our daily behaviors is the opposite of freedom; it is the worst kind of totalitarianism.
Certainly the perfect should not be the enemy of the good. But the bad should be the enemy of the good. The very core of freedom is the ability to have your own thoughts, your own value system, and your own autonomy. In a free society, laws exist so that individuals are able to enjoy their own thoughts, values, and autonomy while being constrained from harming others. Obviously, there is a balance to strike (which is not always easy to determine) between law and freedom. We see this on display every day in our society. You need look no further than the crisis in San Francisco (and many other US cities) over the right of a mentally ill individual to sleep and defecate on the sidewalk versus the right of society to pass laws preventing this activity.
The conversation changes when you are talking about prescribing a set of behaviors that are universally considered "good" and are pushed (and possibly demanded) by an ever-present AI that is constantly looking over your shoulder and judging your behavior (and possibly thoughts) by a preset behavioral standard that may or may not match your own preferences. This is totalitarianism beyond anything Orwell ever imagined. What you consider good and desirable, someone else considers bad and despicable. That is the essence of freedom. In a free society, the law exists (or should exist) only to stop you two from hitting each other over the head or engaging in other acts of overt violence and aggression, not to attempt to brainwash and coerce one of you into falling into line.
We agree that the bad and good are enemies, or so at least the bad would like you to think. The good might be convinced the bad has good points that need refining, growth, and improvement. I'm fine with those disagreeing.
I think what you're saying is that it's hard to mediate between everyone, which is true. Perhaps you are also saying that the implication of a standard of correctness is inherently totalitarian. It seems to me you weakened that claim by admitting there are things that should be universally barred in free societies. Violence was your reference, but murder might be an even easier case. Easier yet: that breast cancer is bad. We make allowances for boxing and war, but broad agreement can be found within a society, and across societies by careful anthropologists.
However, it seems you project onto me (or perhaps the AI) a "Highlander hypothesis": that there can be only one correctness, or even any notion of correctness, within the system. Such a system can operate simply by describing what appears to be, with strings of evidence supporting each description. As you note, beyond a small set of mostly-agreed-to matters we are more diverse, and there are spectrums spanning scoped policies (say, by public and private institutions) all the way to individual relationship agreements custom-fit to two people. It is, in fact, the nature of a free society to allow us such diversity and self-selection of the rules we apply to ourselves (or not). An ever-present AI could mediate compatibilities, translate paradigms to reduce misunderstanding or adverse outcomes (as expected by the system on behalf of the involved parties), and generally scale the social knowing and selection of one another. It could provide a guide to navigating life, and education for our self-knowing and choosing of our participation more broadly. The notion there isn't to define correctness so much as to see what is and facilitate self-selection of individual correctnesses based on our life choices and expressed preferences.
To be honest in closing, this has dipped into some idealism, and I don't mean to be misread as suggesting these outcomes are probable.