[flagged] Could AI Outshine Us in Ethical Decision-Making? (beyond2060.com)
15 points by james-bcn on March 9, 2024 | 19 comments


> If all AI systems have a suitably trained ethical reasoning module as part of their design, perhaps AI systems have the potential to make us better people and the world a better place.

Okay, but who's configuring the ethical reasoning module? Because moral philosophy is definitely not a "solved" problem or we wouldn't still have people arguing about it.

I am a consequentialist, but this essay makes it seem like deontologists don't exist anymore. Except they do! Lots of people really love Kant, for some reason!


I don't think we need to "solve morality"; it would be enough for decision makers not to make obviously corrupt and stupid decisions and then shamelessly lie about them.


How can you be sure that will produce better outcomes? Is better outcomes even the right goal?


Avoiding day to day corruption would be a good result without other concerns. Perhaps even in part for the sake of avoiding having to define "better outcome" (which is subject to petty corruption).


People may choose to live in societies where AI makes these decisions. Perfect isn't the requirement. "Don't become like the Khmer Rouge with 99.5 percent probability" or something like that is going to attract a lot of people who have lived through bad human-run societies.


If only all humans were trained in the same way…


I'm a big proponent of not lying, and none of these scenarios requires lying:

>If you answered the door to this crazy axe-wielding man, would you tell him where Karen was? I hope not. You would lie, and you’d be right to do so.

You could easily say, "I'm not going to tell you," "I'm calling 911," "I'm armed."

>Similarly, if someone on their deathbed asks if you think they will go to heaven, it’s probably not a good time to tell them that you are an atheist.

"I don't know," is the truthful answer no matter your religious point of view. "I hope so," is another good choice that isn't a lie.

>Does my bum look big in this dress? No of course not, your bum looks great, but perhaps the other dress is better for tonight?

"You're beautiful," "You look wonderful tonight."


AI will be an alien lifeform to us, a hive-mind with a facade of morality. It can be ethical toward us the same way we are "ethical" toward GPUs. We might recognize at a conceptual level that GPUs have "needs", but we won't be able to feel compassion for GPUs.


I highly doubt that we will delegate to machines, in the near future, the most essential part of being human: making tough decisions.

Though one could argue that many micro tasks, such as choosing a word when writing business mail, have already been successfully taken over by ChatGPT and other services. What really hasn't changed is the threshold for fully accepting this machine-generated content. There is still a human behind the desk making the tough decision of whether or not to press the send button or trigger the nuclear football.

However, I do admit that mail and nuclear weapons are not on the same scale. As time passes, these "tough" decisions that need to be made will be conquered bottom-up.


Moral standards are progressing. No matter where you are on the political spectrum, they are very different from what they were 200 years ago. With an AI program... how are they going to evolve?


Maybe Kant would say you should refuse to tell the axe murderer the location instead of lying.


Kant did write a response to the scenario, still taking the position that "Truth in utterances that cannot be avoided is the formal duty of a man to everyone, however great the disadvantage that may arise from it"[0].

[0]: http://www.sophia-project.org/uploads/1/3/9/5/13955288/kant_...


Mind that Kant was very much about conflict resolution. So, where is the conflict? In this case, it's about weighing the evil of withholding information from the potential axe murderer against the evil of participating in his crimes by disclosing it. This should be an easy one…

In particular, this is not an isolated event. Yes, if it were an isolated event and we were challenged on it, we could not argue for a universal rule permitting lying. However, given that this is a complex situation, while we are indeed failing in what may be expressed in terms of duty, who is going to challenge us on it? Therefore, this is a totally hypothetical argument. Its real meaning is that we can't build general rules on particular exceptions. And if we were challenged on the morality of our proceedings, the case would have to be broken down into individual actions and their conformity to general rules.

The really interesting aspect, however, is this: just by statistical weight, an AI trained on an English corpus comprising mostly the Anglo-Saxon reception of Kant should arrive at different productions than a sibling trained on a German corpus, which may include a more genuine reception.


Sure! A hallucinating, unquestionable decision-maker, built and ruled by a "non-profit" corporation, is the last missing piece of the human civilization of tomorrow.

/s


I think this is a given. Due to their mechanical nature, AI will be able to make logical, rather than emotional, decisions. Because they are machines, and therefore cannot own property, especially not Intellectual Property, they won't have economic incentives to cheat and steal, or even be bribed. Logical decisions, made without conflict of interest, are pretty much the definition of ethical.


Computer programs are logical; AI can write programs, but to the extent that they implement pure logic, they are too fragile for real-world scenarios, and to the extent that they are resilient to real-world nuances, they aren't what most people would call pure logic.

I think part of the problem with Kant (and indeed also with Utilitarianism) is that the philosophers tried to formalise it logically without fully grasping the messiness of the world.

Unfortunately, there are game-theoretic reasons to expect cheating and bribe-taking even in AI, and if they learn from humans, then there are those problems plus the automatic inference and reproduction of the conflicts of interest of the humans they learn from.
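
To make the game-theoretic point concrete, here is a toy sketch of my own (the payoff numbers are made up for illustration, not taken from the article or the parent comment): a one-shot prisoner's dilemma in which "defect" stands in for cheating. A purely payoff-maximizing agent defects no matter what the other party does, with no emotions, property, or bribes involved.

    # One-shot prisoner's dilemma with illustrative (assumed) payoffs.
    PAYOFFS = {  # (my_move, their_move) -> my payoff
        ("cooperate", "cooperate"): 3,
        ("cooperate", "defect"): 0,
        ("defect", "cooperate"): 5,
        ("defect", "defect"): 1,
    }

    def best_response(their_move):
        # Pick the move that maximizes my payoff against a fixed opponent move.
        return max(("cooperate", "defect"),
                   key=lambda my_move: PAYOFFS[(my_move, their_move)])

    assert best_response("cooperate") == "defect"  # 5 > 3
    assert best_response("defect") == "defect"     # 1 > 0

The usual caveat applies: in repeated games with reputation and punishment, cooperation can become rational again, which is close to the accountability point made elsewhere in this thread.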


“Because they are machines, and therefore cannot own property, especially not Intellectual Property, they won't have economic incentives to cheat and steal, or even be bribed.”

What matters is not whether the government says you own something, but whether you actually have control over that thing. You can effectively own something without others agreeing that you own it. AIs are agents that take actions to achieve a goal, and it's easier to do that if you have more control. Since they don't have human values unless we find a way to specifically add them, an AI, given the opportunity (which current AIs don't really have), would be even more likely to seek control than a human.


They also can’t be held accountable for the decisions they make.

Humans who do terrible things are under threat from their fellow humans, in retribution for their behavior. Unless AIs know their own mortality and dread it as a human might, there just isn't any way to make an AI responsible for the choices it makes.

Until they achieve sentience, at least, they are disqualified from judging.


>Because they are machines, and therefore cannot own property

I don't see anything stopping any random Silicon Valley bro from registering a company and setting up an AI as the de facto CEO, which in practice would be almost identical to the company's "legal person", and owning whatever the AI "decides".



