Hacker News

I just want to appreciate how well written and thought out this was. I have spent countless hours reading about AI ethics, especially from Big Tech sources, but this note is leaps beyond them. Compare it to the disastrous letter that effectively knee-capped American AI while proposing flimsy AI ethics in about 500 words (https://futureoflife.org/open-letter/pause-giant-ai-experime...). It should be another red flag that America's $500 billion Stargate project is being led by people like Sam Altman and Larry Ellison, who sing doomsday prophecies while the Vatican makes sincere efforts to understand AI.

I find myself really admiring this, and I think it may very well be the AI Magna Carta. There are so many gems, and while many of the sources are rooted in Catholicism, there is also incredible depth of research, even extending to "On the foundational role of language in shaping understanding, cf. M. Heidegger." The note also builds on numerous other Vatican discussions, including this supplemental one: https://www.vatican.va/content/francesco/en/speeches/2024/ju...



It is well thought out. The "AI Magna Carta" is a stretch, though.

Some good insights:

60. Anthropomorphizing AI also poses specific challenges for the development of children, potentially encouraging them to develop patterns of interaction that treat human relationships in a transactional manner, as one would relate to a chatbot. Such habits could lead young people to see teachers as mere dispensers of information rather than as mentors who guide and nurture their intellectual and moral growth. Genuine relationships, rooted in empathy and a steadfast commitment to the good of the other, are essential and irreplaceable in fostering the full development of the human person.

That's a good one. Teacher time is a scarce resource, but the chatbot is always there, and undemanding if not asked anything.

Kids who grow up talking mostly to AIs may have that kind of relationship with the world. Historically, kids who grew up with servants around sometimes defaulted to that kind of transactional relationship. Now that can scale up. Amusingly, asking Google's AI about "bringing up children with servants" produced an excellent summary of the topic.

Years ago, the French Catholic author Georges Bernanos warned that “the danger is not in the multiplication of machines, but in the ever-increasing number of men accustomed from their childhood to desire only what machines can give.”

That's an argument against too much screen time for kids.


I would put a slight twist on it, though: it seems to me that kids begin by default with very transactional patterns, and must be taught to have "relationships rooted in empathy and a steadfast commitment to the good of the other". The very patience that, say, teachers or nursery staff have to show gives that impression: the ideal staff member is always patient, never tired or irritable, never bored with what the children do or say, and so on; and it is partly the parents' job to point out to children that the professionals they interact with are real people with lives of their own.

The challenge with LLMs is that there is no "real person" behind them with a life of their own. The AI doesn't go home at the end of the day, never needs rest, never needs "me time" or a chance to let its guard down, and has no purpose independent of serving the child (or some other human). So if we were to replace all the service workers in our lives with AIs, we'd lose plenty of opportunities for developing empathy.


Children need human attention to thrive.

Classes are WAY too big.

There have been several very successful projects focused on getting adults to do homework with children.


Way too big for whom? While it is a common trope that classrooms with 25+ children or teenagers are too big, I have rarely seen a comment complaining that university lectures host too many students. How come?

Good lecture design is a direct function of the percentage of students in a room who understand the input without needing much clarification. There's a reason we have different forms of teaching and learning in universities: big, passive lecture halls make sense for transferring basic information to a lot of people; small study groups make sense for deepening knowledge of a topic; mid-sized seminars are useful for getting students to question their own understanding of a topic and shed blind spots in their conceptions. School classrooms are similarly sized (at least in my country) to university seminars.

It is not the number of pupils that is too high. It is the architecture and structure of teaching and learning in schools, which doesn't account for different learning and teaching formats, and, even more so, ill-equipped teachers. Studies have found again and again that classroom size, though often discussed, is actually a minor factor in the quality of teaching and learning. The number one factor, by far, is the teacher. If you have a capable teacher who can kindle the interest of their students, performance goes up. If you have an unmotivated, badly or barely trained teacher (trained not only as a teacher, but as a storyteller and facilitator), you can shrink the classroom as much as you want; the students won't learn anything.

The same goes for assisted homework sessions. If the adult can ignite the interest of the student and get past the "I have to do this, but I don't even know why", it will work. Otherwise... well, ChatGPT at least provides a baseline of quality in explaining and rephrasing learning material.

So yes, I totally agree that children need human attention for good development, but not necessarily during their study time. If you look at the "hole in the wall" experiments by S. Mitra, you'll see that when a granny asks with earnest interest what a child has learned that day, the kid profits massively from the interaction, even though the granny has no clue what the kid is talking about.


Children at young ages are still learning how to process input, so smaller classes are likely beneficial for everyone, as each child gets more attention. University students do not need such attention; they miss lectures and catch up on them independently if they need to.


It could be argued that the emergence of the web and search engines in particular has established this as a common pattern long before AI was around. I'm not convinced that AI represents a dramatic change to this behavior, though the point about anthropomorphizing AI likely acts as a magnifier.


I think the main difference is the degree of anthropomorphizing that happens with new chatbots. I mean, most kids in the 2000s didn't believe they were literally asking Jeeves a question, but a lot of users today actually think of AI as an anthropomorphic being.


> but a lot of users today actually think of AI as an anthropomorphic being.

You think more than 10% of users?


It's easily closer to 90% than 10%.


I'm sure we could extend this further, to the movement of people out of villages into cities and the rise of transactional capitalism.


> Such habits could lead young people to see teachers as mere dispensers of information rather than as mentors who guide and nurture their intellectual and moral growth.

In the US,

"guide and nurture their intellectual and moral growth"

is not always regarded as the best or even a desirable part of teaching.


> Genuine relationships, rooted in empathy and a steadfast commitment to the good of the other, are essential and irreplaceable in fostering the full development of the human person.

This reminds me of one of the main themes of Neal Stephenson's 1995 novel "Diamond Age": being "raised" by an AI agent, with vs without a caring human in the loop.


Reminds me of Inter Mirifica.

https://www.vatican.va/archive/hist_councils/ii_vatican_coun...

These are notable because they are not tweets or op-eds, one among thousands produced daily to keep you hooked on a source of information.

Rather, these are published once by the church, as part of its core mission and in response to the events themselves. There is not necessarily a huge conversation here, although of course there might be conversations that led to the letter and conversations that arise from it, but the core of the church's message is very clear and static. It is long, yes, but you only need to read it once and you'll be up to date with the church for years. You don't need to turn on the news every night or check your Twitter feed every 20 minutes to stay current.


One may or may not appreciate the religious aspect, but the Vatican has always been a hub for “refined thinkers.” And when it comes to establishing an (initial) point of discussion on such an ethically significant topic, I believe that the amount of thought distilled into this page has been considerable.


The Jesuit order - to which Pope Francis belonged (belongs?) - has a long and notable history of contributing to science and scientific discovery. So they are not just thinkers, but doers.


I generally agree that the particular "rationalist" fears of AGI autonomy are silly, but your statement here, "the disastrous letter that effectively knee-capped American AI all while...", seems quite implausible. The same thing that makes the letter shallow means its signers aren't going to hesitate for a second when they see an opportunity for profit.


Mamma mia. Yes, I totally agree with what you wrote: this is a landmark, profound, historic (and very courageous) document.


It will be funny to discover that they were aided by ChatGPT to write this :)


> I just want to appreciate how well written and thought out this was

The Catholic Church is famous for writing good things and then failing to put them into practice. It took until 1992 for an official pardon of Galileo Galilei (1564–1642), and we are still missing a few words on people like Giordano Bruno, burned at the stake because he dared to speculate about life beyond Earth.

I personally find this text closer to a philosophical rant than to an accurate analysis of the current situation.

> The commitment to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation serves as a criterion of discernment for developers, owners, operators, and regulators of AI, as well as to its users. It remains valid for every application of the technology at every level of its use.

This is being said by the organisation that decided American natives lacked souls and could therefore be killed; the very same organisation that helped promote slavery across the world, worked with the Nazis during the Second World War, and is well known to have supported far-right governments in South America in the 1970s; the very same organisation that campaigned against condom use in Africa to prevent and contain AIDS.

> 54. Furthermore, there is the risk of AI being used to promote what Pope Francis has called the “technocratic paradigm,” which perceives all the world’s problems as solvable through technological means alone.[106] In this paradigm, human dignity and fraternity are often set aside in the name of efficiency, “as if reality, goodness, and truth automatically flow from technological and economic power as such.”[107] Yet, human dignity and the common good must never be violated for the sake of efficiency,[108] for “technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary, aggravate inequalities and conflicts, can never count as true progress.”[109] Instead, AI should be put “at the service of another type of progress, one which is healthier, more human, more social, more integral.”[110]

TL;DR: yes, you can use AI, but only for a few things; for other matters, please refer to your local community priest.


It is a rehashing of the same stalled philosophical debates that are already tired. They didn't present any scientific evidence for biological necessity for intelligence, nor did they assert their religious authority. It is completely pointless.


You're asking for proof that doesn't exist and using its absence as proof that you're right. That doesn't look very constructive to me.


I am saying that a document that says nothing new and claims no authority is not a Magna Carta.


> claims no authority

It should be obvious what authority is implicit in a document from the vatican.


You don't understand what I am saying. It has the authority to make a decision about these questions, but it doesn't claim that authority and exercise it.

It could state "intelligence is something only humans can have, because we have God's authority to say so", but instead it tries to use science and philosophy to make those claims. That doesn't work, because everyone else is trying the same thing, and it is just a mire, since there is no science that can prove intelligence is biologically based and there isn't even a standard definition of intelligence.

Instead of joining everyone else in the same tired debates while providing no proof for their conclusions, they had an opportunity to use their religious authority to say "Because God told us", or whatever, and they didn't; so this document is effectively pointless.


I don't really care about decisions; I care about truth and wisdom, and there is plenty to be found in this document.


The document is just a summary of a bunch of arguments and assertions that have yet to be proven either way. Every single thing they state as a given can just as convincingly be stated as not being a given. If they had claimed religious authority for those statements, at least it would have meaning. I am glad you found wisdom in it, but anyone familiar with the debate over a necessary biological basis for intelligence will find it unconvincing.



