
Take the time to read a very pessimistic take (like [0]) and see if you reconsider.

[0]: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...



I wish it was more convincing.

As it stands, it says more about the author's egoistic view of humanity than of AI. Imagine for a second taking Feuerbach's The Essence of Religion and applying it to Descartes' rationality and Plato's nous. We'd get a critique of the rational human's construction of his own ideal - the essence of intellect.

AI threatens this ideal, and ego-threats get sublimated into existential threats by those unable to understand themselves well enough to express themselves directly.


Maybe it's wrong, and things will be fine. Maybe it's right. But you can't psychoanalyze your way to truth. Whether future AIs will destroy humanity or not is a fact about future AIs, and the landscape of intelligent systems, not a fact about Eliezer Yudkowsky.


Pass. Yudkowsky is all explanation, but apparently too important to summarize his core thesis. This is always a giant red flag for me. I am simply not going on a long journey with someone who won't make the effort to sketch out the destination in an abstract.

People write page after page about how it might kill us all in a flash, without ever offering a good explanation of why it would want to. My experience with people whose immediate reaction to something they're scared of is to destroy it is that they're panicky screechers who are an annoying distraction to the person actually handling the situation, whether that's wild animals, fire, or interpersonal violence.

I'm not saying 'just let me handle it bro, I totally got this.' There are a lot of potential risks, and I don't think anyone is qualified to say they can mitigate all of them, or even most of them. But I trust a machine intelligence - even one that's vast, cool, and unsympathetic - far more than the would-be Butlerians.


> without ever offering a good explanation of why it would want to

The point of much of the alignment debate is that people like Yudkowsky are pointing out that it doesn't need to want to; it just needs to not want *not* to strongly enough.

You're hoping for an outcome ranging from "When The Yoghurt Took Over" to "The Metamorphosis of Prime Intellect", but many other people are expecting an outcome more similar to gwern's "Clippy".


You should take the time to watch "The AI Dilemma":

https://vimeo.com/809258916/92b420d98a


One, I'm not new to these issues. I've been interested in AI for decades and thought plenty about the existential and ethical implications, though since I'm not a public figure I appreciate you have no way of knowing that. But I am very up to speed on this topic, as much as one can be without being directly involved in the industry/academic research.

Two, I would generally not set aside an hour of my time for a video without at least some indication of what it's about. I'd rather spend that time reading than in the 'hot' medium of video.

Three, I find this video deeply suspect. It purports to document a 'private' gathering, yet it's clearly a well-produced event that was intended to be documented and shared on video. People who actually want to keep a thing private are generally well able to keep it private. So while the participants have a valid point of view with many legitimate arguments, the fact of its existence suggests to me that it was absolutely intended to become public, and the 'private gathering!!' is essentially a marketing hook.


That was a weird talk to put random "China bad!" asides into. I guess they had to tailor it to a US-elite audience.




