Just for those of you who aren't aware, LessWrong is a site devoted to the cult of the "singularity." The site founder, Eliezer Yudkowsky, is a secular humanist who is probably most well-known for his Harry Potter fanfiction, "Harry Potter and the Methods of Rationality." His core beliefs include that the obvious goal of the human race is immortality, that it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth, and that the most important thing we could possibly be doing right now is devoting all of our time to developing a mathematical model of a "safe AI." The site frequently dallies with discredited ideas like Drexlerian nanobots, and some on the site take absurd concepts like "Roko's Basilisk" [1] seriously.
All of this is not to discredit this specific article. And there are lots of very intelligent posters there. But I tend to take everything I read there with a massive grain of salt.
>LessWrong is a site devoted to the cult of the "singularity."
LessWrong is a discussion forum mostly pertaining to psychology, philosophy, cognitive biases, etc. A frequent topic of discussion is artificial intelligence, but it is hardly the site's central focus. Saying that LessWrong is a hangout spot for "singularity cult members" (as you call them) is simply incorrect on multiple levels. The technological singularity is no more than a scientific hypothesis, and it's slightly dramatic to suggest it has cult members worshiping it and willing it into reality. In actuality, the technological singularity just has scientists and researchers observing and theorizing about its stepping stones and outcomes. Maybe you meant transhumanists rather than "singularity cult members", which I suppose makes more sense given your other statements.
>Eliezer Yudkowsky is a secular humanist who is probably most well-known for his Harry Potter fanfiction
Yudkowsky is also a prominent researcher on a variety of artificial intelligence topics, work which enhances the field. Primarily he focuses not on developing a Strong AI (AGI), but rather on the safety issues that such a technology would pose.
>the most important thing we could possibly be doing right now is devoting all of our time to developing a mathematical model of a "safe AI."
"friendly AI"* and I'm not sure what you're talking about when you say mathematical model, you should do more research it's mostly hypotheses and ideas for system transparency.
>But I tend to take everything I read there with a massive grain of salt.
Maybe you should visit LessWrong and read some articles about cognitive biases so you understand why someone saying "massive grain of salt" makes me want to kill innocent puppies.
Are you really trying to deny that Google cars and other automated systems at least partially based on AI have safety issues? Even if we're talking autonomous, "life-like" AI, there is a long list of interesting philosophical and legal questions to be asked.
I can't say I find any of the statements here or in the article very appealing, but you shouldn't dismiss real safety/security issues just because you don't like the guy.
Are you really trying to assert that MIRI is addressing systems on the level of Google cars, in any serious technical manner? If so, can you point to examples?
No, I'm saying that AI has wider applications, and I was responding to the manned flight safety example. Also, I'm arguing that we shouldn't dismiss the guy's arguments just because he's an ass. Especially with regards to this article, we really don't need to resort to a straw man to refute what he wrote.
> Yudkowsky is also a prominent researcher on a variety of artificial intelligence topics, work which enhances the field. Primarily he focuses not on developing a Strong AI (AGI), but rather on the safety issues that such a technology would pose.
Nice defense on the other points. But no, Eliezer Yudkowsky has no peer-reviewed publications, open source code, or really anything else to point to which provides any independent assessment of his contribution to the field of AI.
He has a couple of blog posts and self-published white papers. Forgive me for being skeptical.
"3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall. You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times. That's 3^^^3. It's the smallest simple inconceivably huge number I know." [1]
That's a monumentally different setup than a mere 7 billion humans on Earth. And it's making a point about the cold, cruel calculus of moral utilitarianism, a very common and popular ethical position. Your disgust is precisely his rhetorical point...
(Edit: ...about human heuristics and biases getting in the way of the moral calculus of utilitarianism. If you are a utilitarian, there is some cutoff of X independent people -- for some very large X -- for which it is better to save that many people a slight inconvenience at the expense of 50 years of torture for one individual. This follows straight from the math of finite utilitarianism. Yudkowsky's position of trusting the math over intuition may not be intuitive for most people, but I'd be surprised if a HN reader did not agree at least with the methodology.)
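A minimal sketch of that cutoff arithmetic, with entirely made-up disutility numbers (the argument only needs them to be finite):

```python
speck_disutility = 1e-9      # assumed tiny harm to one person from one dust speck
torture_disutility = 1e12    # assumed enormous harm from 50 years of torture

# Any finite, additive utility function yields a finite cutoff X beyond which
# the aggregated specks outweigh the single instance of torture:
cutoff = torture_disutility / speck_disutility
print(cutoff)                                              # 1e+21 people
print(2 * cutoff * speck_disutility > torture_disutility)  # True: past the cutoff, specks dominate
```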
So annoying 3^^^3 people "for a fraction of a second, barely enough to make them notice" is worse than horribly torturing a single person for 50 years?
Keep in mind that the 3^^^3 dust specks are spread across 3^^^3 people and their annoyance can't be simply added together.
Most people measure goodness and badness by how they feel about it, and for anyone who feels a significant amount of badness (grief, anger, whatever) at the death of a single person, it is physiologically impossible for them to feel a million times worse about the death of a million people. It's intuitively obvious to most people, therefore, that suffering, annoyance, life-saving, and so on, are not additive: they just have to check how they feel about the situation to know that.
In order to suggest that they are additive, and that N "people annoyed" can outweigh M "people suffering", you have to first convince someone that their own internal measurement of goodness (how they feel about it) is not as accurate as some external measurement.
It's analogous to saying that "most people believe the world is flat. the burden is really on you to show that the world is really round."
Which is true. But it requires a willingness to counter one's own intuition when encountering contradictory evidence. Unfortunately, the type of person who does that is uncommon.
I don't disagree with what you actually said, but your choice of analogy suggests that you believe that questions of morality are settled and have obvious, objective answers.
In some cases, yes. If you accept utilitarianism as the reductive explanation of morality, and assume some non-controversial terminal values, then all of morality is reduced to straightforward calculations.
"The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right." -Leibniz
Unfortunately we retain some ignorance on the correct nature of utility functions (finite? time-preference adjusted? etc.), and terminal values for humans are demonstrably arbitrary.
>If you accept utilitarianism as the the reductive explanation of morality
... then LW ends up with Roko's Basilisk.
Really, you're using that as your answer to "I don't disagree with what you actually said, but your choice of analogy suggests that you believe that questions of morality are settled and have obvious, objective answers." You can prove anything if you first make it an axiom.
You can't seriously claim that utilitarianism accurately captures human moral intuitions. Variations on the Repugnant Conclusion occur immediately to anyone told about utilitarianism, and are discussed in first-year philosophy right there when utilitarianism is introduced.
LessWrong routinely has discussion articles showing some ridiculous or horrible consequence of utilitarianism. The usual failure mode is to go "look, this circumstance leads to a weird conclusion and that's very important!" and not "gosh, perhaps naive utilitarianism taken to an extreme misses something important."
For more or less exactly the same reason you accept general relativity over Aristotelian motion - it is derived from first principles using maths, can be shown to match experience even if somewhat intuitive to people, and works pretty well in practice.
> can be shown to match experience even if somewhat [un]intuitive to people, and works pretty well in practice.
I think these are the two points that those skeptical of utilitarianism have trouble with: it's exactly that it doesn't seem to match experience that started this thread. Additionally, it doesn't actually seem to work well in practice: http://econlog.econlib.org/archives/2014/07/the_argument_fr_...
I don't think he is making the assumption that they know each other.
Each individual has a tolerance of what they can comfortably cope with. If 3^^^3 people were all experiencing a pain that is below that tolerance, nobody would be prevented from happiness. However in the other situation, the tortured individual clearly would be.
That's an example of infinite or unbounded utility functions: no matter how many specks of dust in the eye, it will never add up to a single person being tortured for 50 years. Even 3^^^^^^^^^^^3 specks of dust. Unfortunately the mathematics of infinite and/or unbounded utility functions doesn't work out well. It leads to some seriously messed up edge cases. (So does finite utilitarianism, to be fair -- Pascal's mugging: http://www.nickbostrom.com/papers/pascal.pdf -- but these are fully dealt with by decision theory, whereas the infinite or unbounded cases are not.) It's not very strong, but it is evidence that we should be accepting of the calculations of finite utilitarianism, since the formalization works out better in cases which are within the realm of our experience.
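As a toy illustration of why the infinite case misbehaves (my own sketch, with made-up numbers): assigning torture infinite disutility does give "specks never add up", but it breaks the ordinary expected-utility bookkeeping.

```python
speck = 1e-9              # assumed finite disutility of one dust speck
torture = float("inf")    # torture treated as infinitely worse than any speck

n = 3 ** 27               # 3^^3 as a stand-in; 3^^^3 is not representable
print(n * speck < torture)    # True for any finite n, however large

# ...but infinities wreck the arithmetic that decision theory relies on:
print(torture - torture)      # nan: comparing "torture plus X" against "torture plus Y" is undefined
```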
I'd say that if each of those two people had the "bad experience" of having their only child killed in a car accident, that's worse than someone else having his uncle and grandmother killed in a car accident. "Bad experience" is hugely oversimplified. And let's not even start on a trillion specks of dust in a trillion people's eyes.
Because at the end of the day, no one gives a fuck about a speck of sand in their eyes. Having your balls ripped off might be different. YMMV. Human feeling is a bit more complicated than just adding.
What's the probability that having a speck in the eye at any given time has some terrible consequence (like, for instance, causing a traffic accident, or making a mistake during brain surgery)? If we assume that on the whole Earth, at any time, there's at least one person who would suffer greatly from a speck in the eye, the probability is at least 1/8e9.
Now notice that 1/8e9 of 3^^^3 is more people than have ever lived.
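Rough expected-value arithmetic behind that point, using the commenter's 1/8e9 figure and a ballpark ~1e11 for all humans who have ever lived:

```python
p = 1 / 8e9           # assumed chance that one speck causes a serious accident
humans_ever = 1.1e11  # rough estimate of everyone who has ever lived

# Even the comparatively tiny 3^^3 = 3^27 specks already yield ~953 expected
# accidents; 3^^^3 is a tower of 3s some 7.6 trillion levels tall, so its
# expected accident count exceeds humans_ever beyond any meaningful comparison.
print(3 ** 27 * p)
```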
I assumed there to be no side effects to the dust specks. Otherwise millions would die in accidents and the thought experiment wouldn't make much sense.
It doesn't assume that we can simply add their annoyance together. It assumes that, whatever function computes the total badness of some harm applied to multiple people, it diverges, and does so fast enough (where "fast enough" will probably include "exceptionally slowly", but in principle there are sufficiently slow divergent functions).
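A hedged sketch of what "exceptionally slowly divergent" could look like (a toy aggregation function of my own, with made-up numbers, not anything proposed in the thread):

```python
import math

speck = 1e-9     # assumed disutility of one speck
torture = 1e12   # assumed disutility of 50 years of torture

def total_badness(n):
    # Heavily sub-additive but still divergent aggregation: grows like log n.
    return speck * math.log1p(n)

print(total_badness(8e9))  # ~2.3e-8 for everyone on Earth: utterly negligible
# Yet because log diverges, total_badness still crosses `torture` eventually:
# log(3^^^3) is itself a power tower of 3s roughly 7,625,597,484,986 levels
# tall (up to a constant factor), vastly more than the 1e21 needed here.
```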
I don't think this is well enough established, though it seems plausible.
I think that's illusory, at best. Does time play a role? If not, this would lead to "Someone has ever got a dust speck in their eye. It is now no longer a bad thing when someone else gets a dust speck in their eye." Which seems absurd. In fact, given that we live in a universe where someone has ever got a dust speck in their eye, it would mean that getting a dust speck in one's eye is no longer the "least bad bad thing" that could happen, which violates the assumption of the thought experiment and you should substitute something more severe.
If time does play a role, you can spread out the 3^^^3 dust specks. (Obviously, we don't realistically have enough time, but then we don't realistically have enough space either).
What do you mean by "good, bad, utilitarian"? Good and bad are descriptions we attach to causes which lead to outcomes with higher (good) or lower (bad) utility.
Having either my friend have dust in his eye or some stranger tortured are outcomes with less utility in my book than the counterfactual default. Actions which lead to these outcomes are bad, actions which prevent or provide restitution are good.
I don't know what you are getting at with a recursive relationship between observers and observed systems.
It's funny but this article is actually the strongest evidence of cult behaviour. Teaching you how to be "happy" is pretty much the initial selling point of almost every cult out there. Just walk by any Scientology building.
Also from the wiki you linked [1]
> Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas — even when they're fairly sure intellectually that it's a silly problem.[5] The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture
Wow either that article is hyperbolizing this or... wow.
> It's funny but this article is actually the strongest evidence of cult behaviour. Teaching you how to be "happy" is pretty much the initial selling point of almost every cult out there.
This is nonsense. It's an article posted to a curated community blog by some guy who happens to like reading tons of psychology papers and wanted to do some research into the state of scientific understanding of happiness. Is Wikipedia selling a cult too now[0]?
> This is nonsense. It's an article posted to a curated community blog by some guy who happens to like reading tons of psychology papers and wanted to do some research into the state of scientific understanding of happiness...
...and is executive director of MIRI, the organization hosting and providing most of the content to Less Wrong.
You should take Slate[1] with an even bigger grain of salt. While Less Wrong is a canonical illustration of all the problems with high IQ people, at least they try to be accurate in their beliefs.
Which is the actual purpose of the site. Singularity, consequentialism/utilitarianism, and so on are side effects.
I'm not sure what you mean. I've been following LessWrong for a while now, but I've never seen that Slate article before. I should be clear that most posters do not agree with the idea itself, as it is fairly absurd. What's more interesting are the thought processes that might lead one to consider such an idea, which are in full evidence throughout the site.
I do not understand why there are so many haters of LessWrong. It's not a cult. No one on that site has ever bought into the Roko idea. Also, Roko's post is poorly explained there. However, everyone thought it was nonsense at the time and still does. Imagine if the most inane nonsense posted to Hacker News were used to dismiss everyone who posts here.
As for nanotechnology: it is possible - biology is proof of that - though some of the early visions may have been naive. It's likely that artificial systems can be created using a broader palette than natural selection does, and that these technologies will be powerful.
>That it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth
No, he wrote that it was justifiable to torture a man to prevent 3^^^3 dust specks from floating into 3^^^3 eyes. 3^^^3 is a mind-bogglingly large number - far far far far far more than everyone on Earth.
> I do not understand why there are so many haters of LessWrong.
A lot of us have ended up in places where they have tried to force their beliefs on us (a huge one being "debate is always the correct way to understand things, and is never inappropriate in any situation ever" - the concept of, say, cooperative discussion, or even caring about people's emotions and sense of self, seems to have never occurred to any of them that I have met).
It's rather understandable that many would see them as a cult, given the religious fervour that many of them have in forcing their beliefs on others.
>Or even caring about people's emotions and sense of self, seems to have never occurred to any of them that I have met.
But that's not relevant to a community focusing on rational discussion. And it's the exact opposite of what cults do - they try to shower their followers with love. Not caring too much about people's emotional attachments to their beliefs is appropriate on LessWrong. It is inappropriate outside of that context.
Yes. The issue is that many seem to have the complete inability to understand this. I don't care what they do in their own IRC channel or community; it's entirely inappropriate to attack somebody without their consent anywhere else, and yet they do.
You forgot to take a jab at cryonics. Eliezer sincerely believes that people who do not sign up their kids for a cryonics package are "lousy parents".
Maybe you disagree with some of the conclusions, but do you disagree with the methods it preaches? I.e., reductionism, awareness of our biases, Bayesian inference.
There do seem to be a lot of extreme viewpoints on LessWrong, so I think it is justified to take those extreme viewpoints with a grain of salt. But I also think that the core beliefs/approaches are valid, and so that should be factored in to how big a grain of salt you take things with.
I think they're probably a little too focused on the Bayesian interpretation, but yes, the site has plenty of good content. In particular, it is a really excellent way of finding effective charities. Where the site goes astray is that it has its own preconceived biases--e.g. its priors for the eventual development of a transhuman AI.
I agree with your remarks. One reason, however, why I still enjoy reading articles on the site from time to time is that they almost always make me think. Their perspectives, however flawed, have the merit of being strongly coherent, and trying to find what's wrong with an article turns out to be a very good exercise in judgment.
I agree with the grain of salt, but you're being overly dismissive. Calling them a cult surely is over the top.
It's just a bunch of grumpy nerds with a somewhat hardline "rational" fanaticism, who love to spend late dark nights on the internet reasoning about stuff.
Sure, it's probable that I'm misremembering the exact numbers. Though I realize that the numbers are critical to the proposition from his perspective, I think the fact that he would consider it at all probably differentiates his worldview from that of a lot of people.
"Sure, it's probable that I'm misremembering the exact numbers."
The exact number makes "the population of Earth" a tiny rounding error away from zero. Eliezer could be entirely right in his conclusion here ("core belief" is a gross mischaracterization - it follows from other things, not the other way around) and it could still be wrong to torture a man for a minute to spare dust specks in the eyes of a billion Earths (and that's not even getting us much closer to the number in question).
That he would consider it at all differentiates his thinking only from those who don't think about these things. "What happens in the extreme?" is a useful question, when trying to pin down how systems work.
>His core beliefs include that the obvious goal of the human race is immortality, that it is justifiable to kill a man if it would remove a speck of dust from the eye of every other person on Earth, and that the most important thing we could possibly be doing right now is devoting all of our time to developing a mathematical model of a "safe AI."
Positive proof that IQ doesn't equal smartness, much less wisdom.
Positive proof that you can make everything sound ridiculous if you half-assedly read about it, cherry-pick some sentences, and misinterpret them as it suits your preconceived opinions. Saying that GP's criticism is just inaccurate would be extremely charitable.
>Positive proof that you can make everything sound ridiculous if you half-assedly read about it, cherry-pick some sentences, and misinterpret them as it suits your preconceived opinions.
Sure. But I can't conceive of ANY possible argument or further discussion that accepts the above points and is not also silly.
I would have trouble doing that as well; fortunately, you should not accept the points above, because they are either non-issues or serious misrepresentations of what Eliezer actually believes and wrote.
It's fair to evaluate and criticize opinions, but as maaku said upthread, one should play fair.
[1] http://rationalwiki.org/wiki/Roko%27s_basilisk