>I wanted to see if I could get 100% mathematical accuracy using local LLMs
So did you?
>[Your GitHub Link] I also wrote a deeper dive on the "Writer vs. Editor" shift here: [Link to Medium Article V2]
Generated comments are against the guidelines. In any case, I suggest you give your comments and expense filings a quick manual review before submitting! :-)
It makes sense when you consider that every part of this gimmick is rationalist brained.
The Village is backed by Effective Altruist-aligned nonprofits which trace their lineage back to CFEA and the interwoven mess of SF's x-risk and """alignment""" cults. These have big pockets and big influence. (https://news.ycombinator.com/item?id=46389950)
As expected, the terminally online tpot cultists are already flaming Simon to push the LLM consciousness narrative:
Apologies for replying to myself, I am fumbling in my ignorance here, and genuinely curious if anyone could share any other valuable/interesting things from this "movement." In all other cases of people calling themselves "rationalists," it has been a huge yellow flag for me, as a fallibilist. :~]
I guess Dwarkesh Patel is part of that community? Well, his interviews are quite interesting, at least in the sense of seeing into a world of AI researchers that I otherwise wouldn't see, and his questions are often quite good. Also, after interviewing many leading researchers and riding the hype train, he eventually did say a few months ago ~"yeah, the 'fast take-off' is not upon us," after trying to use leading tools to make his own podcast. That's intellectual honesty that is greatly missing in this world. So, there is that? I am also a huge fan of his Sarah Paine pieces, at least her side of them.
Is there anything else intellectually honest and interesting coming out of that group?
I was mildly interested in the movement but found it weird as well. Some causes seem good (e.g. fighting malaria), while others, like super AIs, just seem like geeks doing mental gymnastics over sci-fi topics.
I have had a similar experience. I think one big problem is that EA often uses a low discount rate, meaning they treat theoretical people who won't be born for a century as having similar value to people who are alive today. In theory that's defensible, but in practice it means you can hand-wave at any large-scale issue and come up with massive numbers of lives saved.
My church has a shower ministry, where we open up our showers to people without homes so they can clean up. We also provide clothes and personal supplies. That's just about the opposite of what EA would say we should do, but we can count exactly how many showers we provide and supplies we distribute and how those numbers are trending. Shouting "AI and asteroids!" is more EA, but it eventually devolves into the behavior you describe.
And even if it's "small stuff", I do believe acts of kindness are contagious, and lead to other people doing good deeds.
If we want to rationalize this EA-style, we could say these small acts have an exponential effect: 1 person can inspire 2 to be more selfless. So it's better to start propagating this as soon as possible, to reach maximum selflessness ASAP :)
So you haven't seen the models (by direction of the Effective Altruists at AI Digest/Sage) slopping out poverty elimination proposals and spamming childcare groups, charities and NGOs with them then? Bullshit asymmetry principle and all that.
Have you considered that the sites associated with this project have a very prominent meet-the-team page and that every AI Village blogpost is signed off by a member of said team? Can you explain what you're seeing in the parent comment that's private?
It’s not that they are private people, it’s that I feel uneasy when a discussion about ethics and morality drifts towards these-are-their-names and here-are-some-pitchforks.
We can all go find out their names and dust off our own pitchforks. I don’t see any value in encouraging this behaviour on a site like this.
AI Village is spamming educators, computer scientists, after-school care programs, and charities with utter pablum. These models reek of vacuous sheen. The output is glazed garbage.
Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager slop):
"In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts"
Whoever runs this shit seems to think very little of other people's time.
https://news.ycombinator.com/item?id=45077654