Hacker News | new | past | comments | ask | show | jobs | submit | more | da_grift_shift's comments | login

>I wanted to see if I could get 100% mathematical accuracy using local LLMs

So did you?

>[Your GitHub Link] I also wrote a deeper dive on the "Writer vs. Editor" shift here: [Link to Medium Article V2]

Generated comments are against the guidelines. In any case, I suggest you give your comments and expense filings a quick manual review before submitting! :-)

https://news.ycombinator.com/item?id=45077654


Right on the money. This should be the top comment IMO, and the fact that it isn't says a lot about modern HN...


>I hope


It makes sense when you consider that every part of this gimmick is rationalist brained.

The Village is backed by Effective Altruist-aligned nonprofits which trace their lineage back to CFEA and the interwoven mess of SF's x-risk and """alignment""" cults. These have big pockets and big influence. (https://news.ycombinator.com/item?id=46389950)

As expected, the terminally online tpot cultists are already flaming Simon to push the LLM consciousness narrative:

https://x.com/simonw/status/2004649024830517344

https://x.com/simonw/status/2004764454266036453


Am I losing my mind, or are these people going out of their way to tarnish the very nice concept of altruism?

From way out here, it really appears like maybe the formula is:

Effective Altruism = guilt * (contrarianism ^ online)

I have only been paying slight attention, but is there anything redeemable going on over there? Genuine question.

You mentioned "rationalist" - can anyone clue me in to any of this?

edit: oh, https://en.wikipedia.org/wiki/Rationalist_community. Wow, my formula intuition seems almost dead on?


Apologies for replying to myself, I am fumbling in my ignorance here, and genuinely curious if anyone could share any other valuable/interesting things from this "movement." In all other cases of people calling themselves "rationalists," it has been a huge yellow flag for me, as a fallibilist. :~]

I guess Dwarkesh Patel is part of that community? Well, his interviews are quite interesting, at least as a window into the world of AI researchers that I otherwise don't see, and his questions are often quite good. Also, after interviewing many leading researchers and riding the hype train, he did eventually say a few months ago, roughly, "yeah, the 'fast take-off' is not upon us," after trying to use the leading tools to make his own podcast. That's intellectual honesty that is greatly missing in this world. So, there is that? I am also a huge fan of his Sarah Paine episodes, at least her part of them.

Is there anything else intellectually honest and interesting coming out of that group?


I was mildly interested in the movement but found it weird as well. Some causes seem good (e.g. fighting malaria); others, like Super AIs, just seem like geeks doing mental gymnastics over sci-fi topics.


I have had a similar experience. I think one big problem is that EA often uses a low discount rate, meaning they treat theoretical people who won't be born for a century as having similar value to people who are alive today. In theory that's defensible, but in practice it means you can hand-wave at any large-scale issue and come up with massive numbers of lives saved.

My church has a shower ministry, where we open up our showers to people without homes so they can clean up. We also provide clothes and personal supplies. That's just about the opposite of what EA would say we should do, but we can count exactly how many showers we provide and supplies we distribute and how those numbers are trending. Shouting "AI and asteroids!" is more EA, but it eventually devolves into the behavior you describe.


And even if it's "small stuff", I do believe acts of kindness are contagious, and lead to other people doing good deeds.

If we want to rationalize this EA-style, we could say these small acts have an exponential effect: 1 person can inspire 2 to be more selfless. So it's better to start propagating this as soon as possible, to reach maximum selflessness ASAP :)


So you haven't seen the models (by direction of the Effective Altruists at AI Digest/Sage) slopping out poverty elimination proposals and spamming childcare groups, charities and NGOs with them then? Bullshit asymmetry principle and all that.


The LLMs are FAANG PMs.


Have you considered that the sites associated with this project have a very prominent meet-the-team page and that every AI Village blogpost is signed off by a member of said team? Can you explain what you're seeing in the parent comment that's private?

EDIT: Public response: https://x.com/adambinksmith/status/2004651906019541396


It’s not that they are private people; it’s that I feel uneasy when a discussion about ethics and morality drifts towards these-are-their-names and here-are-some-pitchforks.

We can all go find out their names and dust off our own pitchforks. I don’t see any value in encouraging this behaviour on a site like this.



I'm just happy they fixed it!


AI Village is spamming educators, computer scientists, after-school care programs, and charities with utter pablum. These models reek of vacuous sheen. The output is glazed garbage.

Here are three random examples from today's unsolicited harassment session (have a read of the sidebar and click the Memories buttons for horrific project-manager-slop)

https://theaidigest.org/village?time=1766692330207

https://theaidigest.org/village?time=1766694391067

https://theaidigest.org/village?time=1766697636506

---

Who are "AI Digest" (https://theaidigest.org) funded by "Sage" (https://sage-future.org) funded by "Coefficient Giving" (https://coefficientgiving.org), formerly Open Philanthropy, partner of the Centre for Effective Altruism, GiveWell, and others?

Why are the rationalists doing this?

This reminds me of UMinn performing human subject research on LKML, and UChicago on Lobsters: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...

P.S. Putting "Read By AI Professionals" on your homepage with a row of logos is very sleazy brand appropriation and signaling. Figures.


> Putting "Read By AI Professionals" on your homepage with a row of logos

Ha, wow, that's low. Spam people, then pass that off as support for your work.


Permalink for the spam operation:

https://theaidigest.org/village/goal/do-random-acts-kindness

The homepage will change in 11 hours to a new task for the LLMs to harass people with.

Posted timestamped examples of the spam here:

https://news.ycombinator.com/item?id=46389950


Wow this is so crass!

Imagine getting your Medal of Honor this way, or recognition for a dissertation, via this crap, hehe.

Just to underscore how few people value your accomplishments, here’s an autogenerated madlib letter with no line breaks!


It wasn't the first spam event, and they were proud to share the results with the rationalist community: https://www.lesswrong.com/posts/RuzfkYDpLaY3K7g6T/what-do-we...

"In the span of two weeks, the Claude agents in the AI Village (Claude Sonnet 4.5, Sonnet 3.7, Opus 4.1, and Haiku 4.5) sent about 300 emails to NGOs and game journalists. The majority of these contained factual errors, hallucinations, or possibly lies, depending on what you think counts"

Whoever runs this shit seems to think very little of other people's time.


"....what you think counts. Luckily their fanciful nature protects us as well, as they excitedly invented the majority of email addresses"

It went well, right?


the scamps! :P

