
I have been thinking about this exact question (how to verify that a user is a human) and I still don't have a good answer for it.

At least not in a non-dystopian way.

Sam Altman's Worldcoin tries to achieve this using iris scanners, which I believe falls in the "dystopian" camp.



I think we'll eventually come to the conclusion that it's the wrong question.

What we really want is to allow certain types of content and to ban others. If the desirable type comes from a bot, that's fine; if the undesirable type comes from a human, it should still be removed.

By "type" of content, I mean very broadly. For instance one could create a community in which there's a limited number of posts/characters/etc. per day, not just be looking at the characteristics of the content itself. I mean all aspects of the content, data, metadata, all of it, as part of the analysis of "desirable."

If you want a pure-human community, put constraints on the community only humans can meet; heavy-duty, unscalable identity verification may play a role there.

As a bit of a "how do you build communities online" hobbyist, I think another trend we're going to see is communities getting faster on the draw to evict participants (originally wrote "people" here, but it's actually generically "participants"), for reasons beyond mere spam or active antagonism.

Historically, I think it's something most communities have done; the American/Western zeitgeist has disfavored that idea for a while in favor of expecting every community to take everyone who wants to join, but regardless of the ethics or philosophy behind that expectation, I think it's simply going to become impossible online. If the standard for participation in some community includes bots that won't be evicted no matter what they do, that community will rapidly become just another bot congregation ground and look like all the rest of them.

With people roaming the internet for new communities to infiltrate with their bots, community building will become a subtractive process rather than an additive one. That's going to be a big change, and it isn't going to be smooth or all good.


> If you want a pure-human community, put constraints on the community only humans can meet; heavy-duty, unscalable identity verification may play a role there.

I predict that this requirement would only decrease the amount of community and further increase the already high levels of isolation and alienation in society.

But I also predict that conversational AI will inevitably do this anyway, so perhaps we're just doomed.


Bootstrapping will be a big problem. A community that already has some size can potentially add an identity-checking step, but if you want to start a new community with confidence that it isn't full of unaligned bots, it's going to be a lot harder.

Once the community gets going, though, well, we have experience with that. The web used to have a lot of actual communities, where you might know someone for 10 years and perhaps meet up for picnics or something. Larger sites took a huge chunk out of them, and there's actually some disadvantage to the Internet being completely geography-agnostic... it's hard to meet up with my community of 50 people spread more-or-less evenly across the world, or even the US. But they have existed before and they may exist again.

I said it won't be all good in my original post, but it won't be all bad either. Some of what is going to be excluded in the botpocalypse is the worst of what exists today. Of course, there's going to be all kinds of incentives to create new pathologies, so who knows which way it will go in the end.


I’m not 100% sure what problem we’re trying to solve. If it is having authentic discussions with real humans… I don’t think there’s any alternative to just meeting with them in real life. Maybe we can exchange hand-written letters.

If the goal is to use the internet to produce interesting discussions and arguments, IMO it would be neat to try embracing the fact that bots are going to exist and get in the dataset. If bots produce outputs, and we pick the “good” output, that output can be smarter than the model, and go back to train the model, right?
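A minimal sketch of that pick-the-good-output loop, assuming hypothetical `generate` and `pick_best` stand-ins for the model call and the human selection step (neither is a real API):

    import random

    def generate(prompt, n=4):
        """Stand-in for a model sampling n candidate outputs."""
        return [f"{prompt} -> draft {i} ({random.random():.2f})" for i in range(n)]

    def pick_best(candidates):
        """Stand-in for a human (or community vote) choosing the best output."""
        return max(candidates, key=len)  # placeholder ranking criterion

    # Prompt/response pairs curated this way could later be used for
    # fine-tuning, so the human selection signal flows back into the model.
    training_set = []

    def curate(prompt):
        candidates = generate(prompt)  # sample several outputs
        best = pick_best(candidates)   # keep only the chosen one
        training_set.append((prompt, best))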


Altman's shitcoin won't solve a thing. The "real human" user could just be acting as a front for a spambot.


Indeed, if it's a one-time verification at account creation, or a long-duration authentication system, spam bots reusing the verified account afterwards would be an issue.

I guess that "always on" verification or short-duration authentication could make this strategy less useful.
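As a rough sketch of the difference, assuming an arbitrary 15-minute re-verification window: instead of a permanent "verified" flag set once at signup, the verification expires and has to be renewed:

    import time

    VERIFICATION_TTL = 15 * 60  # assumed: re-verify every 15 minutes

    # user_id -> timestamp of the last successful human check
    last_verified = {}

    def record_verification(user_id):
        """Call this after the user passes whatever human check is in use."""
        last_verified[user_id] = time.time()

    def is_verified(user_id):
        """A one-time check at signup would return True forever; a
        short-duration scheme only honors recent verifications."""
        ts = last_verified.get(user_id)
        return ts is not None and time.time() - ts < VERIFICATION_TTL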



