
Wouldn't those only mean that the account was initially created by a human? Afterwards there are no guarantees that the posts are by humans.

You'd need a permanent captcha that continuously tracks whether the actions you perform are human-like, such as mouse movement or scrolling on a phone. And even then it would only deter current AI bots, and not for long, as impersonating human behavior would be a 'fun' challenge to break.
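To make the idea concrete, here is a toy sketch of the kind of behavioral heuristic such a tracker might use. This is purely illustrative: the function name, the straightness metric, and the sample paths are all hypothetical, not any real bot-detection API. The intuition is that naive automation drags the cursor in straight lines, while human mouse paths wander.

```python
import math

def straightness_ratio(points):
    """Ratio of traveled path length to straight-line distance.

    A ratio near 1.0 means a perfectly straight trajectory (typical of
    naive automation); human mouse paths usually wander, giving a
    noticeably higher ratio.
    """
    if len(points) < 2:
        return 1.0
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    return path / direct if direct > 0 else float("inf")

# A bot-like straight drag vs. a human-like wobbly path.
bot_path = [(x, 100) for x in range(0, 101, 10)]
human_path = [(x, 100 + (5 if (x // 10) % 2 else -5))
              for x in range(0, 101, 10)]

print(straightness_ratio(bot_path))    # 1.0
print(straightness_ratio(human_path))  # noticeably above 1.0
```

Of course, as the comment says, any fixed heuristic like this just becomes the next target: a bot can sample jitter from recorded human traces and pass it easily.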

Trusted relationships are only as trustworthy as the humans trusting each other; eventually someone would break that trust, and after that it would be bots trusting bots.

Since bots are already filling social media with their spew, and that output is being used to train other bots, the only way I see this resolving itself is by everything eventually becoming nonsensical, and I predict we aren't far from that happening. AI will eat itself.



>Wouldn't those only mean that the account was initially created by a human? Afterwards there are no guarantees that the posts are by humans.

Correct. But for curbing AI slop comments this is enough, imo. As of writing this, you can quite easily spot LLM-generated comments and ban them. If a verification system is in place, then banning the account bans the human behind it too, meaning you've put a stop to their spamming.



