Hacker News

Stylometry. It seems LLM output currently leaves enough textual clues that simple BERT-like models have no trouble picking them up; no watermarks needed.

Demo: https://huggingface.co/openai-detector - note it was not even fine-tuned on GPT-3/ChatGPT, merely on GPT-2 (a 3-year-old and much smaller model)
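For a rough idea of what "textual clues" means here, you can hand-roll a few classic stylometric surface features in plain Python. This is just an illustrative sketch, not the linked detector (which is a fine-tuned RoBERTa classifier); features like these are the sort of signal such a model learns to pick up automatically:

```python
# Minimal stylometry sketch (illustrative only, not the openai-detector model):
# extract a few surface features a classifier could use to separate writing styles.
import re

def stylometry_features(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # longer, more uniform sentences are a commonly cited LLM tell
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # vocabulary diversity: unique words / total words
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
    }

sample = "This is some real text. It was not written by a robot. It was written by a human."
print(stylometry_features(sample))
```

A real detector would feed thousands of such texts (or raw tokens, in the BERT case) into a trained classifier rather than eyeballing three numbers, but the underlying idea is the same.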



Hmmm. Here's the text that I entered:

> This is some real text. It was not written by a robot. It was written by a human. If you don't believe me, ask the guy who wrote it. He will tell you that he wrote it using his brain and fingers. Do you believe me?

The page says it needs 50 tokens to start getting accurate results and that the above text has 54. It also rates it as 99.93% fake.

Spotting robots may be easy. Spotting humans, I think, is the hard part.


> Stylometry. Seems like it currently leaves enough textual clues that simple BERT like models have no trouble picking them up, no watermarks needed.

> Demo: https://huggingface.co/openai-detector - note it was not even fine-tuned on GPT-3/ChatGPT, merely on GPT-2 (a 3-year-old and much smaller model)

But then I don't know why I'm telling you - when I enter the above text it says it's 99.86% fake, based on 80 tokens :/


Also, in many of the use cases described thus far, it's a collaboration between the AI and a human (i.e. the AI writes the first draft, the human edits). That blurs the line even further.


Agreed. If we've learned only one thing, it's that Sarah Connor + Arnie is the kick-ass combination.



