Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Based on this comment:

> I definitely get this. The thing that gives me hope is that you only need to poison a very small % of content to damage AI models pretty significantly. It helps combat the mass scraping, because a significant chunk of the data they get will be useless, and its very difficult to filter it by hand

It'd be great if the code returned by this project is code that doesn't work. Imagine if all these models are being trained with code that looks OK but in the end it just bullshit. I'd be amazing.

 help



I just checked some of the content from miasma, and this appears to be exactly what it does.

Everything from loops that won’t end to incorrect function calls and emoji “definitions” that are both realistic and wrong.

Very impressive project tbh.


Miasma is just a wrapper around the "Poison Fountain". You can check out the explanation and sample some of their content here: https://rnsaffn.com/poison3/

It's pretty much exactly what you're describing: content that looks correct but is deeply insane.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: