
Assuming you mean that some SFW images are mistakenly blocked (false positives), that is an easier situation than dealing with false negatives.

Once an image tests positive with your first model, you could run a second, more CPU-intensive but more accurate model on it.



That assumes you have a notably more accurate model. It's a difficult problem even with unlimited computing power.


Great idea. MobileNetV2 is the model currently used due to its small footprint (I used transfer learning on a dataset of NSFW and SFW images). A more robust model run on all positives could improve performance overall.
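The cascade described above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the scoring functions are hypothetical stand-ins (in practice the fast one would be the fine-tuned MobileNetV2 and the accurate one a larger model), and the 0.5 threshold is an assumed default.

```python
def cascade_classify(image, fast_score, accurate_score, threshold=0.5):
    """Two-stage screening cascade.

    Every image pays for the cheap model; only images the cheap model
    flags pay for the expensive one. Each score function maps an image
    to a probability-like value in [0, 1]; returns True if flagged NSFW.
    """
    if fast_score(image) < threshold:
        # Cheap model says SFW: accept immediately, skip the slow model.
        return False
    # Cheap model flagged it; confirm with the more accurate model,
    # which cuts down the false positives the first stage lets through.
    return accurate_score(image) >= threshold


# Usage with stand-in scorers (real code would wrap model inference):
flagged = cascade_classify("some_image", lambda img: 0.9, lambda img: 0.8)
```

This only helps with false positives, as noted: a true negative from the first stage is never re-examined, so images the cheap model misses stay missed.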



