
One of the insurmountable problems, I think, is the fact that different people (and different cultures) consider different things 'harmful', and to varying degrees of harm, and what is considered harmful changes over time. What is harmful is also often context-dependent.

Complicating matters more is the fact that something being censored can be considered harmful as well. Religious messages would be a good example of this; Religion A thinks that Religion B is harmful, and vice-versa. I doubt any 'neutral network' can resolve that problem without the decision itself being harmful to some subset of people.

While I love the developments in machine learning/neural networks/etc. right now, I think it's a bit early to put that much faith in them (to the point where we think they can solve a problem like "ban all the harmful things").



This doesn't actually affect the system at all.

There's way too much moralizing from people who have no idea what's going on.

All the filter actually is, is an object recognizer trained on genital images, and it can be turned off.

The prompt isn't very relevant. I've had the filter fire on completely innocent text.

The filter is a simple checkbox in preferences. All this deep thought is missing the point. You can just turn it off.
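
If you'd rather script it than click the checkbox, here's a minimal sketch assuming the Hugging Face diffusers pipeline (the model id and arguments are the commonly documented ones; treat the details as approximate):

    # Sketch: run Stable Diffusion with the post-generation NSFW image
    # check disabled. Assumes a diffusers version that accepts
    # safety_checker=None in from_pretrained.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,
        safety_checker=None,  # the checkbox, in code form
    ).to("cuda")

    image = pipe("a photo of an astronaut riding a horse").images[0]
    image.save("astronaut.png")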


>There's way too much moralizing from people who have no idea what's going on

>All the filter actually is, is an object recognizer trained on genital images, and it can be turned off

I'm not sure if you misread something, but neither I nor the person I was replying to was talking about this specific implementation; we were talking in a more general sense.

I'm pretty sure you are the one who missed the point of the parent post and mine.


Cool story.

Anyway, the filters you're thinking about don't exist and you can download the code and use it today.

Thanks for speculating.


It's not that simple. The model was not trained to recognize "harmful" actions such as blowjobs (although "bombing" and other atrocities are, of course, in there).


It very much is, actually, that simple.

The model was trained on eight specific body parts. If it doesn't see those, it doesn't fire. That's 100% of the job.

I see that you've managed to name things that you think aren't in the model. That's nice. That's not related to what this company did, though.

You seem to be confusing how you think a system like this might work with what this company has clearly explained they did. This isn't hypothetical. You can just go to their webpage and look.

The NSFW filter on Stable Diffusion is simply a body-part recognizer run against the generated image. It has nothing to do with the prompt text at all.
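
If it helps, here's a rough sketch of that check, following the shape of the publicly released safety checker (the model id and call signature are as published; the wrapper function is my own, so treat it as illustrative):

    # Sketch: the NSFW filter is applied to the finished image only;
    # the prompt text is never passed to it.
    import numpy as np
    from PIL import Image
    from transformers import AutoFeatureExtractor
    from diffusers.pipelines.stable_diffusion.safety_checker import (
        StableDiffusionSafetyChecker,
    )

    safety_model_id = "CompVis/stable-diffusion-safety-checker"
    feature_extractor = AutoFeatureExtractor.from_pretrained(safety_model_id)
    safety_checker = StableDiffusionSafetyChecker.from_pretrained(safety_model_id)

    def is_flagged(pil_image: Image.Image) -> bool:
        # CLIP-style preprocessing of the image; no text input anywhere.
        clip_input = feature_extractor([pil_image], return_tensors="pt")
        np_image = np.array(pil_image)[None].astype(np.float32) / 255.0
        _, has_nsfw = safety_checker(
            images=np_image, clip_input=clip_input.pixel_values
        )
        return bool(has_nsfw[0])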


The company filtered LAION-5B based on undisclosed criteria, so what you are saying is actually irrelevant: we do not know which pictures were included and which were not.

It is obvious to anyone who bothers to try - have you? - that a filter was placed here at the training level. Rare activities such as "Kitesurfing" produce flawless, accurate pictures, whereas anything sexual or remotely lewd ("peeing") doesn't. This is a conscious decision by whoever produced this model.
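
For what it's worth, LAION's released metadata ships a per-image `punsafe` ("probability unsafe") score from their own NSFW detector, so a training-level filter can be as blunt as a threshold on that column. Purely an illustration; the shard name and cutoff below are made up, and whatever criteria were actually applied haven't been published:

    # Illustrative only: threshold a LAION metadata shard on its `punsafe`
    # column. The shard path and the 0.1 cutoff are invented for the
    # example; the actual filtering criteria were not disclosed.
    import pandas as pd

    meta = pd.read_parquet("laion2B-en-part-00000.parquet")  # hypothetical shard
    kept = meta[meta["punsafe"] < 0.1]
    kept.to_parquet("laion2B-en-part-00000-filtered.parquet")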


If you're done, emad already discussed this.


Who the hell is emad?



