You do understand that this has nothing to do with humans in general, right? This isn't the AI recognizing some evolutionary pattern and drawing comparisons between humans and primates -- it's racist content in the training data that specifically targets Black people.
I don't know nearly enough about the inner workings of their algorithm to make that assumption.
The internet is surely full of racist images that could teach the algorithm this association. The algorithm could also have bugs that miscategorize the data.
The real problem is that the people building and maintaining the algorithm don't fully understand how it works or, more importantly, what it has learned. If they did, they would fix the algorithm itself rather than patch it with a term blocklist.
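To be clear about what a "term blocklist" means here: it's a post-hoc filter on the model's outputs, not a change to the model. A minimal sketch (the function and label set are hypothetical, purely for illustration -- we don't know what the actual patch looks like):

```python
# Hypothetical sketch of a label blocklist: predictions are filtered
# after the fact; the model itself never "unlearns" the association.
BLOCKLIST = {"gorilla", "chimpanzee"}  # suppressed labels (illustrative)

def filter_predictions(predictions):
    """Drop blocklisted labels from a list of (label, score) pairs."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKLIST]

# The blocked label simply disappears from the results:
print(filter_predictions([("cat", 0.91), ("gorilla", 0.05)]))
```

The point is that a filter like this hides the symptom while the underlying learned behavior stays exactly the same.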