The comment pointed out a specific example of an AI miscategorizing something because of a small sample size in the data, a failure mode that has repeatedly been traced to unintended biases in the training set, and your response is that we're just talking about "gender identities"?
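To make the failure mode concrete, here's a minimal sketch (purely illustrative numbers, not from the original comment): a model trained on a heavily skewed sample can score well overall while systematically miscategorizing the under-represented group.

```python
from collections import Counter

# Hypothetical skewed training set: 95 samples of class "A", only 5 of "B".
train_labels = ["A"] * 95 + ["B"] * 5

# A degenerate "classifier" that always predicts the majority class.
# Real models aren't this crude, but under heavy imbalance they drift
# toward the same behavior.
majority = Counter(train_labels).most_common(1)[0][0]

def predict(_sample):
    return majority

# Evaluate on a balanced test set: 50 of each class.
test = [("x", "A")] * 50 + [("x", "B")] * 50
correct = sum(predict(s) == y for s, y in test)
print(f"overall accuracy: {correct / len(test):.0%}")   # 50%

# Per-class accuracy exposes the bias: every "B" is miscategorized.
b_correct = sum(predict(s) == y for s, y in test if y == "B")
print(f"accuracy on class B: {b_correct / 50:.0%}")     # 0%
```

On the skewed training set itself this same rule would score 95%, which is exactly why small sample sizes for a subgroup can hide systematic miscategorization behind a good-looking aggregate metric.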
This is the kind of thing AI researchers write papers on (source: AI MSc), not some SJW topic, yet you saw the word "gender" and assumed it didn't belong?