Something I must stress: it is NEVER the AI's "fault". Hear me out.
The responsibility of the act of bigotry (and it is always an act) is always human.
Don't let people and corporations slide off their responsibility on inanimate things.
One of the larger reasons why I oppose surveillance by facial recognition tech, completely and without quarter, is NOT bias. It is this erasure of responsibility.
The real danger is law enforcement saying "it was not our fault, the machine told us to do it."
What you can do is not have it without adequate human supervision.
And it's not your fault for assuming so, I guess.
This may have happened because the human moderators (if any exist) who act on the algorithmic suggestions are ill-paid and tired and don't have clear guidelines, etc. Or are plain dumb. That is the best case. You all know the worst case.