Aug 16 2019
 

The algorithms that detect hate speech online are biased against black people

The title is misleading. The algorithms probably don’t know anything about the people whose posts they read other than the words they use. What the algorithms do is their job. If some people use certain Naughty words more than others… shrug. Computers don’t know from context. And they certainly can’t tell if Person A using the exact same word in the exact same sentence as Person B is permitted because Person A is in a special protected class while Person B isn’t.
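A minimal sketch of what a naive keyword filter actually sees, to make the point concrete. The wordlist, threshold, and function names here are made up for illustration; real moderation systems use trained classifiers, but the input is still just the text, not the author.

```python
# Sketch of a naive keyword-based content filter (illustrative only).
# The blocklist below is hypothetical.

NAUGHTY_WORDS = {"fredo"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any blocklisted word."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & NAUGHTY_WORDS)

# Identical sentences get identical verdicts. The filter has no idea
# who wrote them, or whether "Fredo" is an insult or a cousin.
print(flag_post("Happy birthday, Fredo!"))  # True, regardless of author
print(flag_post("Happy birthday, Fred!"))   # False
```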

One of the big problems I have with “hate speech algorithms,” apart from the whole BS notion of hate speech in the first place, is the mutable nature of the English language. Until a few days ago, almost nobody knew that “Fredo” was an ethnic slur, the Italian equivalent of the Naughty-word. Largely, of course, because it isn’t and hasn’t been. But let’s say someone in a position to make such a determination determines that, indeed, “Fredo” is a Bad Word. Well, for a while you’re going to have people utterly stumped when every message to their cousin Fredo goes missing, until they learn that they now need to use cutesy euphemisms. And do the algorithms work backwards? Will messages to, about and from Fredos posted over the last thirty fookin’ years be erased from Yon Interwebs? Will Facebook pages devoted to fettuccine alfredo be insta-nuked? Will Frodo Baggins be pre-emptively dumped down the memory hole because his name is just too close? Will invocations of the Norse goddess Frigga cause the servers to melt down and the FBI to be called over the Super-Hate that comes from merging two Naughty words into one?

But in the meantime, enjoy the spectacle of yet another Social Justice Initiative turning around and biting the SJWs square in the taint.
