The #YouTubeIsOverParty, it seems, has gone from my-folks-are-out-of-town beer bash to a months-long bacchanalia—and not the fun kind. This week, the streaming platform unwittingly extended the festivities with a new “family-friendly” filter meant to protect kids from seeing sexually explicit or otherwise age-inappropriate stuff. Not a bad idea—except in addition to hiding Hitler cosplay, it also scrubbed G-rated LGBTQ content.
In its original response to the ensuing outcry, YouTube said the restricted mode only applied to videos addressing LGBTQ issues that also included mature subjects like sexuality, health, and politics. But then pop duo/LGBTQ icons Tegan and Sara came knocking, pointing out that the only controversial part of their blocked music videos was their bad dancing. Prominent YouTubers like LGBTQ activists Tyler Oakley and Gigi Gorgeous also cried foul, saying even their most innocuous videos got caught up in restricted mode’s hyperactive censors.
So now YouTube is even sorrier, and says it’s fixing the problem via algorithm. (Algorithms: is there anything they can’t do? Wait, don’t answer that.) The trouble is, this problem isn’t unique to YouTube. It belongs to the whole of internet culture.
No social media platform has been able to divine the line between extreme permissiveness and undue censorship; not with algorithms, and not with AI. The technology isn’t ready—but more importantly, neither are the humans involved. People write their own blind spots and prejudices into algorithms. And the people they’re trying to moderate are an ever-expanding, global group talking about a ceaselessly shifting set of priorities in a language that mutates as fast as they can type.
The people of the internet simply don’t know how to live with each other. Tech companies can get better at helping—a lot better, even—but until humanity’s collective kumbaya outweighs its biases and tribalism, debacles like YouTube’s faulty restricted mode are inevitable.
There’s some good news here—namely that, however badly it bungled the execution, YouTube was trying to do something good. The company knows it has a content moderation problem. It’s tried enlisting users to flag problem videos, and that backfired when trolls heard about the plan. It’s tried demonetizing objectionable videos, and people hated that too. Its algorithm changes (real or imagined) are always a story.
But despite those efforts, YouTube didn’t notice megastar PewDiePie going rogue, or ads for major brands and the UK government running on hate-speech-filled far-right videos. “We’ve come to them and said, ‘Look at all this hate content,'” says Heidi Beirich, director of the Southern Poverty Law Center’s Intelligence Project. “This restricted mode is a screwup, and they have a gazillion bugs to work out, but their bigger screwup was not trying something like this sooner.”
Theoretically—or in a more technologically advanced world—restricted mode would be a great solution. YouTube has been extremely vague about how it works, but since the company’s temporary fix is having humans moderate the moderation, it’s safe to assume this was a case of a website turning itself over to its own code, à la Facebook’s Megyn Kelly fake-news drama.
But code doesn’t exist in a vacuum; it’s a product of company culture. “It seems like a not-very-diverse team was making the decisions,” says Jen Golbeck, a computer scientist at the University of Maryland. “Somebody might have said, ‘We’re going to block anything with LGBTQ stuff in the title.'” But equally likely, Golbeck says, is that the algorithm takes its cues from the YouTube community: if enough people flag a video as inappropriate, the algorithm might take their word for it.
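To make Golbeck’s two scenarios concrete, here’s a minimal, purely hypothetical sketch of such a filter. Nothing about it reflects YouTube’s actual system; the blocklist, the flag threshold, and the `Video` fields are all illustrative assumptions. But it shows how both paths bake human judgment, whether an engineer’s keyword list or a crowd’s reports, directly into code.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    flag_count: int   # user reports; a hypothetical field
    view_count: int

# Scenario 1: an engineer's blocklist. A non-diverse team might drop a
# whole community's vocabulary in here without seeing the harm.
BLOCKED_KEYWORDS = {"lgbtq", "gay", "transgender"}

# Scenario 2: trusting the crowd. Assumed threshold: one flag per 1,000 views.
FLAG_RATE_THRESHOLD = 0.001

def is_restricted(video: Video) -> bool:
    """Naive restricted-mode filter illustrating both failure modes."""
    # Failure mode 1: identity terms in a title get treated as "mature".
    title = video.title.lower()
    if any(keyword in title for keyword in BLOCKED_KEYWORDS):
        return True
    # Failure mode 2: raw flag counts let an organized reporting
    # campaign bury videos the wider community finds harmless.
    if video.view_count and video.flag_count / video.view_count > FLAG_RATE_THRESHOLD:
        return True
    return False

# A G-rated coming-out video gets restricted on its title alone.
print(is_restricted(Video("My LGBTQ story", flag_count=0, view_count=50_000)))  # True
```

Either way, the bias isn’t in the math; it’s in the inputs humans chose.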
“That’s why it’s a good idea to have teams that are aware of the social issues, and not just code,” Golbeck says. Remember when Twitter users turned Microsoft’s AI-powered chatbot Tay into a Nazi in less than a day? That might not have happened if developers had anticipated the possibility and programmed in safeguards. YouTube’s restricted mode team could have done the same, if they’d considered how controversial LGBTQ issues are. This kerfuffle is evidence of a blind spot at best, and prejudice at worst.
YouTube isn’t new to the internet; it obviously employs people whose job is to consider how the site’s community standards align with social issues. Social media companies have highly trained teams constantly staring into the darkest parts of the internet abyss and keeping what they find off your newsfeed. But once you get past unanimously reviled things, like beheadings or child porn, what “objectionable” means becomes much dicier. “Judging something against a set of global standards is difficult, and subjective,” says Kate Klonick, a lawyer at Yale who studies private platform moderation of online speech. “What seems violent to you and me might not to someone living in Iraq.”
Since YouTube and Facebook and Twitter are essentially the same no matter where you live, vastly different cultures can clash over the exact same image. The problems with YouTube’s restricted mode mirror Facebook’s 2014 gay-kissing controversy almost exactly: after anti-LGBTQ groups reported photos of gay couples kissing as inappropriate, Facebook took the photos down, then apologized and reinstated them after public blowback. In other words, good luck if you think you can crowdsource a definitive opinion on what should be censored. “The people who developed these systems for YouTube and Facebook know they’ll always be offending someone,” Klonick says. “They just hope that circle of offense gets smaller and smaller.”
And despite their best intentions, Facebook and YouTube will never be able to write the perfect, bulletproof algorithm. “We have instant speech, instant communication, and instant cultural change,” Klonick says. “Memes and new norms develop in seconds and spread across the world.” When a cartoon frog can (d)evolve from innocent meme to bona fide hate symbol seemingly overnight, technology is unlikely to keep pace with online culture.
Still, that’s no reason to excuse lapses like YouTube’s—not today, and not tomorrow. “Doing this perfectly is hard, but doing better is not,” Golbeck says. Tech companies still have to try to make the internet a safer, nicer place. Even if it’s an uphill battle.