
YouTube and Hate Speech: Is the Media Platform Taking Responsibility?


The views, thoughts, and opinions expressed in the text belong solely to the author, and not necessarily to the author’s employer, organisation, committee or other group or individual.

Note: this article contains discussions of homophobia, racism and other forms of bigotry and hate speech.

Ladies, gentlemen, and variations thereupon, we are well into the 21st century. Yet 2019 remains a year in which a massive media platform like YouTube can still justify tolerating channels that promote white supremacist speech until relentless backlash forces its hand, and can keep an outright homophobe’s channel on its platform despite ample evidence of harassment.

If that sounds like déjà vu to you, that’s because it is. We’ve all been here before. YouTube has been at the centre of many controversies in the past, most of them to do with tolerating the presence of accounts on their platform that spewed hate speech, bigotry, or otherwise attempted to invalidate the existence of groups of people based on certain characteristics.

It has been an ongoing theme for YouTube to ignore minority communities while simultaneously running corporate marketing campaigns that appear to cater to them. In fact, in light of some of these incidents, a former Google employee petitioned to ban the tech company from the 2019 San Francisco Pride Parade – a historic first, and a massive talking point for the online community.

But what, really, passes as support for targeted communities on the Internet? And can YouTube’s actions this past June really be justified?

Buckle in, this might get bumpy.



Failing Queer Creators

Image: Carlos Maza presenting a segment on Vox. Maza, an openly gay Cuban-American presenter, has been harassed by Crowder for years.

Vox presenter Carlos Maza is an openly gay Cuban-American man with a prominent presence on Vox’s YouTube channel. On May 30th, Maza released a video compilation of Steven Crowder, host of the show “Louder with Crowder”, openly calling him discriminatory, bigoted names. The video sent shockwaves through Twitter and the queer community, with slurs like “lispy queer” and “sprite” reverberating around the internet.

According to Maza, these taunts had been going on for years, as the compilation shows in great detail. The impact on his life went beyond having to endure hate speech, which is already a terrible thing to go through. He was doxed – his personal details were found and leaked to Crowder’s followers, hundreds of whom proceeded to bombard his phone with the message “Debate Steven Crowder”.

The thing is, after Maza brought these actions and words to YouTube’s attention, the company refused to take down Crowder’s channel, saying that while “hurtful”, Crowder’s language and actions did not justify wiping his channel – a ruling that contradicts YouTube’s own policies on harassment. Doesn’t that directly defy the concept of creating a “Free Space for People to Belong”?

To anyone unfamiliar with the history of YouTube, this may seem like an isolated incident, but unfortunately there is a lot of evidence to the contrary.


History of Hate Speech on YouTube

Image: a black and white photograph of a cardboard sign protesting hate.

First things first: it’s important to acknowledge that being able to express one’s opinion is a basic human right, and that media platforms are simply places where people congregate to express their opinions in ways that bring like-minded people together. A website like YouTube will therefore host a large variety of people with an equally large variety of opinions.

Suffice it to say, this can be difficult to handle, as many people will insist that their opinions are not harmful or hurtful in any way. However, it can be argued that there should be baseline rules in place for the protection of people online – minority groups especially – and the problem is that YouTube’s policies around hate speech are not enforced consistently. Too often, enforcement borders on subjective ruling.


Extreme Content & The Amplification Algorithm

YouTube is a land of many extremes. Amplification seems to play an integral role in its recommendation algorithm, with users seemingly directed towards content that is a more intense, more concentrated version of their watch history.

The New York Times investigated the YouTube recommendation algorithm across a variety of topics and found an interesting trend: recommendations tend more and more toward extreme viewpoints. For example, as the Wall Street Journal discovered in 2018, “if you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos”.

This is a demonstration of how the YouTube recommendation algorithm works its magic – it pushes what it deems “engaging”, based on factors such as “view velocity”, the total number of views a video has, and other signals it considers relevant.
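To make that concrete, here is a minimal sketch, in Python, of what an engagement-driven ranker might look like. The signals and weights below (view velocity, total views, watch time) are illustrative assumptions based on the description above, not YouTube’s actual formula – the point is simply that a ranker optimising for engagement has no notion of whether content is measured or extreme.

```python
import math
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    total_views: int
    views_last_hour: int      # a stand-in for "view velocity"
    avg_watch_minutes: float  # a stand-in for watch time

def engagement_score(video: Video,
                     w_velocity: float = 0.6,
                     w_views: float = 0.2,
                     w_watch: float = 0.2) -> float:
    # Toy "engagement" score: a weighted blend of velocity, reach, and watch time.
    # The weights are hypothetical; they only illustrate the incentive structure.
    return (w_velocity * math.log1p(video.views_last_hour)
            + w_views * math.log1p(video.total_views)
            + w_watch * video.avg_watch_minutes)

def recommend(candidates: list[Video], top_n: int = 3) -> list[Video]:
    # Rank purely by engagement, with no check on what the content actually says.
    return sorted(candidates, key=engagement_score, reverse=True)[:top_n]

if __name__ == "__main__":
    candidates = [
        Video("Measured flu-vaccine explainer", 500_000, 120, 4.0),
        Video("Outrage-bait conspiracy hot take", 80_000, 2_400, 9.5),
    ]
    for video in recommend(candidates):
        print(video.title)  # the faster-spreading, longer-watched video ranks first
```

Under this kind of scoring, the outrage-bait video ranks above the measured explainer despite being the less reliable one – the amplification problem in miniature.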

YouTube CEO Susan Wojcicki has commented on the situation, once again referencing YouTube’s policies on supremacist or discriminatory content. She also spoke about a “borderline category”, a term for videos that come close to, but don’t quite cross, the line that would get them booted from the site. These borderline videos are, according to Wojcicki, recommended to viewers far less often, and are oftentimes demonetised.

The problem with that approach is that alt-right channels like Crowder’s – and others like it – can still monetise their views by linking to the merchandise they produce, marketing affiliate content, and encouraging viewers to fund them through platforms like Patreon. YouTube is the perfect mechanism for people like him to earn money from shocking content, because, at the end of the day, it is an entertainment platform.

“A policy that attempts to ban hateful content is only as effective as the mechanisms implemented to enforce such rules. If executed poorly, this policy could contribute to even more harm for black communities and other communities targeted by white supremacist ideologies.”

Rashad Robinson, President and CEO of Color of Change

Queer Content Takes a Hit: #ProudToBe Restricted

Let’s jump back a few years, to when two major events changed the course of YouTube’s history.

  1. The #ProudToBe campaign was launched, aimed at showing open support for the LGBTQ+ community and making space for its creators on the platform.
  2. “YouTube Restricted Mode” was updated. Restricted Mode, a feature available since 2010, lets viewers filter out content containing “sensitive topics or material”. The company updated the Restricted Mode algorithm in 2016, causing several notable malfunctions.

These two events formed a perfect storm, in which a number of videos from queer creators got caught up in the updated filter, and people who used restricted mode couldn’t see videos from LGBTQ+ creators anymore. This sent a message to YouTube channel owners – as well as viewers on YouTube – that LGBTQ+ content was “sensitive” material, and thus shouldn’t be seen by children, which just isn’t the case.

YouTube attempted to rectify the issue in 2017, finally allowing creators to once again discuss times in which they were discriminated against, and to open up the dialogue around queer-phobia, without their videos subsequently being restricted. There are still instances of queer creators finding their videos blocked by content filters when viewers enable Restricted Mode; however, these appear to be less widespread.

The company also apologised for its role in what it termed a “misunderstanding”, but the episode does demonstrate that its algorithms flagged a significant amount of LGBTQ+ content with little to no basis, and continued to do so for some time despite the change to its policies.


Ad Companies & The Saga of the Adpocalypse

The “Adpocalypse” was a phenomenon in which advertisers pulled their support from YouTube following massive controversy, impacting creators’ ability to monetise their content. After all, not many ad companies want their brands appearing alongside disturbing or damaging content.

The YouTube Adpocalypse has, arguably, occurred three times: at the end of 2017, in February 2019, and most recently in June 2019.

In 2017, following a racist video that had been allowed to gain traction, as well as various other upsetting content (such as Logan Paul’s video in which he literally shows his viewers a hanging corpse), major advertisers pulled their money and advertisements from YouTube’s ad platform. YouTube’s response was to update its algorithm and policies to only feature ads on channels considered “family-friendly”, which ultimately left many of the platform’s creators high and dry. This is what the YouTube creator community dubbed the “Adpocalypse”.

Another Adpocalypse occurred in February 2019, when YouTuber MattsWhatItIs posted a video suggesting the existence of a “soft-core paedophilia ring” on the platform. In response, YouTube scoured its channels for potentially illegal content and reported offenders to the authorities. Major brands once again pulled their ads from YouTube, especially children-oriented ones like Disney and Hasbro.

YouTube content creators have hated these Adpocalypse updates: many have seen their channel revenue decline by more than 90%, putting their livelihoods in jeopardy, despite producing content that would be considered completely inoffensive.



The Vox Adpocalypse & YouTube’s Response

Image: Susan Wojcicki, YouTube CEO, speaking about the changes to YouTube’s hate speech policy.

Now that you have some context, let’s bring it back to the present day. Following the great “Maza vs. Crowder” war of June 2019, which many have termed the Vox Adpocalypse, ad companies have once again become sensitive about issues surrounding YouTube’s algorithms.

Not only that, light was shed on a glaring gap in YouTube’s policies: namely, the need for a more robust policy against hate speech, especially around racism, homophobia, and other forms of bigotry.

Pressure from both YouTube creators and advertisers shook multiple existing problems out of YouTube’s woodwork. The prevalence of white supremacist and conspiracist videos became the subject of great debate among users of the platform, resulting in YouTube’s CEO stepping in with another amendment to the company’s policies.


The Great Ban on Supremacist Speech

On June 5th 2019, YouTube announced that they would be taking a more active stance against supremacist speech of all forms. Their policies have since been updated to ban videos “alleging that a group is superior in order to justify discrimination, segregation, or exclusion”.

This includes all white supremacist and Nazi ideologies, as well as videos that spread misinformation about well-documented events such as the Holocaust and the Sandy Hook school shooting. The update came about as a result of massive backlash from the community, urging YouTube to change its classifications of what constitutes an offence.

The policy went on to remove thousands of channels – anyone who violated those terms got the chop. This was fantastic news! But – surprise, surprise – there are several problems with its implementation.


Is the Ban Algorithm Working? Anti-Racism Takes the Hit

Much like the hiccups along the road to “YouTube Restricted Mode”, the tech company’s expanded hate speech policies have now impacted channels that speak out about instances of racism. Black creators and academic platforms on the site started noticing that their videos were being removed by YouTube’s algorithm because they included terms like “racism” and “bigotry”.

One People’s Project, for example, is a group dedicated to monitoring and reporting on the alt-right and white supremacist communities, and to communicating their impact to a largely minority audience. An informational video on their channel was taken down because its title and content discussed racism, despite perpetuating no harmful stigma.


“It indicates that [YouTube has] not refined well enough the difference between someone who is exploring issues of racism and hatred and someone who’s promoting it.”

Heidi Beirich, Director of the SPLC’s Intelligence Project.
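Beirich’s point can be illustrated with a toy example. The sketch below is a hypothetical, deliberately naive keyword filter – not YouTube’s actual classifier – that flags any title containing a “hate” term, so a video documenting racism is treated exactly the same as one promoting it, which is the failure mode the affected creators describe.

```python
# A deliberately naive moderation filter: it looks only at keywords,
# so it cannot distinguish discussing racism from promoting it.
# Purely illustrative; not YouTube's actual system.
FLAGGED_TERMS = {"racism", "white supremacy", "bigotry"}

def naive_flag(title: str) -> bool:
    lowered = title.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

titles = [
    "Why white supremacy is right",                         # promotion: should be removed
    "Documenting racism: how hate groups recruit online",   # education: should stay up
]

for title in titles:
    print(naive_flag(title), "-", title)  # both print True: the filter can't tell them apart
```

A real classifier is of course more sophisticated than keyword matching, but the example shows why enforcement that leans on surface features keeps sweeping up the very channels that counter hate speech.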

Meanwhile, popular conservative YouTuber Steven Crowder (mentioned above) posted a video on June 19th of this year entitled “Hate Speech Isn’t Real: Change My Mind”. The video currently has almost two million views and has not been taken down.


Conclusion: Corporate Responsibility is Not Optional

The main takeaway here is that, despite their clearly written and cited policies, YouTube has repeatedly demonstrated a failure to support communities on their platform that need a voice. In fact, there have been several instances over the years in which they have contributed to taking that voice away.

YouTube needs to use its power in ways that go beyond visibility campaigns – it needs to actually protect the minorities who use its platform. Public backlash should not be the thing driving media platforms to make positive change; the big players should be looking out for the little guys from the start.


Courtney-Dale Nel

Courtney is a Content Writer on the Pure SEO team. They have a Bachelor in Behavioural Psychology, way too much experience working with pigeons, and a fondness for nachos that rivals most marriages.
