How YouTube evolves and enforces its policies: hate speech
We are committed to our responsibility to protect the YouTube community from harmful content.
One of the most complex and constantly evolving areas that we deal with is hate speech. We
systematically review and re-review all of our policies to make sure that we are drawing the line in
the right place, often consulting with subject matter experts for insight on emerging trends.
For our hate speech policy, we work with experts in subjects like violent extremism,
supremacism, civil rights and free speech from across the political spectrum.
As a result of this evaluation, in June 2019 we announced an update to our hate speech policy to specifically prohibit videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on attributes like age, gender, race, caste, religion, sexual orientation or veteran status. We also announced that we would remove content denying that well-documented violent events took place.
Hate speech is a complex policy area to enforce at scale, as decisions require nuanced
understanding of local languages and contexts. To help us consistently enforce our policy, we
have expanded our review team’s linguistic and subject matter expertise. We’re also deploying
machine learning to better detect potentially hateful content to send for human review,
applying lessons from our enforcement against other types of content, like violent extremism.
Sometimes we make mistakes, and we have an appeals process for creators who believe that their content was incorrectly removed. We constantly evaluate our
policies and enforcement guidelines and will continue to consult with experts and the
community and make changes as needed.
In addition to removing content that violates our policies, we work to reduce recommendations of content that comes close to violating our guidelines. We also have long-standing advertiser-friendly guidelines
that prohibit ads from running on videos that include hateful content. Channels that
repeatedly come close to violating our hate speech policies are suspended from the YouTube
Partner programme, meaning that they can’t run ads on their channel or use other monetisation
features, like Super Chat.