Transparency Report

Featured policies

This section provides more details about some of the policy areas where our automated flagging systems are instrumental in helping detect violative content. Once potentially problematic content is flagged by our automated systems, human reviewers verify whether the content does indeed violate our policies and take appropriate action. These decisions continuously train and improve our automated systems for better coverage in the future.
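The feedback loop described above (automated flagging, human verification, and retraining on reviewer decisions) can be pictured with a minimal sketch in Python. Every name and threshold in it is a hypothetical stand-in chosen for illustration, not a description of YouTube's actual systems.

```python
# Illustrative sketch of the flag -> human review -> retrain loop described
# above. Every name here (Video, classifier, FLAG_THRESHOLD) is a
# hypothetical stand-in; YouTube's real systems are not public.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Video:
    video_id: str
    features: dict  # assumed inputs, e.g. transcript or thumbnail signals

FLAG_THRESHOLD = 0.7  # assumed score above which content is sent to review

def flag_for_review(video: Video, classifier, review_queue: List[Video]) -> None:
    """Automated step: flag potentially violative content for human review."""
    score = classifier.predict(video.features)  # estimated violation likelihood
    if score >= FLAG_THRESHOLD:
        review_queue.append(video)  # a human, not the model, makes the final call

def record_decision(video: Video, violates: bool,
                    training_set: List[Tuple[dict, bool]]) -> None:
    """Human step: the reviewer's verdict becomes a labeled example used to
    retrain the classifier, improving future coverage."""
    training_set.append((video.features, violates))
```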

Featured Policies: Hate Speech

Hate speech is not allowed on YouTube. We remove content promoting violence or hatred against individuals or groups based on any of the following attributes: age, caste, disability, ethnicity, gender identity, nationality, race, immigration status, religion, sex/gender, sexual orientation, victims of a major violent event and their family members, and veteran status.

This means we don’t allow content that dehumanizes individuals or groups with these attributes, claims they are physically or mentally inferior, or praises or glorifies violence against them. We also don’t allow use of stereotypes that incite or promote hatred based on these attributes, or racial, ethnic, religious, or other slurs where the primary purpose is to promote hatred. Our policy prohibits content that alleges the superiority of a group over those with any of the attributes noted above to justify violence, discrimination, segregation, or exclusion. We also do not allow content that denies that a well-documented, violent event took place. More details about content that violates our guidelines can be found on our hate speech policy page.

If content directed against an individual is not covered by our hate speech policy, it may instead be covered by our policies against harassment and violence, while content that praises or glorifies terrorist or criminal figures or organizations is covered by our policies against violent criminal and terrorist organizations. Reviewers evaluate flagged content against all of our Community Guidelines and policies. We limit the numbers in this section to the content that is removed under our hate speech policy.

Community Guidelines and enforcement details

How YouTube evolves and enforces its policies: Hate Speech

We are committed to our responsibility to protect the YouTube community from harmful content. One of the most complex and constantly evolving areas we deal with is hate speech. We systematically review and re-review all our policies to make sure we are drawing the line in the right place, often consulting with subject matter experts for insight on emerging trends. For our hate speech policy, we work with experts in subjects like violent extremism, supremacism, civil rights, and free speech from across the political spectrum.

As a result of this evaluation, in June 2019 we announced an update to our hate speech policy to specifically prohibit videos alleging that a group is superior in order to justify discrimination, segregation, or exclusion based on attributes like age, gender, race, caste, religion, sexual orientation, or veteran status. We also announced that we would remove content denying that well-documented violent events took place.

Hate speech is a complex policy area to enforce at scale, as decisions require nuanced understanding of local languages and contexts. To help us consistently enforce our policy, we have expanded our review team’s linguistic and subject matter expertise. We’re also deploying machine learning to better detect potentially hateful content to send for human review, applying lessons from our enforcement against other types of content, like violent extremism. Sometimes we make mistakes, and we have an appeals process for creators who believe their content was incorrectly removed. We constantly evaluate our policies and enforcement guidelines and will continue to consult with experts and the community and make changes as needed.

In addition to removing content that violates our policies, we work to reduce recommendations of content that comes close to violating our guidelines. We also have long-standing advertiser-friendly guidelines that prohibit ads from running on videos that include hateful content. Channels that repeatedly come close to violating our hate speech policies are suspended from the YouTube Partner Program, meaning they can’t run ads on their channel or use other monetization features, like Super Chat.
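As a rough illustration, repeat-offense tracking of the kind described above might be sketched as follows. The threshold is an assumption chosen purely for illustration and does not reflect actual YouTube Partner Program rules.

```python
# Hypothetical sketch of repeat-offense tracking for monetization. The
# threshold below is an assumption for illustration only; it does not
# reflect actual YouTube Partner Program rules.

from collections import defaultdict

BORDERLINE_THRESHOLD = 3  # assumed count of borderline videos before suspension

borderline_counts = defaultdict(int)
suspended_channels = set()

def record_borderline_video(channel_id: str) -> None:
    """Count a video that comes close to violating hate speech policy."""
    borderline_counts[channel_id] += 1
    if borderline_counts[channel_id] >= BORDERLINE_THRESHOLD:
        # Suspension disables ads, Super Chat, and other monetization features.
        suspended_channels.add(channel_id)

def can_monetize(channel_id: str) -> bool:
    return channel_id not in suspended_channels
```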

Educational, documentary, scientific, and artistic content: Hate Speech

YouTube is a platform for free expression, and balancing that commitment with the need to protect our community can be delicate. In enforcing our hate speech policy, we consider the purpose of the video. We may allow content that includes discussion of hate speech if the purpose is educational, documentary, scientific, or artistic in nature. If users are posting content related to hate speech for this purpose, we encourage them to provide enough information for viewers to understand the context, such as through an introduction, voiceover commentary, or text overlays, as well as through a clear title and description. Providing documentary or educational context helps viewers, and our reviewers, understand why potentially disturbing content sometimes remains live on YouTube.

Priority Flaggers: Hate Speech

Across our policy areas, we continue to invest in the network of over 300 academics, government partners, and NGOs who bring valuable expertise to our enforcement systems, including through our Priority Flagger program.

Participants in the Priority Flagger program receive training in enforcing YouTube’s Community Guidelines, and because their flags have a higher action rate than those of the average user, we prioritize them for review. Content flagged by Priority Flaggers is subject to the same policies as content flagged by any other user and is reviewed by our teams, who are trained to decide whether content violates our Community Guidelines and should be removed.
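The prioritization described above can be pictured as a simple priority queue. This is a minimal sketch assuming only two flagger tiers; the names and ordering rules are illustrative, not YouTube’s actual review infrastructure.

```python
# Hypothetical sketch of flag prioritization. Priority Flagger reports are
# reviewed sooner because of their historically higher action rate, but they
# are held to the same Community Guidelines as any other flag.

import heapq
import itertools

PRIORITY = {"priority_flagger": 0, "user": 1}  # lower rank = reviewed sooner
_order = itertools.count()  # tie-breaker keeps same-rank flags in FIFO order

review_queue: list = []

def submit_flag(video_id: str, flagger_type: str, reason: str) -> None:
    """Enqueue a flag; Priority Flagger reports jump ahead of user flags."""
    rank = PRIORITY.get(flagger_type, 1)
    heapq.heappush(review_queue, (rank, next(_order), video_id, reason))

def next_for_review():
    """Pop the highest-priority flag for a trained human reviewer."""
    if review_queue:
        _, _, video_id, reason = heapq.heappop(review_queue)
        return video_id, reason
    return None

# A later Priority Flagger report is reviewed before an earlier user flag.
submit_flag("vid_user", "user", "Hateful or abusive")
submit_flag("vid_pf", "priority_flagger", "Hateful or abusive")
assert next_for_review() == ("vid_pf", "Hateful or abusive")
```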

Flagged video process examples: Hate Speech

These are examples of videos that were flagged as potentially violating our Community Guidelines. These examples provide a glimpse of the range of flagged content that we receive and are not comprehensive.

Flagging reason: Hateful or abusive
Flagger type: Priority Flagger
Video description: A video depicting a minor whose face was superimposed on another person’s body, with audio implying the minor is homosexual.
Outcome: Video violates harassment and cyberbullying policies prohibiting content with the intent to shame, deceive, or insult a minor, and the channel was removed.

Flagging reason: Hateful or abusive
Flagger type: User
Video description: A video claiming that the March 2019 Christchurch, New Zealand mosque shootings were fake.
Outcome: Video violates hate speech policy prohibiting content that denies that well-documented violent events took place. Video was removed.

Flagging reason: Hateful or abusive
Flagger type: User
Video description: A video containing a song with lyrics promoting violence toward a racial group.
Outcome: Video violates hate speech policy prohibiting content inciting violence based on race and was removed.

Flagging reason: Hateful or abusive
Flagger type: User
Video description: A video showing a dogfight, with animals in distress.
Outcome: Content violates animal abuse policy under violent or repulsive content and was removed.

Flagging reason: Hateful or abusive
Flagger type: User
Video description: A video by a medical organization discussing Lyme disease symptoms and diagnoses.
Outcome: Content did not violate policy. No action taken.

YouTube Community Guidelines enforcement

Viewers and Creators around the world use YouTube to express their ideas and opinions. YouTube’s approach to responsibility involves four Rs: Remove violative content, Raise authoritative voices, Reduce recommendations of borderline content, and Reward trusted creators.

Learn more at How YouTube Works