Transparency Report

Featured policies

This section provides more detail about some of the policy areas where our automated flagging systems are instrumental in helping detect violative content. Once potentially problematic content is flagged by our automated systems, human reviewers verify whether it does indeed violate our policies and take appropriate action. These decisions, in turn, continuously train and reinforce our machines for better coverage in the future.

Featured Policies: Violent Extremism

Content that violates our policies against violent extremism includes material produced by government-listed foreign terrorist organizations. We do not permit terrorist organizations to use YouTube for any purpose, including recruitment. YouTube also strictly prohibits content that promotes terrorism, such as content that glorifies terrorist acts or incites violence. We make allowances for content shared in an educational, documentary, scientific, or artistic context.

Content produced by violent extremist groups that are not government-listed foreign terrorist organizations is often covered by our policies against hateful content or violent or graphic content, including content that is primarily intended to be shocking, sensational, or gratuitous. Reviewers evaluate flagged content against all of our Community Guidelines and policies. The numbers in this section are limited to content removed under our violent extremism policy.

Total videos removed
264,640

Community Guidelines and enforcement details

How YouTube uses technology to detect violative content: Violent Extremism

YouTube has developed automated systems that aid in the detection of content that may violate our policies. Once potentially problematic content is flagged by our automated systems, human review verifies whether it does indeed violate our policies. If it does, the content is removed and is used to train our machines for better coverage in the future. The account that posted the content generally receives a strike, and multiple strikes lead to account termination. For the automated systems that detect extremist content, the more than three million videos our teams have manually reviewed provide large volumes of training examples, which help improve the machine learning flagging technology.
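To make this flow concrete, the sketch below models the flag-review-train loop described above. It is an illustration only, not YouTube's actual implementation: the classifier, the reviewer stand-in, the flagging threshold, and the three-strike limit are all assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical strike limit: the report says only that "multiple
# strikes" lead to termination, so 3 is an assumption for this sketch.
STRIKE_LIMIT = 3

@dataclass
class Account:
    name: str
    strikes: int = 0
    terminated: bool = False

@dataclass
class Video:
    account: Account
    content: str

training_set: list[tuple[str, bool]] = []  # grows with each human decision

def classifier_score(video: Video) -> float:
    """Stand-in for the machine learning model that flags content."""
    return 0.9 if "extremist" in video.content else 0.1

def human_review(video: Video) -> bool:
    """Stand-in for a trained reviewer's policy decision."""
    return "extremist" in video.content

def moderate(video: Video, flag_threshold: float = 0.5) -> str:
    # 1. Automated systems flag potentially problematic content.
    if classifier_score(video) < flag_threshold:
        return "not flagged"
    # 2. Human review verifies whether it violates policy.
    violates = human_review(video)
    # 3. The reviewed example feeds back into training the flagging model.
    training_set.append((video.content, violates))
    if not violates:
        return "kept"
    # 4. Violative content is removed and the account receives a strike;
    #    repeated strikes terminate the account.
    video.account.strikes += 1
    if video.account.strikes >= STRIKE_LIMIT:
        video.account.terminated = True
        return "removed; account terminated"
    return "removed; strike issued"
```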

Machine learning now helps us take down extremist content before it has been widely viewed. Our significant investment in fighting this type of content is having an impact: between April and June 2022, approximately 95% of the videos that were removed for violating our Violent Extremism policy were first automatically flagged.

Hash sharing: Violent Extremism

YouTube utilizes technology to prevent re-uploads of known violative content before that content is available to the public. We have long used this technology to prevent the spread of child sexual abuse imagery on the platform. In 2016, we created a hash-sharing database with industry partners where we share hashes (or “digital fingerprints”) of terrorist content to stop its spread. The shared database currently contains more than 400,000 unique hashes, which participating platforms can use to catch re-uploads that, to the human eye, are near-identical to known violative content.
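As an illustration of how such “digital fingerprints” can match re-uploads that look the same to a person, here is a minimal perceptual-hashing sketch using an average hash over an 8x8 grayscale frame. The specific hash function, the 64-bit size, and the distance threshold are assumptions for the example, not the algorithm the shared database actually uses.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Perceptual hash of an 8x8 grayscale frame: each bit records
    whether a pixel is brighter than the frame's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint for an 8x8 frame

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_known_content(frame_hash: int, known_hashes: set[int],
                          max_distance: int = 5) -> bool:
    """A small Hamming distance means two frames look near-identical
    to the human eye even when the files differ byte-for-byte."""
    return any(hamming(frame_hash, h) <= max_distance for h in known_hashes)
```

Unlike a cryptographic hash, a perceptual hash changes only slightly when the image is re-encoded or lightly edited, which is what makes re-upload detection possible.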

YouTube's vast media library and automated detection systems make us a large contributor of hashes to the hash-sharing database. In accordance with the sharing criteria established by the Global Internet Forum to Counter Terrorism, YouTube contributed over 45,000 unique hashes to the hash-sharing database in 2023.

Once content has been hashed, other platforms can use those hashes to help detect related content on their platforms and assess it against their own content policies. Since 2017, the number of companies contributing to and benefiting from this database has grown from 4 to 13. This organized effort is now formally operated by the Global Internet Forum to Counter Terrorism (GIFCT).
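Conceptually, the shared database behaves like the sketch below: one member contributes a fingerprint of known content, any member can check uploads against the pooled set, and a match is then assessed against that platform's own policies. All names and values here are hypothetical.

```python
def hamming(a: int, b: int) -> int:
    """Bit distance between two 64-bit fingerprints (as in the sketch above)."""
    return bin(a ^ b).count("1")

class SharedHashDatabase:
    """Toy model of an industry hash-sharing database: members contribute
    fingerprints of known terrorist content; every member can query them."""

    def __init__(self) -> None:
        self._hashes: dict[int, str] = {}  # fingerprint -> contributing member

    def contribute(self, content_hash: int, member: str) -> None:
        self._hashes[content_hash] = member

    def lookup(self, upload_hash: int, max_distance: int = 5) -> bool:
        return any(hamming(upload_hash, h) <= max_distance
                   for h in self._hashes)

# A match only says the upload resembles known content; each platform
# still assesses it against its own content policies.
db = SharedHashDatabase()
db.contribute(0x9F3A5C210B7E44D8, member="platform_a")
if db.lookup(0x9F3A5C210B7E44D9):  # one bit away from the known hash
    print("route to review under this platform's own policies")
```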

Global Internet Forum to Counter Terrorism: Violent Extremism

In 2017, YouTube, Facebook, Microsoft, and Twitter founded the Global Internet Forum to Counter Terrorism (GIFCT) as a group of companies dedicated to disrupting terrorist abuse of members’ digital platforms. Although our companies had been sharing best practices around counterterrorism for several years, GIFCT provided a more formal structure to accelerate and strengthen this work and present a united front against the online dissemination of terrorist content. In collaboration with the Tech Against Terrorism initiative, GIFCT hosts global workshops engaging tech companies, non-governmental organizations, and international government bodies.

With varied members and industry partners using the hash-sharing database, there needed to be a baseline consensus for what would constitute terrorist and extremist content for the purposes of reviewing and sharing hashes of content. As noted in GIFCT's 2021 Annual Transparency Report, “the original scope of the hash-sharing database is limited to content related to organizations on the United Nations Security Council's Consolidated Sanctions List.”

YouTube and GIFCT's other founding members signed on to the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online. Building on the Christchurch Call, GIFCT developed a new Content Incident Protocol (CIP) for GIFCT member companies to respond efficiently to perpetrator-created, live-streamed content after a real-world, violent event. This protocol has been tested and proven effective, for example following the attack on a synagogue in Halle, Germany (October 2019); a shooting in Glendale, Arizona, US (May 2020); a shooting in Buffalo, New York, US (May 2022); and a shooting in Memphis, Tennessee, US (September 2022). Since creating the CIP, GIFCT has further developed its Incident Response Framework to include a Content Incident tier for responding to non-live-streamed, perpetrator-produced video and images depicting a real-world event. This tier was activated for the first time following an attack in Udaipur, Rajasthan, India (July 2022).

GIFCT has evolved into a standalone organization with an independent Executive Director and staff. GIFCT's structure also includes an Independent Advisory Committee composed of government representatives and non-governmental members, including advocacy groups, human rights specialists, researchers, and technical experts. Within the institution's new governance framework, YouTube holds a position on GIFCT's Executive Operating Board.

Educational, documentary, scientific, and artistic content: Violent Extremism

If users are posting content related to terrorism for an educational, documentary, scientific, or artistic purpose, we encourage them to provide enough information for viewers to understand the context. It’s not okay to post violent or gory content that’s primarily intended to be shocking, sensational, or gratuitous. If a video is particularly graphic or disturbing, it should be balanced with additional context and information to help viewers understand what they are seeing, such as an introduction, voiceover commentary, or text overlays, as well as a clear title and description. Providing documentary or educational context can help the viewer, and our reviewers, understand why potentially disturbing content sometimes remains live on YouTube. For instance, a citizen journalist who captures footage of protesters being beaten and uploads it with relevant information (date, location, context, etc.) would likely be allowed. However, posting the same footage without contextual or educational information may be considered gratuitous and may be removed from the site. Graphic or controversial footage may also be subject to age restrictions or a warning screen.

Priority Flaggers: Violent Extremism

Across our policy areas, we continue to invest in a network of over 300 government partners and NGOs who bring valuable expertise to our enforcement systems, including through our Priority Flagger program. Participants in the Priority Flagger program receive training in enforcing YouTube’s Community Guidelines, and because their flags have a higher action rate than those of the average user, we prioritize them for review. Participants also have a direct line of communication with our Trust & Safety teams for quicker issue resolution. Content flagged by Priority Flaggers is subject to the same policies as content flagged by any other user and is reviewed by teams trained to decide whether content violates our Community Guidelines and should be removed.
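In effect, this describes a weighted review queue: flags from Priority Flaggers are surfaced to reviewers sooner because they are more likely to be actionable, while the policy check itself is identical for every flag. A minimal sketch, with the priority values and names assumed:

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

# Assumed priorities; the report says only that Priority Flagger reports
# are reviewed sooner because their flags have a higher action rate.
PRIORITY = {"priority_flagger": 0, "user": 1}

_order = count()  # tie-breaker so equal-priority flags stay first-in, first-out

@dataclass(order=True)
class Flag:
    rank: int
    seq: int
    video_id: str = field(compare=False)
    reason: str = field(compare=False)

review_queue: list[Flag] = []

def submit_flag(video_id: str, reason: str, flagger_type: str) -> None:
    rank = PRIORITY.get(flagger_type, 1)
    heapq.heappush(review_queue, Flag(rank, next(_order), video_id, reason))

def next_for_review() -> Flag:
    # The same Community Guidelines apply regardless of who flagged
    # the video; only the review order changes.
    return heapq.heappop(review_queue)

submit_flag("vid_123", "violent extremism", "user")
submit_flag("vid_456", "violent extremism", "priority_flagger")
assert next_for_review().video_id == "vid_456"  # reviewed first
```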

Flagged video process examples: Violent Extremism

These are examples of videos that were flagged as potentially violating our Community Guidelines. These examples provide a glimpse of the range of flagged content that we receive and are not comprehensive.

| Flagging reason | Flagger type | Video description | Outcome |
| --- | --- | --- | --- |
| Sexual | Priority Flagger | A video depicting a minor engaging in a sexual act. | Video violates child safety policies prohibiting content that includes sexualisation of minors; the channel was removed. |
| Child abuse | Priority Flagger | A video depicting a minor in non-sexual activity, with a video title sexualising the minor. | Video violates child safety policies prohibiting content that includes sexualisation of minors; the channel was removed. |
| Hateful or abusive | Priority Flagger | A video depicting a minor with their face on another’s body, with audio implying the minor is homosexual. | Video violates harassment and cyberbullying policies prohibiting content intended to shame, deceive, or insult a minor; the channel was removed. |
| Child abuse | Priority Flagger | A video which solicited sexual imagery from minors at school. | Video violates child safety policies prohibiting content that includes sexualisation of minors; the channel was removed. |
| Hateful or abusive | User | A video claiming that the March 2019 Christchurch, New Zealand mosque shootings were fake. | Video violates the hate speech policy prohibiting content that denies that well-documented violent events took place; the video was removed. |

YouTube Community Guidelines enforcement

Viewers and Creators around the world use YouTube to express their ideas and opinions. YouTube’s approach to responsibility involves four Rs: Remove violative content, Raise authoritative voices, Reduce recommendations of borderline content, and Reward trusted creators.

Learn more at How YouTube Works