Google seeks to preserve freedom of expression and access to information. To maintain a vibrant and enjoyable community on Google+, we balanced those principles with efforts to prevent the spread of content that violates our Content Guidelines or the law. Striking this balance is never easy, especially for a global platform operating in societies with different standards for speech.
Mechanisms for submitting complaints about allegedly illegal content according to NetzDG
To enable users to submit complaints under NetzDG, Google+ provided an easily recognizable, directly accessible and permanently available in-product reporting flow. The procedure for logged-in users to submit legal complaints under NetzDG was directly and intuitively integrated into the flagging flow, which was available next to each post in the upper right corner. When clicking through the flagging flow and selecting the NetzDG option ("I believe this content should be restricted under the Network Enforcement Law."), the submitter was directed to a legal webform through which he or she could submit a NetzDG complaint.
Google+ also offered a legal NetzDG webform for both logged-in and logged-out users that was directly available through the Google+ imprint (a contact page available to all users in Germany).
As legal laypersons, average users will likely be overwhelmed when confronted with a complete portfolio of complex offenses, or even deterred from reporting at all. The average user is unlikely to understand and cite the relevant statutes of the StGB when submitting a legal NetzDG complaint. Moreover, some notified content may have violated more than one of the listed offenses. For instance, a video that tries to recruit new members or supporters for a criminal or terrorist organization (§§ 129, 129a StGB) will usually also contain symbols such as a flag, which may be punishable under §§ 86, 86a StGB; it may also fulfil the offense of preparing a serious violent offense endangering the state according to § 89a StGB.
Thus, in order to ease the notification process and help submitters report content they believed might be illegal under NetzDG, we created seven content categories in our NetzDG reporting flow that grouped the 21 relevant criminal offenses in a generally understandable way. This approach also helped to make offenses that are very abstract and have a very broad scope more tangible, especially for legal laypersons. For example, § 140 StGB presupposes, as predicate offenses, crimes as diverse as treason, murder and other serious crimes such as war crimes, grievous bodily harm, certain crimes endangering the public, and certain serious sexual crimes. Our experience with content notification processes demonstrated that users appreciate such a practical, low-threshold approach to submitting effective complaints about illegal content.
These categories, and the corresponding criminal offenses we expected to be essentially covered and reported by selecting among them, are listed below (a schematic sketch of the mapping follows the list):
Hate speech or political extremism
- § 130 StGB: Incitement to hatred
- § 166 StGB: Defamation of religions, religious and ideological associations
Terrorist or unconstitutional content
- § 86 StGB: Dissemination of propaganda material of unconstitutional organizations
- § 86a StGB: Using symbols of unconstitutional organizations
- § 89a StGB: Preparation of a serious violent offence endangering the state
- § 91 StGB: Encouraging the commission of a serious violent offence endangering the state
- § 100a StGB: Treasonous forgery
- § 129 StGB: Forming criminal organizations
- § 129a StGB: Forming terrorist organizations
- § 129b StGB: Criminal and terrorist organizations abroad; extended confiscation and deprivation
- § 140 StGB in connection with § 138 I StGB: Rewarding and approving of certain offenses listed in § 138 I StGB
- § 269 StGB: Forgery of data intended to provide proof
Violence
- § 131 StGB: Dissemination of depictions of violence
Harmful or dangerous acts
- § 111 StGB: Public incitement to crime
- § 126 StGB: Breach of the public peace by threatening to commit offenses
- § 140 StGB in connection with § 126 I StGB: Rewarding and approving of offenses listed in § 126 I StGB
- § 241 StGB: Threatening the commission of a felony
Defamation or insult
- § 185 StGB: Insult
- § 186 StGB: Defamation
- § 187 StGB: Intentional defamation
Privacy
- § 201a StGB: Violation of intimate privacy by taking photographs
Sexual content
- § 184b StGB: Distribution, acquisition and possession of child pornography in connection with § 184d StGB: Distribution of pornographic performances by broadcasting, media services or telecommunications services
- § 140 StGB in connection with §§ 176 to 178 StGB: Rewarding and approving of certain offenses listed in §§ 176 to 178 StGB
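Purely schematically, this category structure can be thought of as a simple lookup table from each user-facing category to the StGB provisions it was meant to cover. The sketch below is an illustration of that idea only, not Google's implementation; all names in it are hypothetical.

```python
# Schematic model of the seven NetzDG reporting categories and the StGB
# provisions each one was meant to cover, as listed above. Names and
# structure are hypothetical, not Google's implementation.
NETZDG_CATEGORIES = {
    "Hate speech or political extremism": ["§ 130", "§ 166"],
    "Terrorist or unconstitutional content": [
        "§ 86", "§ 86a", "§ 89a", "§ 91", "§ 100a", "§ 129", "§ 129a",
        "§ 129b", "§ 140 i.c.w. § 138 I", "§ 269",
    ],
    "Violence": ["§ 131"],
    "Harmful or dangerous acts": [
        "§ 111", "§ 126", "§ 140 i.c.w. § 126 I", "§ 241",
    ],
    "Defamation or insult": ["§ 185", "§ 186", "§ 187"],
    "Privacy": ["§ 201a"],
    "Sexual content": ["§ 184b i.c.w. § 184d", "§ 140 i.c.w. §§ 176-178"],
}

def statutes_for(category: str) -> list[str]:
    """Return the StGB provisions a reporting category was meant to cover."""
    return NETZDG_CATEGORIES.get(category, [])
```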
Based on our long-term global experience with flagging of content, we were convinced that the legal aim of NetzDG as a whole was best supported and achieved by these categories.
Measures to inform the submitter and the uploader according to NetzDG
When we received a complaint under NetzDG through the dedicated NetzDG reporting channels described above, the submitter received an email with a reference number confirming that we had received the complaint and would review it. Once we had reviewed the complaint and the allegedly illegal content, we sent the submitter an email informing him or her of our decision to remove the content or to take no action. We also notified uploaders when they had violated our policies or the law and provided more information on the removal so that Google+ uploaders were educated about our terms of service.
Methods of reporting
Technology. Our technologies ran constantly to identify Content Guideline violations on Google+, such as spam or sexual content. New rules were added to our algorithms on a weekly and monthly basis. We also used hashing technologies to prevent re-uploads of photos that had been removed from Google+ for certain policy violations, such as child sexual abuse imagery: we used fingerprinting and matching to scan, identify, and block uploaded photos containing such imagery.
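As a rough illustration of how hash-based re-upload blocking works in general, consider the minimal sketch below. It is not Google's system: production fingerprinting relies on robust perceptual hashes that survive resizing and re-encoding, whereas this sketch uses an exact cryptographic hash for simplicity, and all names in it are hypothetical.

```python
import hashlib

# Minimal sketch of hash-based re-upload blocking. Real systems use robust
# perceptual fingerprints rather than the exact SHA-256 match shown here.
blocked_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of the raw image content."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_removed_image(image_bytes: bytes) -> None:
    """Record the fingerprint of an image removed for a policy violation."""
    blocked_hashes.add(fingerprint(image_bytes))

def is_reupload(image_bytes: bytes) -> bool:
    """Check a new upload against fingerprints of previously removed images."""
    return fingerprint(image_bytes) in blocked_hashes
```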
Human flagging: users and top contributors. We had a flagging system for signed-in users to alert us to content that violated the Google+ Global Content Guidelines. This was a voluntary self-regulatory system that existed outside of any legal obligation. Anyone who was signed in to his or her Google account and found a piece of content that potentially violated our Global Content Guidelines was able to flag it by accessing the flagging option of Google+, represented by three dots in the top right corner, then clicking "report abuse" and selecting the category of the alleged content violation. When flagging, users reported which policy they believed the content violated. Policy reporting categories and removal reasons included: sexually explicit content; violent or dangerous content; hateful, harassing or bullying content; and spam. Community flags were assessed purely on the basis of our Content Guidelines.
The Top Contributor program was developed so that flags from a pre-defined subset of Google+ users could be prioritized by the Google+ content review team. These Top Contributors escalated trends or edge cases based on the in-depth product knowledge gained from the large amount of time they spent answering questions on our forums. Top Contributors flagged Content Guideline violations in-product, and these flags were then escalated to the content review team. Flags from Top Contributors were reviewed only under our Content Guidelines. You can learn more about the Top Contributor program here.
Legal complaints. We had developed a dedicated process so that signed-in users could inform us directly and easily if they believed that content posted on Google+ violated one of the statutes falling under NetzDG. Allegedly illegal content could be reported by accessing the flagging option of Google+ via the three dots in the top right corner of the posted content and selecting "I believe this content should be restricted under the Network Enforcement Law." Signed-in and signed-out users could also file a NetzDG complaint through the NetzDG webform accessible via the Google+ imprint. The submitter received a reply confirming that we had received the complaint. These reporting channels enabled users to identify the objectionable item and provide a reason for the legal complaint. This information was necessary for us to conduct a proper legal review and take appropriate action. If the rationale was unclear or there was insufficient justification for a local legal removal, we may have asked the submitter to supply additional information. If the reported content infringed the Google+ Content Guidelines, we removed it globally. If the content did not violate these Guidelines but did violate one or more of the criminal statutes NetzDG refers to, we blocked the content locally. The submitter then received an email notice from Google+ with our decision and the reason for it.
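The order of review described above (Content Guidelines first, then the criminal statutes NetzDG refers to) can be summarized as a small decision sketch. This is a hypothetical illustration of the described logic, not Google's code.

```python
from enum import Enum

class Action(Enum):
    REMOVE_GLOBALLY = "remove globally"   # violates Google+ Content Guidelines
    BLOCK_LOCALLY = "block in Germany"    # violates a NetzDG statute only
    NO_ACTION = "no action"

def review(violates_guidelines: bool, violates_netzdg_statute: bool) -> Action:
    """Hypothetical sketch: guidelines are checked before the StGB statutes."""
    if violates_guidelines:
        return Action.REMOVE_GLOBALLY
    if violates_netzdg_statute:
        return Action.BLOCK_LOCALLY
    return Action.NO_ACTION
```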
For many years, we have provided other dedicated legal webforms for users to submit legal complaints, and we have blocked content identified as illegal in the relevant jurisdiction. The submitter has always received feedback on the legal complaint.
Process for evaluation
Human flagging by the Google+ community. When we received a flag, our review teams assessed the content under our global Google+ Content Guidelines. The teams were able to see the surrounding context during their review of reported content, for example the headline accompanying a photo in a post or the description of a Google+ community. These contextual clues were often important factors in evaluating the intent of the upload. For example, content posted in a political community on current affairs would likely have been allowed under our global policies; we may have had to discern this context from the community description and other uploaded content. However, the same content uploaded to glorify or encourage hateful views could have violated our Content Guidelines and resulted in a removal.
We had developed Content Guidelines that set the rules of the road on the kind of content we allowed, many of which overlap with NetzDG statutes. These included guidelines prohibiting: hate speech, harassment, bullying, and threats; personal and confidential information; child exploitation; sexually explicit material; violence; and terrorist content. You can read the policies for each of these areas in detail here.
Our review teams were able to take one of several actions: remove content globally if it violated our Content Guidelines; mark it as "not family safe" if it did not violate our guidelines but might not be appropriate for minors; or leave the content live if it was deemed not in violation of our guidelines. In cases of repeat abuse or of more egregious violations, we may have penalized the user by disabling certain features or by terminating the account. We may also have terminated an account at the first offense for egregious violations such as terrorist content.
General legal reporting. When we received a legal complaint, our review teams performed a review based on the information provided in the complaint and the referenced content. In addition, reviewers saw the surrounding context of the reported content, as described above. If important information was missing from a complaint, for example the identity of a person affected by allegedly defamatory content, the team may have contacted the submitter to ask for additional information. Once this was received, the team performed a legal assessment. If we identified the content as illegal, for example because of a claimed copyright or personality rights infringement, we blocked the content locally.
NetzDG complaints. When we received a NetzDG complaint, our specialized NetzDG review team (see section "Review Teams"), which saw the surrounding context of the reported content, first assessed the content against our global Content Guidelines and removed it globally in case of a violation. If the content did not violate our guidelines but did violate one or more of the 21 statutes of the StGB covered by NetzDG (§ 1 III NetzDG), we blocked the content locally.
The assessment of complaints is often not easy. Some of the criminal offenses are difficult to pin down even for lawyers, e.g. forgery of data intended to provide proof (§ 269 StGB). The whole category of defamation and insults, to give another example, is an area where extensive case law has been established over the last decades, in particular since the German Constitution came into effect. Thus, when it comes to defamation and insults, only a minority of cases are obviously illegal. Courts sometimes deliberate the legality of a piece of content for years and still come to different conclusions. For example, the Federal Constitutional Court has repeatedly reversed judgments of the Federal Supreme Court, showing that complex balancing tests have to be performed and that legality always depends on the circumstances of the individual case. Unlike a court, a social network does not always have all the necessary information, and there is no main proceeding with evidentiary rules. In such cases the legality of content, measured against the specific elements of the offenses, is very difficult to judge and should typically be decided by the competent courts.
These considerations are also borne out in practice: many NetzDG complaints in the area of defamation and insults were not submitted by the affected person but rather by third parties who assumed that the affected person might feel defamed. Whether that was indeed the case, or whether the affected person actually filed a criminal complaint with the respective law enforcement authorities (the prosecution of these offenses requires a first-party complaint, "Antragsdelikt"), was not known to the social network, because we were not in a position to verify the identity of the submitter.
NetzDG requests were reviewed by our NetzDG team in two shifts, seven days a week, 365 days a year, until the deprecation of Google+, to allow for a global removal or local block of content, as applicable, within the time limits of NetzDG. If a request was obviously unfounded, the requester was immediately notified in accordance with the legal requirements (see section "Measures to inform the submitter and the uploader according to NetzDG"). If the content did not obviously violate either the global Google+ Content Guidelines or the relevant criminal statutes, if the case was otherwise complex, or if it did not obviously relate to Germany, the responsible NetzDG content reviewer escalated the request to the next level for prompt review, with the appropriate action then taken by senior content reviewers. Complex requests were passed on to the Google legal team who, if in doubt, further escalated difficult and borderline cases to members of the legal department of Google Germany GmbH, who in turn had the option to escalate particularly difficult cases to an external law firm specializing in criminal law. This process usually took up to 7 days.
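Schematically, this escalation path can be pictured as a ladder of review tiers. The sketch below is purely illustrative; the tier names are taken from the description above, not from any internal Google system.

```python
# Hypothetical sketch of the escalation ladder described above.
ESCALATION_TIERS = [
    "NetzDG content reviewer",
    "Senior content reviewer",
    "Google legal team",
    "Legal department, Google Germany GmbH",
    "External law firm specializing in criminal law",
]

def escalate(current_tier: int) -> int:
    """Move a complex or doubtful case one tier up, if a higher tier exists."""
    return min(current_tier + 1, len(ESCALATION_TIERS) - 1)
```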
To ensure that the NetzDG team was operating as intended and was applying the global Google+ Content Guidelines and the criminal offenses under NetzDG correctly and consistently, we implemented a rigorous quality assessment process. In the reporting period we audited approximately 70% of the reviewed content; the quality assessment volume may have varied from week to week depending on incoming request volumes. During this process the quality review team evaluated the decisions taken by each content reviewer, provided individualized feedback and performed an overall analysis of the results of the quality review. The selected quality sample was the basis for a weekly quality data overview. The quality reviewers were a separate team within the NetzDG team and consisted of senior team members who had previously worked in content review and had substantial experience with the criminal statutes referred to in NetzDG and with the global Google+ Content Guidelines. During weekly meetings between the Google legal team and the NetzDG team, we not only discussed the most recent quality assessment results but also calibrated on particularly interesting, difficult and complex cases. Furthermore, any notable trends, current "hot topics" and case law developments were raised and fully discussed in order to ensure a consistent approach across the NetzDG team. When appropriate, we refined the removal policies to adapt to, for example, updates to our Content Guidelines and case law developments. In such instances, new guidance and, where appropriate, training materials were delivered to all members of the NetzDG team.
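As a rough illustration of how such an audit sample might be drawn, consider the hypothetical sketch below. Only the approximate 70% audit rate comes from this report; the sampling method and all names are assumptions for illustration.

```python
import random

def draw_quality_sample(reviewed_case_ids: list[str],
                        audit_rate: float = 0.7) -> list[str]:
    """Draw a random sample of reviewed cases for quality assessment.

    Hypothetical sketch: the ~70% default reflects the audit rate stated in
    this report; the actual selection method is not described there.
    """
    sample_size = round(len(reviewed_case_ids) * audit_rate)
    return random.sample(reviewed_case_ids, sample_size)
```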