Glossary

Actioning accounts – In addressing TVEC-related issues, a content-sharing service may take action in response to the TVEC-related online activity of a user or an account. This could mean endorsing or rewarding positive user behaviour, such as helpfully flagging or reporting problematic content. Conversely, it could mean taking action to prevent or address negative user behaviour, such as sharing TVEC that violates the guidelines. Examples of the latter type of action include:

Banning – Banning a user prohibits them from logging on to a content-sharing service and/or from creating and using any new accounts.

Disabling/de-activating/suspending – Disabling an account — which could include removing, deleting, de-activating or suspending an account — is effectively closing an account which has violated guidelines. This may be temporary or permanent and may be open to redress mechanisms or subject to a specific period of time. It may or may not affect the accessibility of the account’s past contributions on the content-sharing service, and may or may not be subject to an obligation to preserve data for law enforcement or similar purposes.

Reporting to law enforcement – A user or account may be reported to a law enforcement agency in order to address illegal activity or imminent risks to safety.

Restricting user privileges – An account may remain operable but with specific privileges restricted, muted, suspended or removed. These privileges may include the ability to livestream, comment or post.

Warning – A warning message or notice may be issued to an account that has violated company guidelines.

Actioning content – Once the appropriate moderation outcome is determined, the content either remains on the content-sharing service in its original state or is actioned in some way by the moderator (company staff, technology and/or a designated third party). Action may also be taken on an interim basis while a moderation outcome is pending. Content may be actioned in a number of ways. These include:

Blocking/disabling – Blocking/disabling means restricting or removing access to specific content for a particular user or group of users. Geo-blocking, for example, restricts access to content for users whose IP addresses are registered within a specific physical location. The content may remain available to some users under specific circumstances.
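
The geo-blocking mechanism described above can be illustrated with a minimal sketch. The snippet below is a hypothetical Python example: the IP-to-country mapping and the function names are invented for illustration, and a real content-sharing service would resolve locations with a dedicated geolocation database or service and apply considerably more elaborate policy logic.

```python
# Minimal, illustrative sketch of a geo-blocking check.
# DEMO_IP_COUNTRIES stands in for a real IP-geolocation lookup.

DEMO_IP_COUNTRIES = {
    "203.0.113.7": "NZ",    # documentation-range addresses; mapping is invented
    "198.51.100.4": "DE",
}

def resolve_country(ip_address: str) -> str | None:
    """Return an ISO 3166-1 alpha-2 country code, or None if unknown."""
    return DEMO_IP_COUNTRIES.get(ip_address)

def is_geo_blocked(ip_address: str, blocked_countries: set[str]) -> bool:
    """Withhold the content if the requester resolves to a blocked country."""
    country = resolve_country(ip_address)
    return country is not None and country in blocked_countries

# Content subject to a hypothetical removal requirement in one jurisdiction
# remains available to requesters resolved to any other country.
print(is_geo_blocked("198.51.100.4", {"DE"}))  # True: access restricted
print(is_geo_blocked("203.0.113.7", {"DE"}))   # False: content remains available
```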

De-listing – De-listing is the removal of content, by a content-sharing service or by a user, from recommendation lists or from indexing within the “explore” or “discover” functions that allow users to search content on the content-sharing service.

De-monetising – De-monetising content is restricting its ability to leverage the content-sharing service’s monetisation features. For example, de-monetising could involve removing the possibility for advertisements to appear alongside content that does not comply with relevant guidelines.

Down-ranking – Down-ranking allows content to remain available on the content-sharing service but with reduced visibility. Down-ranking is also known as down-listing, de-prioritising or limiting visibility.

Hiding/quarantining – Content may be hidden behind a notification that users see before the content can be accessed; such notifications are also known as interstitial notices. Content hidden behind an interstitial notice may become accessible to a user if specific conditions are met, such as the user declaring their age or acknowledging that the content may be offensive. Content may also be quarantined, or hidden behind a notification indicating that it is not accessible to users because it is under review or is in violation of a company’s guidelines.

Notification – A moderator may add a notification to user-generated content, to make other users aware that it may be sensitive, disturbing, false, inappropriate for younger users, or otherwise challenging to community expectations, even though it may not violate company guidelines.

Removing – Removing is the process of a content-sharing service taking down content so it is no longer accessible to any users. The permanency of removal is determined by the content-sharing service’s guidelines and redress mechanisms, and the legality of the content.

Appeals and reviews – A process by which one or more users who believe the outcome of a moderation decision is incorrect may seek reconsideration of that decision. Some content-sharing services that provide options for appeal or review may use automated review and/or human review. The review may be conducted internally by the service, in appropriate circumstances by members of the user community, or by an external, independent body, including the judicial authorities in respective countries. If a review results in a decision to reverse, overrule or change the initial moderation outcome, common forms of redress or resolution include restoring content or an account, actioning content (see above) or actioning an account (see above).

Banning See Actioning accounts.

Blocking See Actioning content.

Company guidelines – Company guidelines are also known as community standards, rules, acceptable use policy, terms of service or terms of use. These guidelines are commonly understood to be a set of expectations for what content or activity is or is not allowed on a company’s service or product. These guidelines may also outline the actioning of content or accounts and user notification and redress mechanisms.

Content-sharing services – Content-sharing services are any online services that enable the transfer and dissemination of content, in whatever form, whether one-to-one, one-to-few or one-to-many.

De-activating See Actioning accounts.

De-listing See Actioning content.

De-monetising See Actioning content.

Detection and moderation – Detection and moderation can occur at different stages and can take a number of forms. They may occur nearly simultaneously (for example, through automated systems) or sequentially over a period of time (for example, through human review of content reported by a user). The following reflect some common forms and definitions of detection and moderation.

Detection – Detection is the process of identifying TVEC or TVEC-related online activity on a content-sharing service. Detection may be:

  1. Proactive – Proactive detection occurs when TVEC or TVEC-related online activity is detected as a result of company-led routine detection. Proactive detection can happen through human, tooling or hybrid systems of review established by a content-sharing service. Proactive detection can be:
     1. Proactive at upload – Proactive detection at upload occurs as soon as a user attempts to add TVEC to, or take specific TVEC-related online actions on, a content-sharing service and before the content is shared with or becomes accessible to others. This is primarily done by automated tools (a simplified, illustrative sketch follows this list). Once such content or activity is flagged, various moderation actions can take place. For example, if the content is not obviously or overtly against guidelines, it might trigger a triage to human review.
     1. Proactive after upload – Proactive detection after upload occurs after TVEC has been added to a content-sharing service. Depending on the circumstances, this detection may occur before or after TVEC has been shared with or become accessible to other users. Again, once TVEC is flagged, various moderation actions can take place.
  1. Reactive – Reactive detection occurs when TVEC or TVEC-related online activity is identified through a third-party report made to the content-sharing service. TVEC or TVEC-related online activity may be reported by users (see online community reports below) or by others, such as civil society organisations, governments, law enforcement, trusted notifiers, regulatory bodies and industry bodies. Reports from government institutions or public authorities may take the form of referrals or legal requirements. While there is not always a clear-cut distinction between the two categories, most referrals or legal requirements fall within the parameters contained in the first two items below. Content-sharing services may also have special reporting channels or escalation pathways for specific individuals, entities, types of requests or requirements, TVEC, TVEC-related online activity or situations, such as a real-world terrorist or violent extremist event with direct online implications. The channels or pathways described below may differ or overlap slightly, as they are shaped by how companies design their respective reporting procedures.
     1. Government referrals – Government referrals are requests by a government institution or public authority to a content-sharing service to review TVEC or TVEC-related online activity on the basis that it may violate the company’s community guidelines, terms of service or other relevant guidance documents. The TVEC or TVEC-related online activity may or may not violate local law as well.
     1. Internet Referral Units – Internet Referral Units (IRUs) are specialised public authorities, typically housed within law enforcement bodies, responsible for making referrals to content-sharing services. IRUs operate within the confines of their mandate and flag TVEC or TVEC-related online activity that violates a given country’s terrorism legislation but which is referred to a company for review against the company’s terms of service.
     1. Online community reports – Online community reports or flags are a common mechanism for users to report TVEC or TVEC-related online activity to a content-sharing service.
     1. Real-world terrorist or violent extremist event with direct online implications – A real-world terrorist or violent extremist event with direct online implications is a concurrent online manifestation of a real-world terrorist or violent extremist incident. It involves TVEC produced by a perpetrator or accomplice that appears to depict ideologically driven murder (including attempts), torture or serious physical harm, and that appears to have been designed, produced and disseminated for virality – or has achieved actual virality – being shared online in a manner that presents a threat of unusually high impact (i.e. geographical/cross-platform scale), is likely to cause significant harm to communities, and therefore warrants a rapid, coordinated and decisive response by industry and relevant government agencies. For example, the livestreaming of the Christchurch attack was considered a real-world terrorist or violent extremist event with direct online implications requiring rapid response and action from industry and relevant government agencies.
     1. Trusted notifiers – Some content-sharing services designate trusted notifiers or partners, deemed particularly trustworthy or effective or possessing subject matter expertise in a particular violation or harm type, to notify the service of TVEC or TVEC-related online activity that violates its guidelines. Trusted notifier status may include special privileges, for example reports being prioritised, enhanced reporting functionality and increased engagement with the content-sharing service about moderation decisions. Depending on the content-sharing service, trusted notifiers may include individuals, organisations and/or government institutions.
  1. Manual detection – Manual detection (also known as human detection) occurs when people manually identify user-generated TVEC or TVEC-related online activity based on a content-sharing service’s guidelines and any relevant internal resources and processes, including quality control. Depending on the circumstances, these people may be employed, contracted or appointed for this purpose.
  1. Automated detection – Automated detection occurs when technological tools are used in an automatic capacity, in a repeatable manner and without human triggering, to identify, surface, triage and/or action TVEC or TVEC-related online activity that violates a content-sharing service’s guidelines.
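
To make the proactive-at-upload and automated detection cases concrete, the following is a minimal, hypothetical Python sketch of hash-based detection at upload. The hash list and function names are illustrative assumptions rather than a description of any particular company’s systems; in practice services combine many signals, and often use perceptual rather than purely cryptographic hashes (see Hash below).

```python
# Illustrative sketch of automated, proactive detection at upload via hash matching.
import hashlib

# In practice this would be populated from the service's own records or a
# shared industry hash list of previously confirmed violating images/videos.
KNOWN_TVEC_HASHES: set[str] = set()

def file_hash(data: bytes) -> str:
    """Exact (cryptographic) hash of the uploaded bytes."""
    return hashlib.sha256(data).hexdigest()

def detect_at_upload(data: bytes) -> bool:
    """Return True if the upload matches known violating content and should
    be surfaced for moderation before it becomes accessible to other users."""
    return file_hash(data) in KNOWN_TVEC_HASHES
```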

Moderation – Moderation is the process of reviewing/assessing TVEC or TVEC-related online activity and deciding a course of action based on a content-sharing service’s guidelines. Moderation and human review processes may be triggered by internal investigations, routine checks or an automated triage system. They may also be triggered by an external third party reporting, or otherwise making a company aware of, TVEC or TVEC-related online activity that might violate company guidelines.

  1. Internal moderation – Internal moderation occurs when TVEC or TVEC-related online activity is reviewed/assessed by internal moderation teams or administrators, or by external bodies or moderation services, contracted by or at the direction of a content-sharing service to decide how to apply the company’s guidelines.
  1. User moderation – User moderation, or community-based moderation, occurs when a content-sharing service’s users or community moderate TVEC or TVEC-related online activity directly on the service. This may occur through a removal system or a voting system which allows users to register approval or disapproval.
  1. Automated moderation – Automated moderation occurs when technological tools are used automatically, in a repeatable manner to action identified TVEC or TVEC-related online activity that violates company guidelines.
  1. Manual moderation – Manual moderation (also known as human moderation) occurs when people manually review/assess user-generated TVEC or TVEC-related online activity based on the company guidelines, any relevant internal resources and processes and, in some cases, the subject matter expertise or socio-linguistic understanding of the moderator. Depending on the circumstances, these people may be employed, contracted or appointed for this purpose.
  1. Hybrid moderation system – A hybrid system is a mix of automated and manual detection and moderation. Content-sharing services most commonly use hybrid systems (a simplified, illustrative sketch of such a flow follows this list).
  1. Activity-based moderation – Moderation decisions based on a user’s TVEC-related online activity rather than on the specific pieces of content the user shares. In essence, this means that content shared by users and/or user accounts might be actioned even though a specific piece of content has not strictly violated company policy. Such moderation can rely on methods such as, but not limited to, user typologies, account or access signals and environment profiling.
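
As a purely illustrative aid, the sketch below shows one way a hybrid system might combine automated and manual moderation: clear-cut cases are actioned automatically, while borderline cases and user reports are queued for human review. The thresholds, field names and outcomes are invented assumptions, not a description of any specific content-sharing service’s process.

```python
# Hypothetical sketch of a hybrid moderation flow (automated + manual review).
from dataclasses import dataclass

@dataclass
class ReviewItem:
    content_id: str
    automated_score: float   # stand-in for an automated tool's violation estimate (0.0-1.0)
    reported_by_user: bool   # e.g. an online community report

def hybrid_moderate(item: ReviewItem) -> str:
    """Decide a course of action under an invented set of thresholds."""
    if item.automated_score >= 0.95:
        return "remove"                  # automated moderation of a clear violation
    if item.automated_score >= 0.5 or item.reported_by_user:
        return "queue_for_human_review"  # manual moderation decides the outcome
    return "no_action"

# In this sketch a user report always reaches a human reviewer, even when the
# automated score alone would not trigger review.
print(hybrid_moderate(ReviewItem("abc123", automated_score=0.2, reported_by_user=True)))
```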

Disabling content See Actioning content.

Disabling accounts See Actioning accounts.

Down-ranking See Actioning content.

Government legal requirements See Detection and moderation, Detection, Reactive.

Government referrals See Detection and moderation, Detection, Reactive.

Hash – A hash is a unique identifier, often likened to a signature or a fingerprint, that can be created from a digital image or video.
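
As a simple illustration of the “fingerprint” analogy, the snippet below hashes some example bytes and shows that changing even one byte produces a completely different hash. SHA-256 is used here only as an example algorithm; services that match images and videos often also use perceptual hashes, so that visually similar (not just byte-identical) content can be matched.

```python
# Hashing a piece of digital content: identical bytes give identical hashes,
# while any change to the bytes gives a different hash.
import hashlib

original = b"example image or video bytes"
altered = b"example image or video bytes."  # one byte appended

print(hashlib.sha256(original).hexdigest())                # the content's "fingerprint"
print(hashlib.sha256(original).hexdigest() ==
      hashlib.sha256(altered).hexdigest())                 # False
```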

Hiding See Actioning content.

Internet Referral Unit See Detection and moderation, Detection, Reactive.

Livestream – To livestream is to use a content-sharing service to record and broadcast audio-visual content of an event in real-time. The transmitted content itself is also known as a livestream.

Moderation See Detection and moderation, Moderation.

Notification See Actioning content.

Online community reports See Detection and moderation, Detection, Reactive.

Online or digital tooling – A function, plug-in, or mechanism used to facilitate a particular action or service on a given platform, device or site.

Providing reasons – A content-sharing service may provide a statement of reasons (such as violating or not violating company guidelines) to the user who reported certain content, requested a review, or posted the content, as well as any other user(s) affected and/or the broader community.

Quarantining See Actioning content.

Real-world terrorist or violent extremist event with direct online implications See Detection and moderation, Detection, Reactive.

Removing See Actioning content.

Restoring – The restoration of content or an account and/or the reversal of actions previously taken on it.

Suspending See Actioning accounts.

Terrorist and violent extremist content (TVEC) – Content refers to any type of digital information, such as text, video, audio and pictures, and the scope of the VTRF is limited to terrorist and violent extremist content (TVEC). As noted in the Origin and Aims section of the VTRF, there is no universally accepted definition of terrorism or violent extremism, nor, by extension, of terrorist and violent extremist content, and no definition is delineated or endorsed here. Instead, that section provides general examples of different approaches that have been taken by different online content-sharing services in defining TVEC, and the metrics ask companies to provide transparency over how they understand the term and any significant updates to that understanding during the reporting period.

Tooling See Online or digital tooling.

Trusted notifiers See Detection and moderation, Detection, Reactive.

User-generated content – User-generated content is content created, uploaded or shared by a content-sharing service’s users.