Anastasia Tolstukhina

PhD in Political Science, Program Manager and Website Editor at the Russian International Affairs Council

One of the most recent trends to appear in internet governance is the tightening of control over online content. And it was China and Russia that set the wheels for this in motion. The trend has extended across the globe – just look at the impressive list of states that supported the Christchurch Call to Action to eradicate terrorist and violent extremist online content. France, the United Kingdom, India, Japan, Indonesia and many other states endorsed the call, questioning the right to spread information online without any restrictions.

It is no secret that terrorists today strive to use the benefits of the nascent digital age for nefarious purposes, namely, to spread dangerous content, recruit new foot soldiers, finance terrorist groups and broadcast terrorist attacks using various internet resources. This is why many governments, fearing the radicalization of their population, demand that global internet platforms step up measures to counter extremist and terrorist content. For example, in May 2017, the Parliament of the United Kingdom criticized Twitter and Facebook for their inability to remove extremist content. At the 2018 G7 Summit in Toronto, security ministers demanded that tech companies step up the fight against dangerous content.

The Christchurch Call to Action

The Christchurch Call to Action to eliminate terrorist and violent extremist content online was launched in May 2019 by the Government of New Zealand and marked the culmination of governmental demands for radical measures in this area.

Speaking to CNN, Prime Minister of New Zealand Jacinda Ardern said, “This call to action is not just about regulation, but instead about bringing IT companies to the table saying you have a role, too.”

The Call came after the tragic events of March 15, 2019, when a terrorist used Facebook Live to broadcast a mass shooting at mosques in Christchurch for 17 minutes. The video remained accessible on Facebook itself for 29 minutes, and for several hours on YouTube, Instagram and Twitter. The delayed reaction of the global digital platforms meant that millions of users throughout the world watched the footage.

For New Zealand and for many other states, this tragedy signalled the need to take drastic measures. New Zealand and France spearheaded a summit held in Paris on May 15, 2019, that was attended by the leaders of 17 states, representatives of the European Commission and eight tech companies (Amazon, Dailymotion, Facebook, Google, Microsoft, etc.) [1]. The Christchurch Call is essentially an action plan calling upon its signatories to prevent the internet from being used as a tool by terrorists.

As of today, 48 states, UNESCO, the Council of Europe, the European Commission and eight tech companies have joined the call to action.

Curiously, three important actors remained uninvolved with the Call to Action: Russia, China and the United States. Beijing and Moscow did not officially comment on their refusal to join. Washington cited its respect for freedom of speech while generally supporting the overall goals of the document. The United States counters dangerous content at the state level, but it employs different methods. Instead of blocking information, the United States, according to the White House, promotes credible, alternative narratives to “defeat” terrorist messaging.

A Pure PPP

The Christchurch Call is a pure public-private partnership (PPP). The document envisions a clear delimitation of duties between government bodies and businesses.

For instance, governments must:

  • counter the drivers of terrorism and violent extremism;
  • increase media literacy;
  • ensure the effective enforcement of applicable laws;
  • encourage media outlets to apply ethical standards when depicting terrorist events online.

Technical solutions, including content control (content filtering and blocking), are left to tech companies that, among other things, are mandated to:

  • develop technical solutions to prevent the upload of violent terrorist and extremist content;
  • provide greater transparency in detecting and removing content;
  • implement regular reporting;
  • ensure that algorithms developed and used by the companies do not lead users to extremist content.

The Call also lists several joint commitments for government and online service providers, including:

  • accelerating research into and developing technical solutions;
  • ensuring appropriate cooperation with and among law enforcement agencies for the purposes of investigating and prosecuting illegal online activity;
  • developing processes allowing governments and online service providers to respond rapidly, effectively and in a coordinated manner to the dissemination of terrorist or violent extremist content.

GIFCT to the Rescue

Global tech companies began to respond to the governmental calls to flag dangerous online content long before the tragedy in Christchurch. For instance, in June 2017, Facebook, Microsoft, Twitter and YouTube formed the Global Internet Forum to Counter Terrorism (GIFCT), which cooperates closely with the United Nations. The Forum’s participants pledged:

  1. to develop and share technology to responsibly address terrorist content across the industry;
  2. to fund research and share good practices in order to develop viable methods of countering dangerous content.

The European Commission supported the Forum, allocating €10 million in funding to it. Additionally, a $5 million innovation fund for countering hate and extremism was launched jointly with Google.org. This fund financed non-profits combating hate both online and offline.

GIFCT is based on a multi-stakeholder governance model and actively cooperates with small internet companies, civil society, scientists, and governmental and non-governmental organizations. Through the UN Office of Counter-Terrorism and the Tech Against Terrorism programme spearheaded by the United Nations, the Forum has worked with over a hundred tech companies throughout the world. Conferences for stakeholders have been held in Europe, the Asia Pacific and Silicon Valley. Additionally, GIFCT members attend G7 ministerial meetings and actively interact with Europol.

At the same time, the Forum is not open to everyone. In November 2019, China’s rapidly developing internet platform TikTok was denied membership because it did not meet the established criteria, including compliance with certain human rights requirements and the publication of transparency reports. The Forum’s members are concerned that TikTok may be collecting data and engaging in censorship.

Methods of Countering Dangerous Content

The principal method of countering dangerous content is the constant updating of the shared industry “hash” database. “Hashes” are unique digital “fingerprints” of terrorist and extremist content (photos and videos). This database allows any Forum member to automatically detect and remove illegal content from its digital platforms before it goes public. In the two years since its launch, GIFCT has accumulated over 200,000 unique hashes. In addition to this database, since January 2019 Forum members have been able to securely share URLs linked to terrorist and extremist content with their sectoral partners.
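For illustration, the sketch below (Python) shows the basic workflow a member platform might follow when checking a new upload against a shared hash list. It is a minimal, assumption-laden example, not GIFCT's actual implementation: the consortium relies on perceptual hashing technologies (such as Microsoft's PhotoDNA) that match content even after re-encoding or cropping, whereas the cryptographic hash used here catches only exact copies. The database contents and function names are hypothetical.

```python
# Minimal sketch of hash-database matching (illustrative only).
# Real deployments use perceptual hashes (e.g. PhotoDNA) that survive
# re-encoding; a cryptographic hash like SHA-256 matches exact copies only.
import hashlib

# Hypothetical in-memory copy of the shared hash database.
SHARED_HASH_DB = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example entry
}

def fingerprint(data: bytes) -> str:
    """Compute a stable fingerprint of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes) -> bool:
    """Return True if the upload matches known terrorist or extremist
    content and should be blocked before it is published."""
    return fingerprint(data) in SHARED_HASH_DB

# Usage: a new upload is fingerprinted and blocked if it matches.
upload = b"...raw bytes of the uploaded video or image..."
if check_upload(upload):
    print("Upload blocked before publication and flagged for review.")
else:
    print("No match in the shared hash database; upload proceeds.")
```

In practice the matching step also feeds the other direction: when one member identifies new terrorist content, it adds the hash to the shared database so that every other member can block re-uploads automatically.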

As of today, 13 companies and services have access to the database: Microsoft, Facebook, Twitter, YouTube, Ask.fm, Cloudinary, Instagram, JustPaste.it, LinkedIn, Verizon Media, Reddit, Snap and Yellow. As we can see, access has mostly been granted to companies based in the United States.

To support the Christchurch Call, Amazon, Facebook, Google, Twitter and Microsoft released a joint statement on expanding the GIFCT’s activities, listing nine steps for countering terrorist and extremist content online. Nearly half of these steps require the involvement of government agencies and other stakeholders. These actions include, among other things:

  • updating terms of use for various digital platforms and services;
  • creating better feedback methods for reporting illegal content;
  • enhancing technology through additional investment;
  • cooperating with sectoral, governmental and non-governmental bodies with a view to developing a protocol for rapid response to illegal actions;
  • publishing regular transparency reports on flagging and removing terrorist content.

More New Initiatives

The Christchurch Call also generated new institutions, instruments and forms of business cooperation with governmental agencies and civil society bodies.

In September 2019, GIFCT was transformed into an independent organization. The Forum’s participants announced that they would be expanding cooperation between companies, governmental agencies and experts.

To support the “call to action,” the companies agreed to take additional steps:

  • set up formal channels of communication so they can share intelligence and content with non-GIFCT companies and other stakeholders;
  • introduce joint content incident protocols to enable and empower companies to more quickly and effectively respond to illegal online activities (such a protocol describes steps companies could take for a rapid response to an attack).

The Christchurch Call Advisory Network will be set up to ensure that the measures adopted to counter dangerous content do not violate human rights. The network will comprise civil society organizations that aim to “integrate a broad range of perspectives and live up to the commitments in the Call around supporting human rights and online freedoms, as well as the rights of victims of terror.”


It is also worth noting here that, in September 2019, Microsoft, the Hewlett Foundation, Mastercard and several other large IT corporations, together with a number of charitable foundations, launched the CyberPeace Institute, which is intended to aid victims of cybercrime.

“Occupational Aptitude” Test

A tragedy in Germany served as the first major occupational aptitude test for the overhauled GIFCT. On October 9, 2019, a gunman opened fire in the vicinity of a synagogue in Halle and livestreamed the attack. The video remained on Twitch for 65 minutes and was seen by 2,200 people. Copies were distributed via Telegram, 4chan and other services (none of which are GIFCT members).

The video of the shooting was not spread via larger online platforms, such as Facebook and YouTube, which GIFCT saw as a positive shift in countering extremist content. This was largely due to the abovementioned Content Incident Protocol (CIP). Actions taken under the protocol include: a) promptly uploading hashes of the attacker’s video, its derivatives, and other related content into the shared GIFCT hash database; and b) promptly notifying Europol and the government of Germany about the incident.

The Forum’s official website notes that the incident revealed vulnerabilities in the mechanisms for countering dangerous content that require additional work. Moreover, the Forum’s members intend to simplify the decision-making process, step up the exchange of information with various stakeholders and ensure that the blocking system is continually improved.

One Goal, Different Approaches

Russia was not involved with the Christchurch Call and the new institutions and mechanisms it generated. The media reported that Russian companies had not been invited to sign the document.

At the same time, representatives of Russian online platforms said that their own rules generally comply with the contents of the Call. The Odnoklassniki social network welcomes the introduction of rules for handling extremist content. Additionally, the network continuously improves its tools for the rapid detection and blocking of prohibited content, relying primarily on neural networks trained to recognize depictions of violence and hide dangerous content from public view. Another social network, VKontakte, also uses neural networks to automatically detect and block extremist content; following requests from users or governmental agencies, dangerous posts are blocked within minutes.
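Conceptually, such automated moderation amounts to scoring each uploaded image (or video frame) with a classifier and hiding posts that exceed a confidence threshold, typically pending human review. The sketch below (Python/PyTorch) is a hypothetical illustration of that workflow, not Odnoklassniki's or VKontakte's actual pipeline; the placeholder model, threshold and function names are assumptions.

```python
# Hypothetical sketch of automated image moderation (illustrative only).
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image

# Placeholder network: in production this would be a model trained on
# "violence / no violence" labels; here a tiny untrained net stands in
# so the sketch runs end to end.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

THRESHOLD = 0.9  # auto-hide only confident detections; the rest go to humans

def should_hide(image: Image.Image) -> bool:
    """Return True if the post should be hidden pending manual review."""
    batch = preprocess(image.convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
    with torch.no_grad():
        prob_violence = torch.sigmoid(model(batch)).item()
    return prob_violence >= THRESHOLD

# Usage: score a freshly uploaded picture (a blank stand-in image here).
upload = Image.new("RGB", (640, 480))
print("hide pending review:", should_hide(upload))
```

The design trade-off is the threshold: a high value keeps automatic blocking limited to confident detections, while borderline cases are routed to human moderators or handled through user and government takedown requests, as described above.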

The Russian government was also not involved with the Christchurch Call, since it had not been invited to join the discussion of the document and endorse it.


We can assume that the Call in its current form, despite its good intentions, would hardly suit the Russian side. We have already mentioned that the Christchurch Call is a pure public-private partnership that assigns significant responsibilities to private companies. Russia, on the other hand, invariably emphasizes the importance of public-private partnerships while maintaining the leading role of the state in handling security issues. Other stakeholders (non-governmental organizations, private companies, etc.) are assigned supporting roles. Western companies, on the contrary, stress the leading role of businesses in this issue. For instance, Tom Burt, Corporate Vice President for Customer Security and Trust at Microsoft, noted in his blog, “The internet is the creation of the private sector, which is primarily responsible for its operation, evolution and security.” He believes that governments should play an important role in observing and enforcing standards of conduct in cyberspace and in preventing harmful attacks by other nations.

Despite these different approaches, there are certain common points where Russian and Western interests overlap:

  1. Tightening control over online information flows.
  2. Involving various stakeholders in the process of resolving the problem.

The danger of illegal content spreading over the internet is a global cross-border threat. Russia does not censor the internet like China does with its Great Firewall. Millions of Russian citizens use Western internet platforms, browsers and messengers, and the dangerous content spread there is our problem too. What matters in this regard is the dialogue between parties, even if Russia (through the government or private companies) was not a signatory to the Christchurch Call to Action and is not a member of the organizations affiliated with it. It is important that we make use of those areas where Russian and Western interests overlap, since we travel different roads to the same goal – cleansing the information space of dangerous content.

Communication channels between Russian and Western stakeholders need to be set up, and agreements need to be reached on the means of interacting and cooperating. Criteria need to be defined for flagging extremist and terrorist content to prevent misidentification. And technical solutions need to be shared.

An open-ended intergovernmental expert committee could serve as a platform for sharing opinions on the problem with a view to drafting an international convention on countering the criminal use of information and communication technologies.

1. A total of 17 states supported the Christchurch Call at the Paris Summit on May 15, 2019 (the United Kingdom, Japan, Australia, Canada, France, Germany, Indonesia, India, Ireland, Italy, Jordan, the Netherlands, New Zealand, Norway, Senegal, Spain and Sweden), as did the European Commission and eight tech companies (Amazon, Dailymotion, Facebook, Google, Microsoft, Qwant, Twitter and YouTube).

