Can all online harms be tackled using the same regulatory approach?
By Kate Regan and Simona Milio
Kate Regan
Researcher, Public Policy
Simona Milio
Director, Public Policy
Nov 20, 2019
8 MIN. READ
The UK has sparked debate with its proposal for broad-ranging internet regulation. Can current approaches to tackling online child sexual abuse material serve as a model for addressing lawful harms? 

When the man seen as the conscience of Silicon Valley, Tristan Harris, describes the internet as a 'digital Frankenstein' that only the law can tame—as he did in a Sunday Times interview recently—it suggests the time is ripe to reconsider regulation. 

The UK government clearly thinks so. Its proposal for a robust new regulatory framework is adding to the sense that we've reached a pivotal moment in the search for better ways to protect everyone—and particularly children—from harm when they go online. 

"The rise in internet users—and the scale of their exposure to online harms—requires us to take stock, now."

In the UK, 90% of adults use the internet. This increases to 99% for 12-15-year-olds, who spend a weekly average of 20 hours online. Even children as young as three and four are now online for an average of eight hours a week.  

According to the UK's communications regulator Ofcom, 45% of adults have experienced some form of online harm. When it comes to children, one in 10 youngsters and one in five teens say they've encountered something worrying or nasty online. Almost 80% of 12-15-year-olds have had at least one potentially harmful experience in the last year. 

Internet companies have been firmly in the firing line in recent times as the source of these harmful experiences. Newspaper headlines and government statements routinely link the internet to tragedies such as teen suicides and terrorist attacks.

It's not surprising, then, that many governments are passing the baton of regulatory responsibility to the tech firms. By making them responsible for the removal of illegal content, the UK's proposal is certainly not breaking new ground. What is novel, and potentially problematic, is including in its scope 'harms with a less clear definition' that are not necessarily illegal. 

Online obligations

The scope of existing European online harms legislation is limited to content that contravenes criminal law. Under Article 14 of the EU's E-Commerce Directive, internet companies are legally obliged to take down illegal content once they are made aware of it. The obligation is reactive: internet service providers must remove illegal content or activity that they host rather than proactively identify it, and they are shielded from legal liability for illegal material they host but don't know about. Article 15, meanwhile, states that they must not be compelled to actively monitor content on their platforms.

By casting a wider regulatory net that scoops up lawful harms (such as cyberbullying and trolling, extremist content and activity, and advocacy of self-harm, amongst others) alongside illegal harms, the UK's proposal raises many questions. How will this work in practice? Can we expect approaches to tackling both illegal and lawful harms to resemble each other? 

Current approaches to tackling online child sexual abuse material (CSAM)—an illegal harm where headway is being made—are worth studying to see what lessons they may contain for other categories of harm that the UK intends to regulate, especially those that are lawful. 

Four factors that facilitate the removal of online CSAM

1. Clear legal definitions

Enforcement starts with a shared understanding of what constitutes the harm. International and European laws offer clear legal definitions of online CSAM that are vital to its identification. 

The main international legal instrument addressing CSAM is the Optional Protocol to the UN Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography. The Council of Europe's Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse and its Convention on Cybercrime provide further definitions of child sexual exploitation and abuse (CSEA) offenses. At EU level, Directive 2011/93/EU on combating the sexual abuse and sexual exploitation of children and child pornography provides minimum standards for assistance to and protection of victims, along with guidelines for the investigation and prosecution of these crimes. 

Beyond these legal instruments, the International Child Sexual Exploitation (ICSE) database—an intelligence and investigative tool managed by Interpol and used by investigators worldwide—has established a baseline categorization to help classify and isolate the very worst CSAM. This baseline definition is intended to determine what is illegal in more than 50 jurisdictions, encouraging transnational cooperation.

Despite differences across countries, the very existence of legal definitions provides a starting point from which technology companies and law enforcement can begin to tackle CSAM. For harms such as cyberbullying or disinformation, an absence of legal definitions—or patchworks of ill-fitting regulation—can mean that social media companies are solely responsible for deciding what constitutes that harm and what does not. 

2. Tailored technology

Once a shared definition exists, technology can step in to offer support. A fully automated technology known as PhotoDNA can identify known illegal content without human assessment. Leading companies such as Google, Facebook, Twitter, and Adobe Systems are currently leveraging PhotoDNA to suppress CSAM imagery at scale—and it’s free for law enforcement to use. 

Developed by Microsoft and Dartmouth College in 2009, PhotoDNA works by creating a ‘signature,’ or digital fingerprint, for content known to be CSAM. Each image is converted to grayscale and divided into a grid. Each square within that grid is assigned a number, and these numbers collectively form the image's ‘hash value,’ or signature. When a hash value matches an entry in a database of hashes of known CSAM, the tool detects and reports the content automatically.
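
To make the hash-and-match idea concrete, here is a minimal sketch in Python of a much simpler 'average hash'. It is not the PhotoDNA algorithm itself, which is proprietary and far more resilient to resizing and other alterations; the grid size, matching threshold, and function names below are illustrative assumptions.

```python
# Illustrative only: PhotoDNA itself is proprietary and far more robust.
# This toy "average hash" follows the broad steps described above:
# grayscale conversion, a fixed grid, one number per cell, and comparison
# against a database of hashes of known material.

from PIL import Image  # assumes the Pillow imaging library is installed

GRID = 8  # an 8x8 grid gives a 64-bit toy hash; real systems use finer grids


def toy_hash(path: str) -> int:
    """Grayscale -> grid -> one bit per cell, packed into a single integer."""
    img = Image.open(path).convert("L").resize((GRID, GRID))
    pixels = list(img.getdata())
    average = sum(pixels) / len(pixels)
    bits = 0
    for i, value in enumerate(pixels):
        if value > average:  # each cell contributes one bit of the signature
            bits |= 1 << i
    return bits


def hamming(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")


def matches_known_database(candidate: int, known_hashes: set[int], threshold: int = 5) -> bool:
    """Flag an image whose hash is close enough to any hash of known material."""
    return any(hamming(candidate, known) <= threshold for known in known_hashes)
```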

While PhotoDNA's hashing technology is indispensable to the identification and removal of known CSAM, artificial intelligence (AI) technology is also being developed to identify material that is likely, but not yet confirmed, to be CSAM. Google has offered its AI-powered Content Safety API, launched last year, for free to non-governmental organizations (NGOs) and industry partners to support human reviewers of online CSAM. By flagging material that is likely to be CSAM, it helps reviewers find and assess such illegal material at scale.
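
As a hypothetical illustration of that supporting role, the short Python sketch below uses classifier scores only to order a human review queue, so the likeliest material is reviewed first rather than removed automatically. It does not show the Content Safety API's actual interface; the function names and scores are assumptions.

```python
# Hypothetical sketch: classifier scores prioritize, but do not replace,
# human review. Scores are assumed to come from an ML classifier such as
# the kind of service described above.

import heapq


def build_review_queue(scored_items: list[tuple[str, float]]) -> list[tuple[float, str]]:
    """Queue items for human review, likeliest-to-be-CSAM first.

    `scored_items` pairs a content ID with a 0-1 likelihood score. Nothing is
    removed automatically: the score only orders the work so that reviewers
    see the highest-risk material soonest.
    """
    # heapq is a min-heap, so negate the score to pop the likeliest item first.
    heap = [(-score, content_id) for content_id, score in scored_items]
    heapq.heapify(heap)
    return heap


def next_for_review(heap: list[tuple[float, str]]) -> str:
    """Hand the highest-priority item to a human reviewer."""
    return heapq.heappop(heap)[1]


# Example: three pieces of content with hypothetical classifier scores.
queue = build_review_queue([("post-1", 0.12), ("img-7", 0.93), ("vid-3", 0.51)])
assert next_for_review(queue) == "img-7"  # the highest-risk item surfaces first
```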

Harms that tend to be more word-based than image-dependent—such as hate speech, extremist propaganda, and trolling—are less straightforward to identify using technology. Determining whether content constitutes satire, false news, extreme but legitimate political views, or incitement to hate requires nuanced assessment of its context. Given these nuances, it’s dangerous to rely too much on overzealous filtering technology if we want to limit over-censorship of lawful and legitimate content. Human moderators should likewise err on the side of caution if they’re unsure whether to remove content or not.

3. Known real-world impact

The grave and devastating effect that child sexual abuse has on its victims legitimizes any interventions required to tackle this harm. A vast body of empirical research exists that describes the myriad short- and long-term impacts of child sexual abuse and exploitation—and of the revictimization that occurs every time an image or video is viewed or shared.

For other harms where the real-world impact is less well established, a stronger justification for intervention is required. Though researchers have found associations between exposure to content depicting self-harm and actual self-injury, for example, there is a worry that content offering self-help resources to users may be automatically taken down in efforts to remove harmful content. Complex issues like this reveal the need for a considered and sensitive approach to the regulation of content whose harmful impact is equivocal. 

4. Harmful to business

CSAM is a criminal offense as well as a phenomenon that society finds morally reprehensible. This lack of ambiguity around public tolerance of CSAM makes it commercially disastrous for any legitimate enterprise to be caught facilitating its distribution. Providing a platform for potentially offensive political issues, on the other hand, can be defended by social media companies—and is, depending on its severity, legality, and purpose. 

When ICF conducted a study into how online platforms report being incentivized to tackle harm, reputation was, unsurprisingly, highlighted as a key factor. For platforms that cite championing freedom of speech and association as their unique value, a desire to preserve these values could result in a less aggressive approach to tackling a range of online harms, or even a migration to less restrictive jurisdictions.

Whatever form the UK's online harms regulation eventually takes, it will be beneficial to consider whether those factors that support efforts to tackle online CSAM could work for lawful harms, too. In practice, this means that regulators will need to:
  • Develop concise common definitions and standards to guide social media companies on exactly the type of content they are expected to moderate and remove;
  • Encourage the development of technologies that are appropriate to tackling the harm in question, and deploy them in ways that are proportionate to the aim pursued; 
  • Build an evidence base that sheds light on the real-world impact of the online harms being regulated;
  • Align the online platforms’ commercial interests with concerted action to tackle the harm.
If these four factors cannot be guaranteed, it could be useful to consider what other options exist—offline as well as online—to ensure that UK citizens have the resilience and critical capacity to be discerning and confident internet users.
 

