Can all online harms be tackled using the same regulatory approach?
It's not surprising then that many governments are passing the baton of regulatory responsibility to the tech firms. By making them responsible for the removal of illegal content, the UK's proposal is certainly not breaking new ground. What is novel—and potentially problematic—is including in its scope 'harms with a less clear definition' that are not necessarily illegal.
The scope of existing European online harms legislation is limited to content that contravenes criminal law. Under Article 14 of the European Commission’s E-Commerce Directive, internet companies are legally obliged to take down illegal content once they are made aware of it: the article requires internet service providers to react to and remove illegal content or activity that they host, rather than to proactively identify it. Article 15, meanwhile, states that they must not be compelled to actively monitor content on their platforms. This protects them from legal liability for any illegal content or activity that they host but don't know about.
By casting a wider regulatory net that scoops up lawful harms (such as cyberbullying and trolling, extremist content and activity and advocacy of self-harm, amongst others) alongside illegal harms, the UK's proposal raises many questions. How will this work in practice? Can we expect approaches to tackling both illegal and lawful harms to resemble each other?
Current approaches to tackling online child sexual abuse material (CSAM)—an illegal harm where headway is being made—are worth studying to see what lessons they may contain for other categories of harm that the UK intends to regulate, especially those that are lawful.
Four factors that facilitate the removal of online CSAM
1. Clear legal definitions
Enforcement starts with a shared understanding of what constitutes the harm. International and European laws offer clear legal definitions of online CSAM that are vital to its identification.
The main international legal instrument addressing CSAM is the Optional Protocol to the UN Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography. The Council of Europe’s Convention on the Protection of Children against Sexual Exploitation and Sexual Abuse and the Council of Europe’s Convention on Cybercrime provide further definitions of CSEA offenses. At the EU level, Directive 2011/93/EU on combating the sexual abuse and sexual exploitation of children and child pornography provides minimum standards for assistance to and protection of victims, as well as guidelines for the investigation and prosecution of crimes.
Beyond these legal instruments, the International Child Sexual Exploitation (ICSE) database—an intelligence and investigative tool managed by Interpol and used by investigators worldwide—has established a baseline categorization to help classify and isolate the very worst CSAM. This baseline categorization is intended to capture what is illegal across more than 50 jurisdictions, encouraging transnational cooperation.
Despite differences across countries, the very existence of legal definitions provides a starting point from which technology companies and law enforcement can begin to tackle CSAM. For harms such as cyberbullying or disinformation, an absence of legal definitions—or patchworks of ill-fitting regulation—can mean that social media companies are solely responsible for deciding what constitutes that harm and what does not.
2. Tailored technology
Once a shared definition exists, technology can step in to offer support. A fully automated technology known as PhotoDNA can identify known illegal content without human assessment. Leading companies such as Google, Facebook, Twitter, and Adobe Systems are currently leveraging PhotoDNA to suppress CSAM imagery at scale—and it’s free for law enforcement to use.
Developed by Microsoft and Dartmouth College in 2009, PhotoDNA works by creating a ‘signature,’ or digital fingerprint, for content known to be CSAM. Each image is converted to grayscale and divided into a grid. Each tiny square within that grid is assigned a numerical value, and these numbers collectively form the ‘hash value’ of the image—its signature. By matching this hash value against a database of hashes of known CSAM, the tool can detect and report the content automatically.
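PhotoDNA's actual algorithm and parameters are proprietary, so the following Python sketch is purely illustrative: a toy grid-based perceptual hash that averages grayscale intensities per cell. The grid size, the distance measure, and all values here are invented for illustration; the real system uses more robust per-cell features and a carefully tuned matching threshold.

```python
from statistics import mean

GRID = 4  # split the image into GRID x GRID cells (toy value; real parameters are not public)

def toy_hash(pixels):
    """Toy perceptual hash: one average intensity per grid cell.

    `pixels` is a square 2-D list of grayscale values (0-255) whose
    side length is divisible by GRID. The real PhotoDNA computes more
    sophisticated per-cell features; this sketch only shows the idea
    of a grid-based signature.
    """
    step = len(pixels) // GRID
    return [
        int(mean(pixels[r][c]
                 for r in range(gr * step, (gr + 1) * step)
                 for c in range(gc * step, (gc + 1) * step)))
        for gr in range(GRID)
        for gc in range(GRID)
    ]

def distance(h1, h2):
    """Sum of absolute per-cell differences; small values mean 'near-match'."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Unlike a cryptographic hash, this signature barely changes under a
# small perturbation such as a slight brightness shift, which is what
# lets known imagery be recognized even after minor edits.
img = [[(r * 8 + c) % 256 for c in range(16)] for r in range(16)]
brightened = [[min(255, v + 3) for v in row] for row in img]
assert distance(toy_hash(img), toy_hash(brightened)) <= 3 * GRID * GRID
```

A production matcher would compare an uploaded image's hash against a database of hashes of known CSAM and flag anything whose distance falls below a threshold.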
While PhotoDNA's hashing technology is indispensable to the identification and removal of known CSAM, artificial intelligence (AI) technology is also being developed to identify material that is likely, but not yet confirmed, to be CSAM. Google has offered its AI-powered Content Safety API, launched last year, for free to non-governmental organizations (NGOs) and industry partners to support human reviewers of online CSAM. By prioritizing the material most likely to be CSAM for review, it helps identify such illegal material at scale.
Harms that tend to be more word-based than image-dependent—such as hate speech, extremist propaganda, and trolling—are less straightforward to identify using technology. Determining whether content constitutes satire, false news, extreme but legitimate political views, or incitement to hate requires nuanced assessment of its context. Given these nuances, it’s dangerous to rely too much on overzealous filtering technology if we want to limit over-censorship of lawful and legitimate content. Human moderators should likewise err on the side of caution if they’re unsure whether to remove content or not.
3. Known real-world impact
The grave and devastating effect that child sexual abuse has on its victims legitimizes any interventions required to tackle this harm. A vast body of empirical research exists that describes the myriad short- and long-term impacts of child sexual abuse and exploitation—and of the revictimization that occurs every time an image or video is viewed or shared.
For other harms whose real-world impact is less well established, a stronger justification for intervention is required. Though researchers have found associations between exposure to content displaying self-harm and actual self-injury, for example, there is a risk that content promoting self-help and support for online users may be automatically taken down in efforts to remove harmful content. Complex issues like this reveal the need for a considered and sensitive approach to the regulation of content whose harmful impact is equivocal.
4. Harmful to business
CSAM is a criminal offense as well as a phenomenon that society finds morally reprehensible. This lack of ambiguity around public tolerance of CSAM makes it commercially disastrous for any legitimate enterprise to be caught facilitating its distribution. Providing a platform for potentially offensive political issues, on the other hand, can be defended by social media companies—and is, depending on its severity, legality, and purpose.
When ICF conducted a study looking into how online platforms report being incentivized to tackle harm, reputation was—unsurprisingly—highlighted as a key factor. For platforms that cite the championing of freedom of speech and association as their unique value, a desire to preserve these values could result in a less aggressive approach to tackling a range of online harms—or even a migration to less restrictive jurisdictions.
Taken together, these four factors suggest what any effort to regulate lawful harms will need to do:

- Develop concise common definitions and standards to guide social media companies on exactly the type of content they are expected to moderate and remove;
- Encourage the development of technologies that are appropriate to tackling the harm in question, and deploy them in ways that are proportionate to the aim pursued;
- Build an evidence base that sheds light on the real-world impact of the online harms being regulated;
- Align the online platforms’ commercial interests with concerted action to tackle the harm.