Is internet regulation the best way to protect us from online harms?

By Simona Milio and Kate Regan
Nov 12, 2019

Regulation may protect people from online harms, but it may also activate the law of unintended consequences—and potentially fail to protect the internet’s fundamental features.

As the World Wide Web turns 30, its inventor, Sir Tim Berners-Lee, has chosen to highlight how it has given a voice to those who spread hatred and commit crimes. His concern that we're no longer sure the web really is a force for good chimes with a growing number of governments edging towards stronger regulation of its enabling infrastructure, the internet.

The UK, among other countries, believes the time has come to target internet companies with regulation aimed at preventing the spread of harmful content on their platforms. The worry, however, is that striving to protect internet users from harm could generate unintended consequences that damage the core characteristics of the internet that users enjoy and value.

How do we weigh the potential benefits and drawbacks of regulation in this dynamic environment? 

Addressing a spectrum of online harms 

The UK's proposals for internet regulation, outlined in its recent Online Harms White Paper, bring this tension into sharp focus. Under the proposals, platforms will have a statutory duty of care to keep users safe online, with an independent regulator in charge of making sure they fulfill their legal obligations. Two further aspects of the UK's regulatory approach position it as both pioneering and potentially problematic.

It is the first attempt globally to use regulation to address a comprehensive spectrum of online harms: internet firms will not only be tasked with tackling plainly illegal harms, but will also be held accountable for a range of lawful content that is deemed harmful. And the scope of the proposals, in terms of the types of organizations that will be subject to the regulation, is unprecedented: it covers all those that allow users to share or discover user-generated content or interact with each other online.

In layering subjectivity and ambiguity onto an inherently complex regulatory framework, the UK's proposals have revived a fundamental question: is it actually feasible to apply a traditional regulatory approach to the internet? Andrew Sullivan, President and CEO of the Internet Society, laid out the challenge in a recent Chatham House talk: "It's a mistake to think of the internet as though it's a table, a single thing. The problem is it's a network of networks. So in thinking about regulation we need to see it more in terms of being like traffic." And, nowadays, it is a lot of traffic: 64,000 independently operated networks, compared to just five 25 years ago. 

It’s a difficult challenge for sure, but one we should try to rise to. The growing volume of hate speech, fake news, cyber bullying, child sexual abuse material, and more makes action imperative. But can we learn from the different approaches governments and internet companies have already taken? Are there features common to the most successful initiatives and laws that shine a light on the best way forward?

At one end of the regulatory spectrum are countries like China, Russia, and Vietnam, whose heavy censorship of online content and violations of free speech, user rights, and privacy are well known. More useful for the UK to consider are countries taking steps to tackle a similar range of harms through regulation. How are they holding technology firms to account, what are the consequences of non-compliance, and what unintended consequences has regulating this space produced?

Diminishing different voices and views

Germany's 2018 Network Enforcement Law (NetzDG) legally obliges platforms to remove or block access to content that is manifestly unlawful within 24 hours of a complaint. They must also establish user-friendly reporting channels and, if they receive more than 100 complaints, publish a transparency report. Fines are applied for specific intentional or negligent failures to comply with the law, such as not maintaining an effective complaints management system. 

An early evaluation by the Centre for European Policy Studies (CEPS) looked at the six-month period after the law came into force. It observed that the law had resource implications that were particularly significant for smaller companies, for whom the response deadlines might be unattainable and the fines crippling. This threat to smaller firms could be greater in the UK, where the proposed law would apply, in effect, to the whole of Web 2.0, including organizations operating any kind of user forum or search engine.

Claims of over-censorship have been leveled at Germany's NetzDG law. There is an inevitable risk that platforms will remove anything that might possibly be deemed politically sensitive or harmful in order to protect themselves. The concern is that this leads to the removal of the partisan voices and independent commentary that are an essential part of the internet and, more damagingly, to the increased marginalization of minority and dissident voices.

Becoming arbiters of the truth 

By bundling together illegal and lawful harms (everything from inciting violence to excessive screen time), the UK's proposal would raise the specter of "much heavier censorship than at present," according to Christopher Haley of NESTA. "Many firms would inevitably filter out entirely lawful material as well as having to make near-impossible judgments about individuals' means and motives."

New legislation adopted by France to tackle fake news, especially during election periods, raised similar concerns about social media companies having to become arbiters of truth. Significantly, the Senate's initial rejection of the text of the legislation, Law No. 2018-1202 on combatting the manipulation of information, revealed misgivings that it had been prepared without in-depth evaluation or impact assessment. The law was eventually passed by Parliament at the end of 2018, with fines of €75,000 for violations, but whether it is suited to its purpose remains open to question.

During testimony before British lawmakers last year, Twitter's senior strategist Nick Pickles was unequivocal. "The one strength that Twitter has is its hive of journalists, of citizens, of activists correcting the record, correcting information. I don't think technology companies should be deciding during an election what is true and what is not true. I think that's a very important principle."

Marking the end of an era

Australia's internet regulation has focused on an issue on which there is far greater global consensus: the protection of children from harm. Its Enhancing Online Safety for Children Act 2015 established a mechanism for notifying cyber bullying content so that it can be taken down from all large social media websites, along with penalties for non-compliance. Appointing a Commissioner to enforce the act, which was subsequently extended to cover revenge porn, has been widely seen, and formally evaluated, as a highly effective move.

In the UK, meanwhile, the voluntary approaches that have characterized the regulatory landscape for some time have begun to feel inadequate. When he announced that "the era of self-regulation is over" at the launch of the UK's regulation proposals, Secretary of State for Digital, Culture, Media and Sport Jeremy Wright likely had incidents like the Christchurch attacks in mind. The live streaming of these attacks on Facebook, widely seen as exemplifying the lawlessness of the internet, was viewed and shared thousands of times before Facebook removed it.

It is understandable that such profoundly shocking content has put the self-regulatory regime on the chopping block. Why did it take so long to take down the video? Are the self-regulatory codes of conduct on which industry bases its decisions serving platforms' interests over users'? 

Jeopardizing impactful initiatives

ICF conducted a study investigating how online platforms define a range of online harms and what strategies, capabilities, and incentives they have for tackling them. The platforms favored a self-regulatory landscape, citing benefits such as knowledge sharing and collaboration. These features are worth exploring, and preserving, as we consider more robust regulation.

First, knowledge sharing. Our discussions with the industry suggested that self-regulation encourages the open exchange of information and fosters an iterative approach that improves the effectiveness of strategies to tackle harm. Most of the platforms we spoke to consult with experts and civil society organizations to make sure different perspectives inform the design of their policies and processes. 

Knowledge sharing even extends to competitors. Platforms reflected that sharing best practice and knowledge with competitors encourages them to get involved in more significant and sustained collaborative networks, forums, and initiatives targeting a given harm. This is particularly true where there is a consensus on the real-world impact of the harm and on its legal status and definition.

The Technology Coalition and Global Internet Forum to Counter Terrorism were cited as industry-led initiatives that help bring together a variety of actors to work with a shared focus on eradicating online child sexual abuse material, exploitation, and terrorism at scale. Such voluntary initiatives and associations are particularly good for smaller companies. Where there is a lack of in-house technical or financial capacity to tackle online harms effectively, the support and expertise provided by collective action can be game-changing. 

Our study uncovered concern that a one-size-fits-all approach might eradicate existing solutions tailored to distinct harms or be incompatible with different platform types. Platforms suggested that prescribing exactly how harms must be moderated could lead to resources being reallocated away from initiatives that are already successful. 

Singling out success factors

If the era of self-regulation truly is over, how do we avoid losing its positive features and triggering unintended consequences that could compromise the internet? As Andrew Sullivan says, "there's a lot of baby in the bathwater."

There are two principal ways to reduce the risk of breaking what we seek to fix. The first is to mobilize government, tech companies, civil society, and law enforcement in a concerted collective effort to make the internet a safer place. The second is to make sure that the harms being targeted by regulation are understood and well-defined.

If harms whose legal status is ambiguous are to be brought into the regulatory fold, the definitions of those harms need to be clear. The UK aims to tackle three categories of harm: clearly illegal harms, legally ambiguous harms, and harms that are lawful but harmful when accessed by children. Unless there is empirical evidence concretely confirming that actual harm is being experienced, the risk in crafting such robust regulation, according to Dr. Victoria Baines of the Oxford Internet Institute, is that "responses may be emotionally driven...and beliefs and anecdotes...taken to be representative."

Achieving the UK's ambition 

The UK government has set itself an ambitious challenge in attempting to be the first to address a comprehensive spectrum of online harms in a single and coherent way. But is it doable? Does it make sense to try to homogenize harms in this way when most progress is being made by single-harm action? What outcomes can we expect over time, given the results emerging from regulation in Europe and further afield?

There's a lot riding on this new regulation. It is intended to make the UK both the safest place in the world to go online and the best place to start and grow a digital business. Now is the time to gather the evidence, actors, and insights needed so that the final regulation will be the first to achieve its intended purpose and protect the internet's core nature.
 
Meet the authors
  1. Simona Milio, Director, Public Policy
  2. Kate Regan, Researcher, Public Policy
