02 Sep 2024

Is the Online Safety Act “fit for purpose”? Thoughts on its application in the recent UK riots

By Ibrahim Chaudry and Maria Paz Canales

In July, the United Kingdom experienced a wave of racially charged riots following a mass stabbing in the town of Southport. Among multiple contributing social and political factors, one clear driver was disinformation spread via social media. 

In the aftermath, several British politicians have called for reforms to the recently passed Online Safety Act 2023 (OSA), which imposes a set of responsibilities and duties on online platforms. Notably, London Mayor Sadiq Khan labelled it “not fit for purpose”, calling for it to be reviewed “very, very quickly”.

In response, the government has committed to “look more broadly at social media” and keep the OSA under review, though ministers have, for now, ruled out further legislative amendments that might delay its full implementation. The Home Secretary further warned that so-called “keyboard warriors” would be liable for prosecution, leading to some of the first arrests and convictions under the OSA.

The impacts of disinformation in this incident are clear, and must be addressed. But in the rush to respond, it is critical that the state’s responsibility to protect human rights is not forgotten. This is perhaps the first major test of the OSA, a controversial framework with many rights-threatening provisions. We must give due scrutiny to how it is being applied here, so that this episode can serve as a case study for its future implementation.

Below, we take a closer look at how the OSA and other statutes have been employed in this context, the issues for online expression that this raises, and some recommendations and next steps for the government.

 

What is the government toolbox for dealing with the digital incitement of the riots?

While most arrests related to the unrest have been for violent acts committed in the streets, some individuals have been charged—and even jailed—for social media posts linked to the disorder. It’s important to note that most of these charges have been brought under legislation which predates the OSA: Section 19 of the Public Order Act 1986, which criminalises the publication of material that incites racial hatred; and Section 127 of the Communications Act 2003, which criminalises sending “grossly offensive” electronic communications.

The latter law has long been particularly controversial, sparking backlash from civil liberties organisations and politicians worldwide who argue that it undermines free speech. GPD has long called for reform to Section 127 of the Communications Act 2003. The criticism around recent prosecutions has further demonstrated that the existing statute is broad and subjective, which ultimately risks stifling legitimate – even if offensive – speech.

However, several people have also been arrested under Section 179 of the OSA—a provision we have previously described as “outrageously broad”. Section 179 created a new “false communications offence”, which criminalises sending “information that the person knows to be false” with the intention of causing “non-trivial psychological or physical harm” to a likely audience (though accredited journalists are exempt from this). The initial arrests under this provision have generated substantial controversy, demonstrating the challenges in interpreting its reach.

The most prominent arrest under the OSA was over an “inaccurate social media post” that wrongly suggested the perpetrator had arrived on a small boat and was on an SIS (MI6) watchlist. Although the poster has not been charged, Cheshire Police claimed that the arrest was “a stark reminder of the dangers of posting information on social media platforms without checking the accuracy”. This has led civil liberties organisations such as Big Brother Watch to argue that the threshold for harm applied by the police under this provision – i.e. simply ‘not fact-checking’ – is too low and misrepresents the existing statute.

Other charges under this provision include a man jailed for three months after falsely claiming in a TikTok live stream that he was “running for his life” from rioters, and a rapper who wrongly claimed that Tommy Robinson had encouraged attacking mosques. Given that an individual must knowingly post a false message intended to cause “non-trivial psychological or physical harm” to commit the offence, these first prosecutions could become ‘test cases’ establishing the level of harm the authorities must demonstrate to pursue a case under the false communications offence. They also raise questions about what constitutes misinformation that is “intentionally” false rather than merely misleading, and about how to set a harm threshold in a way that protects not only facts, but also opinions. As we’ve highlighted previously, this is especially problematic for something as subjective as disinformation—for instance, the difference between satire and disinformation is often hard to discern.

 

What could be the riots’ impact on the OSA?

After some initial confusion, the government clarified that it will not immediately pursue legislative amendments to the OSA on the grounds that it would delay the introduction of existing provisions; these will place duties on social media companies to address harmful content from 2025. However, once these provisions are in force, the government has committed to a review of the OSA’s impact, for which they have not ruled out subsequent legislative amendments.

In the meantime, Ofcom, the regulator mandated to enforce the OSA’s provisions, has been tasked with drafting codes of practice, which will also constitute binding regulation from next year. Following the riots, Ofcom requested that social media companies act voluntarily to prevent “content involving hatred, disorder, provoking violence or certain instances of disinformation”. Companies can currently only do this by applying their own internal policies, as the binding codes of practice are not yet in place.

Despite the current absence of guidance, the Prime Minister criticised social media companies for their perceived failure to comply with Ofcom’s requests. This appears to be particularly aimed at X, whose owner, Elon Musk, stated that “civil war is inevitable” and promoted fabricated articles about the riots. Citing this seemingly uncooperative approach, police chiefs and senior politicians have reportedly sought tighter regulation through Ofcom’s codes of practice. This is concerning: we wouldn’t want to see Ofcom develop regulation that exceeds its remit under the OSA, as this would risk facilitating unintended restrictions on legitimate speech. Instead, Ofcom should concentrate its codes of practice solely on the issues the OSA mandates it to address.

Amid growing support for a firmer OSA, we’re particularly concerned about reports suggesting a restoration of the so-called ‘legal but harmful’ provision. During the Online Safety Bill’s passage, civil society groups consistently warned the government and Parliament against imposing obligations on platforms to remove such nebulous and ill-defined forms of content. This would give private companies extensive control over online expression in a manner incompatible with international human rights standards, and would increase the risk of over-censorship by companies threatened with fines for failing to remove harmful content. The government was right to listen to concerns over freedom of expression and drop this provision. It would be a grave error to reinstate it now, and there is no evidence it would effectively prevent future unrest.

 

What precedents could this set?

After moments of national crisis, it has become relatively common for governments to seek ‘easy’ legislative solutions that might ostensibly help prevent such events from reoccurring. However, we are concerned about the OSA becoming an ever-expanding framework that is modified with each new crisis or social unrest. The OSA already contains an extensive range of “priority content” categories which social media companies will be statutorily obligated to address, such as discriminatory or violent content. Any tendency to expand this list further without sufficient debate could lead to a gradual erosion of civil liberties, with each new category creating the potential for increased censorship of legitimate debate.

Moreover, the global implications of the OSA cannot be overlooked. Successive governments have positioned the OSA as a “world-leading” piece of legislation, and its influence is already being felt internationally. Countries including Nigeria, Brazil, and India are looking to the OSA as a blueprint for their own regulatory frameworks. If the UK’s approach to online safety is seen to permit constant expansion in response to crises, other governments may adopt similar strategies, potentially leading to a global trend of increasingly restrictive online environments.

 

Moving forward

This is a critical test for the new UK government’s approach to challenges around online platform governance, as well as for the implementation of the novel OSA framework. In responding to this unrest, we urge the government to hold fast to the principles of democracy and the rule of law, and ensure that human rights are centred and protected. 

Specifically, we urge the government to:

  • Avoid ‘knee-jerk’ policy-making: Calls to clamp down on social media platforms and content must be considered in a nuanced manner and within the context of maintaining rights-respecting legislation. A return of the ‘legal but harmful’ provision in the OSA, for instance, would be too broad and would risk inconsistent or disproportionate enforcement – especially given the criticism the government is already facing for infringing freedom of expression within the existing legislative framework.
  • Commit to thorough and proportionate enforcement of the OSA: Recent events have shown the dangers of disinformation, but have also highlighted the challenges of enforcing broad definitions: how should the OSA’s definition of the ‘false communications offence’ be interpreted, and what is the right harm threshold to apply? There is also a risk that freedom of expression might be infringed through regulation that encourages social media platforms to be overly zealous in removing content. In particular, Ofcom’s enforcement efforts through codes of practice should not be used as an opportunity to increase the range of content subject to restrictions. Even in exceptional circumstances, including social unrest, the government’s response must be guided by proportionality. Only then can freedom of expression be preserved.