The UK’s Online Harms White Paper: our response
GPD has responded to the UK government’s consultation on its Online Harms White Paper, published in April. The White Paper sets out the government’s plans to regulate social media and other tech companies in order to address concerns around harmful online content and activity.
While we recognise the government’s legitimate desire to tackle illegal and harmful online content, many of the proposals in the White Paper—including a new duty of care on online platforms, a new regulatory body, and even the fining and banning of online platforms as a sanction—pose serious risks to individuals’ rights to freedom of expression and privacy, as we noted in our initial response in April.
Since the publication of the White Paper, GPD has actively engaged with a range of stakeholders involved in the process, including government departments, regulatory bodies, human rights organisations, children’s rights and protection organisations, the academic community, industry bodies, and tech companies both large and small. As part of this, we co-facilitated a full-day multistakeholder roundtable on the proposals (you can read the report from that event, published by the Oxford Internet Institute, here).
Informed by that engagement process, and following a detailed analysis of the proposals in the White Paper, GPD has submitted a comprehensive response to the government, with the hope that it will assist the policymaking process going forward.
Our full submission makes over 40 specific recommendations on how the proposals in the White Paper should be refined and revised, as well as guidance on building in specific safeguards for freedom of expression and privacy. But here are the key points:
*
GENERAL
- After undertaking a comprehensive human rights analysis of the current proposals, we’ve concluded that, if implemented, they would put the UK in breach of its obligations under both international human rights law and the European Convention on Human Rights, as incorporated into domestic law through the Human Rights Act 1998.
- It is difficult, however, to fully assess the ultimate impact on the rights to freedom of expression and privacy, since many of the issues highlighted above remain subject to consultation and further refinement. A fully informed analysis will therefore only be possible once the actual legislation is presented. Given the novelty of this policy area, and the importance of proceeding with caution, the proposed legislation should be published in draft and subjected to pre-legislative scrutiny before any Online Harms Bill is put before Parliament.
SCOPE OF HARMS COVERED
- Some of the “harms with a clear definition”, such as harassment and disclosing private sexual photographs or films, do not, in fact, have clear definitions, and have been highlighted by the Law Commission as unclear, ambiguous or overly complex. Such forms of “harmful content or activity” should not be included in the Online Harms Bill until their definitions have been clarified.
- Further, some of the “harms with a clear definition”, such as harassment, stirring up hatred, and other hate crimes, are defined broadly in legislation, and encompass some speech and activity which is protected by the right to freedom of expression. While safeguards exist which prevent the police and Crown Prosecution Service from charging or prosecuting people in such circumstances, no equivalent safeguards are proposed when such speech or activity occurs online. This risks creating a very different regime for the application of the criminal law online as opposed to offline, with a greater amount of speech restricted, and with reduced transparency and accountability.
- The inclusion of “harms with a less clear definition” (i.e. content and activity which is “legal but harmful”) in the White Paper is concerning. Requiring online platforms to take steps to remove, restrict or otherwise moderate such content or activity would lead to two different standards of permissible expression depending on whether it occurs online or offline. If these forms of “harmful content or activity” are to be included in the Bill, it should make clear that compliance with the duty of care can be achieved without removing, restricting or moderating such content or activity, but through alternative means.
THE REGULATORY MODEL PROPOSED
- The “duty of care” proposed in the White Paper bears little resemblance to any existing understanding of what the term means, and is an inappropriate model for regulating online harms. It is highly likely to lead to online platforms monitoring all content on their platforms, and using artificial intelligence to identify and remove content, both of which pose serious risks to the rights to freedom of expression and privacy.
- With no other model under consideration, however, the Online Harms Bill should explicitly state that compliance with the duty of care does not require, and should not be interpreted as requiring companies within scope to: filter content at the point of upload, generation or sharing; generally or proactively monitor content; or use artificial intelligence or other forms of automated decision-making.
- We recognise the value that codes of practice could add in helping online platforms understand how they can comply with their duty of care. However, the codes of practice proposed by the White Paper are highly prescriptive. They include inappropriate requirements such as: requiring online platforms to prevent certain forms of content from being made available to users at the point of upload; mandating particular processes and technologies to moderate content; and setting particular timeframes for the removal of certain forms of content. These sorts of requirements risk incentivising the removal of lawful and harmless content, and so should not be included. Before any codes of practice are made binding, they should be published in draft with a full consultation process which includes an assessment of the potential risks to freedom of expression.
- The proposals for mandatory transparency reporting have the potential to enhance the right to freedom of expression by encouraging companies to develop clear terms of service which explain what content is and is not allowed on the platform, and how decisions relating to content removal and moderation are made. Good practice could then be more easily identified and adopted by other companies. Qualitative reporting requirements on steps taken to improve processes would encourage companies to make better and more consistent decisions, rather than simply to remove more content, more quickly. The mandatory transparency reporting templates should facilitate this; they should be published by the regulator in draft form and be subject to consultation before a final template is adopted.
- The scope of companies that will be subject to the regulatory framework is excessively broad. We do not consider that there is a sufficiently strong evidence base for a scope this broad, and are concerned that many types of online platform will be captured despite there being no evidence that they cause or facilitate harm. We have particular concerns about the inclusion of online platforms which provide private communication services within the scope of the regulatory framework.
ENFORCEMENT
- The existence of a regulatory body does not, in and of itself, pose any particular concerns in relation to the rights to freedom of expression and privacy. However, risks could stem from how the regulatory body fulfils its functions, particularly in relation to the content of the codes of practice it develops and its approach to enforcement. The Online Harms Bill should therefore include appropriate provisions governing how the regulatory body will operate and exercise its functions, so as to mitigate such risks. Under no circumstances should the government be able to direct the regulatory body with regard to the development or enforcement of its codes of practice, as the White Paper proposes.
- It is essential that the proposed power of the regulatory body to issue fines as a sanction for non-compliance is constrained, to ensure that it is not used disproportionately and does not incentivise online platforms to remove content which may be protected by the right to freedom of expression. We have serious concerns about the further powers under consideration, namely: the imposition of civil or criminal liability on senior managers of online platforms; the power to compel ISPs to block websites; and the power to disrupt business activities. We do not consider that any safeguards could mitigate the risks to freedom of expression that such a sanctions regime would create.
*
NEXT STEPS
The government has announced that it intends to set out its response to the consultation feedback by the end of the year, and that this will be followed by the introduction of legislation in 2020. We will continue to engage closely and constructively with the process around the UK government’s plans for online content regulation.
For regular insight and updates on the Online Harms White Paper, sign up to our monthly Digest.