30 Jul 2024

What would a human rights-based approach to platform regulation look like?

In the past few decades, the landscape of online platform regulation has undergone significant changes. For many years, online platforms were largely left outside the scope of government regulation, based on claims from companies and policymakers that platforms could successfully self-regulate while promoting free expression and innovation, and that regulation would threaten the open and interoperable nature of the Internet as we know it.

This normative approach has since evolved due to a number of factors. One is the shifting perception of key stakeholders, who have observed growing impacts on democracy and human rights from behaviours on online platforms (as well as from the policies platforms adopt to address these behaviours). In response, the international community began to challenge the existing paradigm and assert that the online ecosystem required more oversight and collective action—contending that more stringent obligations were needed to protect user data and human rights, and to ensure fair competition. Various governments have since developed and introduced laws and policies that impose further requirements on online platforms, such as on transparency and accountability.

This trend in regulatory efforts is observable across the globe. Sometimes it takes the form of comprehensive regulatory frameworks, such as the EU Digital Services Act (2022), the UK Online Safety Act (2023) or Singapore's Online Criminal Harms Act (2023). But platform regulation also manifests through more specific laws: for example, on disinformation, telecommunications and media, competition, and cybercrime. While companies continue to develop their own internal policies, notably on content governance, these efforts are very much informed and guided by binding requirements in relevant jurisdictions, global human rights standards such as the UN Guiding Principles on Business and Human Rights (UNGPs), and voluntary commitments such as the Global Network Initiative Principles.

Concerns around harms originating from behaviours on online platforms are valid, especially when they pertain to forms of illegal and harmful content, such as hate speech and disinformation. We acknowledge the need to address them. However, while many legal frameworks that seek to address these concerns are genuinely aimed at protecting individuals or have other legitimate purposes, there are cases in which government efforts—notably in authoritarian contexts—have been directed at the restriction of legitimate expression, the suppression of dissent and the marginalisation of groups. In other cases, government efforts have inadvertently brought about the same result. Accordingly, it is equally important to understand and mitigate any risks to the exercise of human rights that can arise from platform regulation itself, whether they stem from well-intentioned measures or deliberately repressive ones. This can help ensure that states uphold their obligations to respect, protect and promote human rights online, and that companies uphold their responsibility to respect human rights, particularly freedom of expression, the right to privacy and the right to non-discrimination.

In the following sections, we will explore these impacts and set out some high-level principles for a rights-respecting approach to online platform regulation and content governance.


Terminology

Before exploring how platform regulation is relevant to human rights, it is necessary to first set out what we mean by online platform regulation, and how this differs from content governance and other related terms. 

Online platform regulation broadly refers to the legal and policy frameworks designed by states to address the challenges posed by digital platforms—often in the form of binding requirements regarding transparency, or regarding how platforms handle user content and personal data.

Content governance focuses on the policies and practices that platforms themselves undertake, such as automated content moderation and the enforcement of community standards. 

There are additional terms such as platform governance, which is a broader concept encompassing both regulatory and internal policies that govern how platforms operate. Meanwhile, information integrity, another related term, generally refers to the reliability, authenticity and protection of information online. 


Relevance to human rights 

The proliferation of illegal and harmful content online, and the policies and actions of platforms themselves, both pose clear risks to human rights. Regulatory efforts by states are often legitimate and necessary, as they aim to tackle the proliferation of illegal and harmful content driven by platforms’ economic incentives and the outcomes of their internal policies, and to ensure that platforms handle user data in an appropriate manner.

On the other hand, it is important to highlight how governments can co-opt platform regulation for their own repressive ends, or succumb to ‘mission creep’ in their legitimate efforts. This can happen, for example, when regulation imposes overly broad and vague provisions that chill expression, stifle dissent, and control the flow of information under the pretext of maintaining security or public order. Even well-intentioned but inappropriately designed online platform regulation can pose significant risks to human rights.

  • The right to freedom of expression (Article 19, ICCPR): States can use platform regulation to criminalise dissent and censor legitimate expression online, particularly in authoritarian contexts. Regulation may also inadvertently lead to the removal of permissible content as companies ‘overblock’ in order to avoid penalties.
  • The right to privacy (Article 17, ICCPR): States may invoke legitimate aims of addressing harmful content such as disinformation to demand greater access to personal data. Provisions that seek to weaken encryption may pose further risks, resulting in infringements on the right to communicate privately and facilitating state surveillance. 
  • The right to non-discrimination (Article 26, ICCPR): Platform regulation can pose risks to the right to non-discrimination when automated processes and content moderation result in disproportionate impacts on particular groups. 
  • The right of peaceful assembly and freedom of association (Articles 21 & 22, ICCPR): Platform regulation can pose risks to these rights by requiring platforms to remove or restrict content used to coordinate social movements, a power that can be deployed to silence activists, political opponents or grassroots movements. This can also happen through regulations that mandate the collection of user data and facilitate surveillance.
  • Economic, social and cultural rights (ICESCR): Platform regulation may pose risks to economic, social and cultural rights by restricting access to economic opportunities and education, or imposing limitations on the right of everyone to take part in cultural life and enjoy the benefits of scientific progress. 


What would a human rights-based approach look like?

A human rights-based approach to platform regulation and content governance ensures that human rights such as freedom of expression, privacy, and the right to non-discrimination are safeguarded, ultimately fostering a digital environment that protects individuals. 


Principle 1: Align with international human rights law and standards

Approaches to platform regulation should be grounded in international human rights law and standards. States have an obligation to respect, protect and promote human rights, as enshrined in international treaties such as the ICCPR and ICESCR. This means that any measures must be designed and implemented in a way that upholds these obligations. Furthermore, companies have a responsibility under the UNGPs to respect human rights, which involves undertaking due diligence to prevent and address any adverse impacts their operations may have on the enjoyment of human rights. The UNGPs emphasise the importance of conducting human rights impact assessments (HRIAs) as part of this due diligence process. HRIAs help to identify, evaluate and address potential risks to human rights associated with business activities, ensuring that online platforms not only mitigate negative impacts on human rights, but also contribute positively to the enjoyment of human rights through their operations.


Principle 2: Be clear and proportionate, and pursue legitimate aims

Any policies or restrictions on content or activity online should be explicitly set out, ensuring that rules are consistent with international human rights law and standards, and thus clear for users. States should ensure that regulation does not incentivise or unduly pressure platforms to remove content which is permissible under international human rights law. It is critical that regulatory measures pursue legitimate aims, such as addressing illegal activity and safeguarding human rights, and are designed in a way that avoids excessive interference, reflecting a proportionate approach.

This should align with the requirements for permissible restrictions on freedom of expression set out in the “three-part test” under Article 19 of the ICCPR: restrictions on expression are only permissible if they are provided by law, pursue a legitimate aim, and are necessary and proportionate to achieving that specific legitimate aim (i.e. they must be the least restrictive means of accomplishing it).


Principle 3: Take an inclusive, holistic and multistakeholder approach

States should develop online platform regulation in an inclusive and transparent fashion, conducting adequate research on the potential impacts of policy approaches on particular groups of people. The development and enforcement of policy should be informed by meaningful consultation with all stakeholders, including the private sector, civil society, academia, the technical community and others. It is vital to be mindful of the potential negative impacts platform regulation may have on groups in vulnerable situations, and ensure that systems embrace and foster diversity. A holistic approach involves considering not just individual or singular violations of human rights, but collective impacts as well.


Principle 4: Require transparency from online platforms

Regulation should require greater transparency from online platforms around their policies and services, data handling practices, and decision-making processes. This allows states and users to understand the risks platforms may pose to human rights, and to hold them accountable for ineffective or discriminatory practices, such as disproportionate content moderation. Online platforms should be required to develop fair, straightforward and transparent oversight mechanisms for removal requests and appeals, in line with the Santa Clara Principles on Transparency and Accountability in Content Moderation.


Principle 5: Ensure accountability and user redress

Regulation should ensure that online platforms are accountable for their actions and decisions. This involves regular reporting on the enforcement of policies, providing justification for content removal or user bans, and allowing independent audits of their practices. It is also critical that any sanctions levied against platforms for non-compliance are proportionate, subject to judicial review, and do not incentivise activities that pose further risks to human rights.

Regulation should integrate robust mechanisms for user redress, including a requirement for platforms to provide clear and accessible means for users to challenge and appeal decisions relating to content moderation, data usage or other policies. This requires clear communication and accessible complaint mechanisms, so that a diverse range of users are empowered to use them. Users should also be able to control their online presence more broadly, including through privacy tools and customised content curation.


Moving forward

For years, Global Partners Digital has been at the forefront of addressing the complex challenges associated with platform regulation, content governance, information integrity and disinformation. Our work emphasises the importance of a rights-respecting and inclusive approach to these issues: one which is grounded in the international human rights law framework, and embraces a multistakeholder approach at the global, regional and national levels. 

GPD’s recent efforts on platform governance include supporting the development and implementation of international guidance, such as our contributions to UNESCO’s Guidelines for the Governance of Digital Platforms (see our thoughts on the final text), and promoting the outputs of the OHCHR’s B-Tech Project on the implementation of the UNGPs. We have also been active in framing the discussion around information integrity, providing submissions to the UN Global Principles for Information Integrity, and we continue to advocate for rights-respecting approaches to specific forms of content, such as disinformation, through resources on disinformation and human rights and our LEXOTA interactive tracker.

This engagement has also extended to regional and national efforts, notably on the UK Online Safety Act, where we worked alongside other civil society organisations to advocate for rights-respecting outcomes. Much of our work has taken place in collaboration with partner organisations, including regional events: in the LATAM region, for example, we discussed the UNESCO Guidelines and how they can support a rights-based approach to platform regulation discussions. We’ve also unpacked more granular topics and examined the impacts of translating global North frameworks and provisions into new contexts in the global Majority: for example, on the issue of data access for researchers.

For more insight and resources on platform regulation, content governance, and related issues, see our dedicated hub.