Content regulation laws threaten our freedom of expression. We need a new approach
In the wake of what has been described as tech’s ‘annus horribilis’, it has become almost unfashionable to speak of online platforms as enablers of freedom of expression and other human rights. But it remains unquestionably true that they are. Many of the most successful social movements of the last ten years, from the Arab Spring to Ferguson, #metoo and the recent mobilisations around gun violence in the US, have relied on the unique mobilising capacities of platforms like Facebook and Twitter. And in countries where public expression is heavily censored, they remain an indispensable tool – even a lifeline.
It’s also true that a proportion of the content uploaded to these platforms is harmful and illegal – whether that’s hate speech and incitement to violence, like the kind which may have stoked recent communal violence in Sri Lanka, child pornography, or the everyday harassment which is a distressingly common feature of the online experience for women and the LGBT+ community. It’s important to emphasise that these are manifestations of problems which already exist in the offline world, rather than intrinsic features of online platforms themselves. But as hosts and facilitators of that content, platforms have a responsibility to make sure it is dealt with appropriately, while also ensuring that their users’ freedom of expression is respected.
The record of platforms on this count has been patchy. In some cases their systems have proved unresponsive, with Twitter users reporting feeling ignored when they report harassment or threats. In other instances, they have been all too heavy-handed – erroneously taking down legitimate content, or banning accounts without offering any explanation or opportunity to challenge the decision, undermining users’ freedom of expression. Often these cases have only been remedied after public outrage or attention, and little is disclosed publicly about the internal workings of platforms’ decision-making, which compounds the problem.
In the absence of leadership and accountability around content moderation from platforms, governments have been stepping in. In the last year, we’ve seen legislation like Germany’s Network Enforcement Act (or NetzDG), which requires online platforms with more than two million registered users to remove content which is ‘manifestly unlawful’ within 24 hours, and exposes those that fail to comply to fines of up to €50 million. At the EU level, the Home Affairs Commissioner has demanded that platforms take down illegal content within two hours and suggested that the Commission would introduce legislation if platforms fail to do so voluntarily. In the UK, a Home Office Minister proposed taxing online platforms which fail to take down ‘radical’ or ‘extremist’ content.
Regardless of the motivations behind these measures, they pose grave risks to freedom of expression. The tight time limits and the threat of heavy penalties create incentives for tech companies to err on the side of caution and remove content. Since the introduction of NetzDG, there have been several high-profile examples of Twitter removing controversial or satirical – but lawful – tweets, with one of the users affected pointing out that before the law came into force she had “tweeted things that were significantly more extreme” without being blocked. In Malaysia, a new law which criminalises ‘false news’ has already seen a Danish citizen imprisoned for ‘inaccurate’ criticism of the Malaysian police.
From our perspective, there is a pressing need for a model of content regulation by platforms which respects users’ right to free expression, while also meeting the legitimate interest of both governments and users in having unlawful and harmful content removed. It’s clear that neither pure self-regulation by platforms nor government intervention is capable of delivering this. In a white paper published today, we propose and outline an alternative model.
The way the model would work is simple. First, interested online platforms would appoint a set of independent experts, who would, following a multistakeholder consultation, develop a set of Online Platform Standards. These would set out, among other things, how platforms should respond to harmful content, establish rules on accountability and transparency, and require grievance and remedial mechanisms to be put in place so that users can challenge decisions. Adherence to these Standards would be monitored by a global multistakeholder oversight body, comprising representatives from online platforms, civil society organisations, academia and, potentially, relevant national bodies such as national human rights institutions. Platforms that failed to meet the Standards would be publicly called out and provided with recommendations for improvement.
While this might seem like a radical idea at first glance, in practice many sectors with an acknowledged public interest – from the media to utilities – already employ certain forms of independent oversight. It’s also worth noting that many big tech companies are already signatories to more limited forms of oversight, like the Global Network Initiative, and some platforms individually already do some of the things we suggest, such as publishing transparency reports on content removals.
The potential advantages, as we see them, are obvious: an online space where the right to freedom of expression is protected and the challenges of unlawful and harmful content are addressed, underpinned by greater transparency and accountability from the platforms themselves. We do not expect everyone to agree with our approach to online content regulation, nor do we intend to suggest that it is the only one available. Above all, we hope to stimulate debate and discussion in a field where progress is urgently needed.