Initial thoughts on Facebook’s “Blueprint for Content Governance and Enforcement”
As the world’s largest social media platform, and one of the world’s largest companies, Facebook unsurprisingly attracts a great deal of scrutiny over its actions – and, in particular, over how it moderates content on its platform.
Criticism comes from many angles, with the platform accused of taking too much content down, of not taking enough down, and of failing to be sufficiently transparent about how it makes removal decisions in the first place.
In response to these criticisms, Facebook’s CEO Mark Zuckerberg has announced a series of new measures the company will take, under the title of a “Blueprint for Content Governance and Enforcement”. The plan is wide-ranging, and commits to significant changes to many aspects of Facebook’s policies and processes – including a new approach to enforcement of its Community Standards (the rules that set out what kind of content is and is not allowed), new opportunities for individuals to appeal decisions, and a new mechanism for independent governance and oversight of those processes.
At GPD we’ve been calling for action in all of these areas since the launch of our white paper on content regulation by platforms earlier this year, so we received news of the Blueprint with considerable interest. Below, we take a closer look at what’s in the proposals – what’s welcome, what isn’t, and where questions remain.
A good start…
With the caveat that more detail is needed on the proposals, three aspects can be broadly welcomed. The first is the set of new proposals for greater transparency over how the Community Standards are developed and enforced. In the Blueprint, Facebook makes two specific commitments towards this: first, to publish the minutes of all meetings where policies on content are determined; and second, to add additional metrics to its transparency reports, including on the rate of mistaken decisionmaking and the speed of actions. By the end of next year, these reports will be published on a quarterly basis.
The second is Facebook’s plan to address algorithmic biases which treat people unfairly. As we note below, we have serious concerns about the use of artificial intelligence and algorithms in content moderation. At this stage, even the bare recognition that these tools are imperfect, and may perpetuate offline biases, is a welcome signal from Facebook.
Perhaps most welcome of all, however, is Facebook’s intention to create a new, formalised appeals process for content removal, with a commitment to greater transparency when decisions are made and to providing more detail on whether Community Standards were breached, and why. There is also a commitment to establishing a new, independent body to which individuals could appeal content decisions, and whose decisions would be transparent and binding. Facebook has opened a consultation to examine questions around the composition of the body, how its members will be selected, how its independence will be assured, and what criteria it will use to pick cases.
The creation of this body would be a significant step forward, and is something which many organisations, including GPD, have been calling for. In our white paper published earlier this year, we recommended that platforms jointly establish and fund a new global oversight mechanism – which we named, speculatively, the Independent Online Platform Standards Oversight Body – which would have the power to develop Online Platform Standards relating to content moderation.
While the proposals in the Blueprint relate solely to Facebook’s Community Standards, and focus on the enforcement of Facebook’s own standards rather than the development and monitoring of independently established rules on content moderation, the recommendations we make in the white paper remain relevant, and warrant attention by Facebook:
- The body should be multistakeholder and comprise representatives of civil society organisations, academia and, potentially, relevant national bodies such as national human rights institutions; consideration should also be given to the need for representatives of different regional and cultural groups;
- The appeal process should be clearly accessible to users, with sufficient information on how to use it, and assistance should be provided to those who may face particular barriers to access related to language, disability or otherwise;
- The body’s decisionmaking process should be predictable, with clear information on how it will make its decisions, an indicative time frame, and the available remedy (or remedies) if the appeal is successful;
- The remedies available to users who succeed on appeal should be effective. This may simply mean the reinstatement of the content; however, other remedies should also be available, such as compensation, a public apology, a guarantee of non-repetition, or a review or reform of a particular policy or process;
- The body’s work should be transparent, and it should be able to publish reports on its decisions, as well as recommendations to help Facebook ensure the non-repetition of incorrect decisionmaking.
But questions remain…
Alongside these positive aspects of the Blueprint, there are some which raise concern. Three in particular jump out. First, and disappointingly, the proposals relating to enforcement of the Community Standards rely largely on the use of artificial intelligence to determine what is “harmful content”. The limits of artificial intelligence when it comes to analysing content are well known, and are even acknowledged by Facebook itself in the Blueprint. Certain forms of content – particularly content relating to minority groups – tend to attract high error rates from algorithms. While the sheer scale of content to be reviewed makes the use of artificial intelligence unavoidable, at the very least Facebook should be clear that no content will be removed without human involvement at some point, in order to mitigate the risks associated with its use.
Second, Facebook’s plans to deprioritise “borderline content” remain unclear. While algorithms will be tweaked to reduce the “distribution and virality” of this category of content, there is no exhaustive definition of what exactly borderline content is. The Blueprint refers to “more sensationalist and provocative content”, such as clickbait and misinformation, as well as photos featuring revealing clothing or sexually suggestive positions, and offensive posts that don’t meet the threshold of hate speech. However, greater clarity is needed on precisely what falls into this grey area, given that the right to freedom of expression includes expression which is provocative, shocking, disturbing and even offensive. While Facebook does, eventually, plan to give users more control over whether they see borderline content, this will only happen once “artificial intelligence is accurate enough to remove it for everyone else who doesn’t want to see it” – which, given the technology’s limitations, could be some time away.
Third, the Blueprint concludes by accepting the need for regulation of platforms, and commits to “working with the French government on a new approach to content regulation”, as well as with others, including the European Commission, in the future. While we promote multistakeholder policymaking on issues related to the internet (which would, of course, include tech companies), it is important that such policymaking includes all relevant stakeholders and is open and transparent. Decisions made behind closed doors between a government and a single company do not meet these criteria; it is therefore critical that any policy processes include other actors, and are based on principles of openness and transparency.
Conclusions
In its current form, there’s much to welcome in the Blueprint – but areas of concern remain, and more detail is needed. Facebook’s consultation on its new governance mechanism is an encouraging sign that it is prepared to listen.
Given the wide-ranging human rights impacts of online content regulation, it’s crucial that this consultation process – as well as the new oversight body itself – is as open and inclusive as possible, so that the full range of affected stakeholders and user groups is able to participate.