30 Sep 2024

The Final Report of HLAB-AI: our analysis and thoughts

The Final Report of the UN High-level Advisory Body on Artificial Intelligence (HLAB-AI) was published on 19 September 2024. 

It marks the conclusion of a yearlong process, set up to undertake analysis and advance recommendations on the international governance of artificial intelligence. GPD has closely supported the Advisory Body members’ work throughout the process. We welcomed the HLAB-AI’s work as a potentially generative and useful opening for the UN to take leadership of coordination efforts within the currently fractured ecosystem of AI governance. 

The Final Report follows the Interim Report, which was published in December 2023. While the identified principles remain the same, the Final Report somewhat departs from the Interim Report’s structural design: de-emphasising its focus on the importance of normative coordination in favour of international cooperation. In spirit, it feels more like a ‘plug-in’ proposal than the ambitious intervention promised by the Interim Report. 

In this summary, we briefly run through the general direction of the report, and provide a few top-level thoughts on its contribution to the current AI governance landscape. Below this, we present a full analysis of all the recommendations contained in the report.

 

Key messages of the report

A central message of the Final Report is its emphasis on the “global governance deficit” in AI. The report highlights that the current “patchwork of norms and institutions is still nascent and full of gaps”—exemplified by the exclusion of entire regions from international AI governance discussions, raising the risk of creating “disconnected and incompatible AI governance regimes.” Additionally, the report argues, the UN’s fragmented approach, due to the specific mandates of its entities, fails to address AI governance comprehensively.

The report underscores the disparity in representation among states involved in AI governance, pointing out that no high-performance computing clusters are hosted in developing countries. This demonstrates the scale of the challenge in ensuring equitable access to advanced AI resources. To mitigate this, the report advocates for supporting distributed and federated AI development models to bridge the AI divide.

Data-related issues also feature prominently in the report, including the misuse of, and lost opportunities around, data for AI, and the lack of data reflecting the world’s linguistic and cultural diversity, which contributes to AI bias. The report calls for shared resources, such as open models, to address these gaps and promote inclusivity.

 

Our analysis

The report presents a series of proposals and recommendations to create a coherent framework for global AI governance, addressing the multifaceted challenges and opportunities AI presents. We welcome the inclusion of some of these proposals. 

However, the final report departs from the original premise of “form follows function” embraced in the Interim Report, which intended to interrogate the specific functions required to provide robust AI governance and set a roadmap for when and how to advance its implementation. Instead, the Final Report focuses on mechanisms for filling gaps within the existing patchwork. Such a piecemeal approach cannot provide the procedural and substantive elements necessary to ensure the achievement of the intended outcomes. Nor will it strengthen existing AI governance initiatives, which poses the risk of making the mechanisms proposed by HLAB-AI irrelevant due to the currently crowded AI governance landscape.

Normative coordination, which was correctly highlighted as critical in functions 2 and 3 of the Interim Report, receives notably less emphasis in the Final Report, replaced by a focus on the urgency of international cooperation. This is, in our view, an error if the aim is building a progressive and strengthened path for accountability in AI governance. After all, normative harmonisation—anchored in human rights standards and bodies—sits at the core of the UN mission. We believe that the report’s proposals could be improved by better aligning with the UN’s expertise and clearly prioritising actions. We continue to believe—as highlighted in our previous research offered as input to the HLAB-AI’s work—that institutional capacities for evidence-based and multidisciplinary risk monitoring and harmonisation of standards should be established prior to the facilitation of access.

The final report centralises the majority of its newly proposed entities and processes within a single office, building on the existing Office of the Technology Envoy and anticipated to be based in New York. This, when read together with the recently adopted Global Digital Compact (GDC), raises serious concerns about a departure from the existing landscape of digital technology policymaking, which is characterised by the multistakeholder precept set out in the Tunis Agenda and by a distributed ecosystem of UN institutions and multistakeholder forums and venues. While all of these entities individually face challenges around stakeholder engagement, the overall ecosystem, by virtue of its decentralisation, provides a range of avenues for non-governmental actors to engage and shape outcomes. By comparison, a more centralised—and, potentially, more closed and opaque—entity risks hindering non-governmental engagement. 

The final report of the HLAB-AI provides a comprehensive overview of the challenges and gaps in the current global AI governance landscape, emphasising the need for cohesive and inclusive strategies. While several of its recommendations are promising and could potentially advance international AI governance, we have concerns regarding their implementation and alignment with existing human rights frameworks and the bodies charged with overseeing compliance with those frameworks. The desire to fill gaps in AI governance should not sideline a proper consideration of which governance functions the UN is best placed to fulfil. Such an approach risks creating yet another initiative with too little buy-in to be impactful in global AI governance. 

These recommendations therefore require further refinement to ensure they effectively integrate human rights and facilitate meaningful international cooperation in an open, transparent and inclusive way. A more robust focus on normative coordination and human rights standards will be crucial for achieving a proportionate and effective global AI governance system.

 

Next steps

The just-adopted Global Digital Compact has provided direction on how a number of the concrete entities proposed by HLAB-AI will be taken forward. In a process similar to the one used to elaborate the Compact itself, the General Assembly will appoint two co-facilitators to identify “through an intergovernmental process and consultations with other relevant stakeholders” the terms of reference and modalities for the establishment and functioning of the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance.

This process will commence during the General Assembly’s 79th session (September 2024 – September 2025). It will hugely benefit from input by human rights groups and those communities most impacted by AI applications. GPD will continue to closely engage with and provide updates on this process.