2025: a stress test for the multistakeholder model?
For those of us advocating for human rights-based digital policy, 2024 was a difficult year. An unprecedented “race to governance” saw states battling to assert sovereignty over digital technologies, heralding a sharp multilateral turn in the field, evident in processes from the Global Digital Compact (GDC) to the UN Cybercrime Convention. Coupled with geopolitical and technological shifts, this made engaging on a fraught and narrowing terrain as a human rights defender challenging, even exhausting.
But exhaustion is not an option in 2025. Decisions soon to be made in processes including the World Summit on the Information Society 20-year Review (WSIS+20) and GDC implementation could substantially shape the future of digital technologies governance. In these discussions, the continuance of the multistakeholder approach—the bedrock of the checks and balances that protect our rights—will come under direct question, along with the wider application of international human rights institutions and frameworks to digital technologies.
Will the future of the Internet and digital technologies be distributed, horizontal and human-centred, or centralised, monopolised and extractive? This question has always latently shaped our collective engagement in these debates. That it is now being tabled explicitly at the global level is both alarming and potentially generative. This is a moment for human rights defenders from the global Majority and North to unite and coordinate around a shared vision of a plural, inclusive and rights-respecting digital policy environment. It is a moment to innovate and adapt, using available openings to unite on ensuring human-centred outcomes while navigating complex inter-state and cross-stakeholder dynamics.
In hopes of opening discussion and constructive reflection, we explore below some of the dynamics, trends and directions we expect to shape the field in 2025.
Digital governance foundations under revision
As we set out above (and in our explainer from late last year), WSIS+20 should be an urgent preoccupation for anyone concerned with digital policy. The Review will see states and stakeholders putting the WSIS framework, outcomes and Action Lines, its broad vision of people-centred development, and the institutions it created, like the Internet Governance Forum (IGF), under close scrutiny.
In parallel, the new UN Office for Digital and Emerging Technology (ODET) will oversee the follow-up and implementation of the GDC, including new entities for AI governance, with various (though still uncertain and likely limited) opportunities throughout the year for stakeholder engagement.
Across both forums, there is an opportunity to build outcomes that actually strengthen the implementation of international human rights standards, including through enhanced normative coordination with UN human rights mechanisms like the OHCHR. It’s also a moment to think about how structures like the IGF can be permanently embedded, and the multistakeholder approach and principles more broadly and effectively applied in digital policy discussions.
At the same time, we need to be alert to potential efforts within these processes to undermine the multistakeholder approach, whether by questioning its effectiveness or by asserting sovereignty through a multilateral push. See our WSIS explainer for more detailed guidance on the opportunities and risks anticipated within the Review.
Growing scrutiny of digital monopolies
The global digital technology ecosystem remains markedly unequal, with a third of the world still lacking meaningful access to the Internet. In addition, a handful of mostly US-based technology companies control every layer of the digital stack—from the pipes and cables which transport data internationally, to the platforms we use to interact with one another. While this centralisation of ownership can offer efficiency and expand connectivity, it also risks stifling local digital ecosystems, leading to oligopolies and an erosion of sovereign control over essential services and critical resources. Concern about the power of these globe-spanning corporations, and their ability to destabilise the political landscape, has led to a push by many states to exert greater control over the governance of digital technologies.
Some governments have responded with antitrust regulations—for instance, the EU’s Digital Markets Act. Others, particularly in the global Majority, look to international bodies to ensure access to technologies and reset the global balance of power. We expect concerns about the concentration of power in companies based in a handful of global North countries to influence the WSIS+20 review process over the next year, and to play a central role in conversations on global governance of AI.
The human rights impacts of this trend are perhaps best understood in aggregate, and GPD will continue our work to monitor, analyse and contextualise developments across a broad range of policy forums over the next year, with the aim of supporting collective civil society thinking around digital policy.
Increasing politicisation of standards-setting bodies
States are increasingly using technical standards-setting bodies (TSSBs) like the International Telecommunication Union (ITU) to push their own national visions of a more centrally controlled Internet and digital technologies.
These efforts are contributing to Internet fragmentation: the splintering of the open, global Internet into separate, isolated networks. As we set out in our explainer, an open, global and interoperable Internet creates an environment in which information can be accessed, shared and created seamlessly across borders. When access to information is impeded, the consequences cascade to individual rights to privacy, free expression, health and freedom of association, among others.
For a long time, TSSBs have run on a parallel track to policy-oriented Internet governance discussions and, based on a presumption of political neutrality, have rarely addressed human rights directly. The multilateral nature of some TSSBs (like the ITU), or the highly technical language they employ (as in the Internet Engineering Task Force [IETF]), makes these forums less open to civil society participation, and only a few groups have historically engaged there. This has to change: these forums are now urgent sites of geopolitical contestation over the future of the Internet and digital technologies, and the standards they develop can have wide-ranging impacts on human rights. For instance, the IETF’s standardisation of HTTPS (HTTP over TLS) strengthened the privacy and security of personal data in transit, and encrypted connections are now the default expectation, especially when using services like online banking.
A key strategic focus for us this year will be supporting groups, particularly from the global Majority, to push back against fragmenting efforts in TSSBs, as well as advocating for inclusive and multistakeholder approaches. See our recent case study of engaging at ITU-T Study Group 13 for an illustration of the challenges (and opportunities) inherent in this effort.
The ‘AI hype’ evolution
In policy and regulatory discussions of AI, we’ve seen a significant shift away from narratives of existential threat, towards a greater focus on present-day risks and the redistribution of the benefits of AI as an impetus for policy intervention. However, it’s still unclear whether recent and ongoing AI governance efforts—such as the newly adopted Council of Europe AI treaty, or incoming institutions, like the Independent Scientific Panel on AI—will address growing concerns relating to human rights, uneven access, and geopolitical inequalities.
2025 may, in many ways, shape how AI is designed, developed and deployed, and determine whether new initiatives can genuinely fill existing governance gaps through effective coordination of what already exists, rather than through duplication. This will require more concerted efforts by the international community, particularly civil society, to ensure that proportionate and rights-respecting approaches are adopted, including through the design and practices of new institutions and bodies, and through AI standards development consistent with international human rights standards.
Digital sovereignty expanding to infrastructure
Digital sovereignty broadly refers to an approach to policymaking which seeks to protect a country’s security, independence and economic interests. Actions under this rubric are often associated with repression at the content layer of the Internet: digital sovereignty is, for example, an oft-cited justification for Internet shutdowns.
This term is expanding to encompass regulatory efforts by states to control infrastructure and data. For example, the key resources needed to develop AI, like semiconductor chips, are concentrated in a few firms, and the infrastructures needed to develop AI span borders. This has led to a push for ‘sovereign AI’—the development of the resources, capabilities, and infrastructures needed for countries to create their own AI industries or even systems.
This turn to sovereignty is a further expression of concerns about unequal power distribution and a loss of control. Over the next year, we can expect governments to increasingly place the development and operation of key digital technologies in the hands of national actors. This reflects their growing reliance on technology to deliver essential services, as well as to maintain economic competitiveness globally.
The digital public infrastructure narrative inherits and expands the digital transformation strategy that started in the global Majority a few years ago as a springboard for economic growth. Here, the lines between public and private are becoming increasingly blurred, and this has huge significance for civil society. As governments increasingly rely on and are involved in the development of certain technologies, the ability of civil society actors to push back against harms to human rights will depend on openings available to participate in and shape these processes.
Novel technologies (and existing solutions)
New and emerging technologies besides AI, particularly quantum computing and neurotechnologies, will be on the policy agenda this year as stakeholders aim to better understand and address their potential positive and negative impacts. While quantum computing is unlikely to pose insurmountable challenges to encryption in the near future, we are already seeing the risks neurotechnology poses to the rights to freedom of opinion and expression, freedom of thought, non-discrimination and privacy. The UN General Assembly has officially declared 2025 the International Year of Quantum Science and Technology, and efforts to regulate neurotech at the national level are underway, notably in Latin America.
These efforts highlight the need for approaches that are evidence-based, equitable, grounded in human rights and informed by all stakeholders, particularly those from outside the global North. We are hopeful that lessons can be drawn from efforts around AI, recognising that while technological innovations may pose novel challenges, these challenges do not necessarily require novel solutions. Existing holistic approaches, such as comprehensive data protection laws, should be considered.
How does civil society engagement need to evolve to grapple with this landscape?
Amid the shifts and uncertainties in digital governance explored above, civil society must critically evaluate how to operate effectively across an increasing number of forums and issues.
A key consideration is the increasing centralisation of digital policy discussions in New York, which historically has had less expert presence on digital issues and human rights-based approaches. This presents challenges for access and representation: practically, in terms of visas and budgets for attending meetings, but also because diplomatic processes there tend to be less multistakeholder in nature. These challenges most acutely affect lesser-resourced stakeholders, such as civil society groups, which will need to be more coordinated, agile and dynamic than ever to make the most of finite resources and limited opportunities to engage. Playing a connective role (bringing voices to forums, linking groups, keeping them updated and facilitating stakeholder collaboration) will be crucial in ensuring inclusive participation and meaningful representation of global Majority voices.
Ensuring the continuation of the multistakeholder approach is central to this. This means acting as both its critical friend and ardent champion: reflecting on how it can be optimised for different policies and contexts, while rigorously defending its proven record in effective Internet governance, and as our best hope to govern the emerging roster of multidimensional and complex issues arising in 2025. The currently fractured geopolitical environment calls for more—not less—meaningful stakeholder engagement.