
Policy Brief No. 223

Democracy in the Digital Age: Reclaiming Governance in an Algorithmic World

Jordan Ryan

May 28, 2025



Abstract

As artificial intelligence becomes embedded in public life, it is reshaping not only how decisions are made but also who has power over them. From welfare systems to policing, from search engines to border control, algorithmic technologies now play a central role in governing societies—often without transparency or public consent. This policy brief argues that democratic governance must evolve to meet this moment, as the erosion of democratic processes directly threatens sustainable peace. Drawing on lessons from the 2025 UNDP Human Development Report[1] and the 2024 UN Pact for the Future[2], this brief offers a framework for democratic digital governance that supports peacebuilding. It proposes five actions: establishing independent oversight bodies with enforcement powers; embedding civic participation in policymaking; expanding critical digital literacy; enforcing the Global Digital Compact[3]; and protecting online civic space. Without such measures, AI may deepen inequality, exacerbate violent conflict, and erode democracy. With them, it can enhance human agency, strengthen democratic institutions, and foster sustainable peace.

Introduction: Democracy at a digital crossroads

Artificial intelligence is no longer an emerging trend—it is now a structural force in governance. Decisions about school admissions, law enforcement, welfare eligibility, and even medical triage are increasingly shaped by algorithmic systems. These systems are often implemented without public debate and operate in ways that defy scrutiny, even by those tasked with managing them.

The relationship between democratic governance and sustainable peace is fundamental. When algorithmic systems undermine democratic processes, they simultaneously threaten the foundations of peace. Democratic institutions—with their emphasis on participation, accountability, and human rights—create the conditions for resolving conflicts through dialogue rather than violence. As AI systems increasingly mediate social and political life, their governance becomes inseparable from both democratic resilience and peacebuilding efforts.

As the UNDP 2025 Human Development Report[1] "A Matter of Choice: People and Possibilities in the Age of AI" makes clear, the benefits of AI are not automatic—they depend on the deliberate expansion of human agency. Without inclusive governance frameworks, AI in fragile contexts can accelerate marginalisation and compound existing power asymmetries. Peacebuilding actors must treat digital governance as a frontline issue of legitimacy and rights, not just technical functionality.

AI's acceleration is occurring amidst a global crisis of democratic legitimacy, deepening inequality, and increasing geopolitical fragmentation. As political institutions struggle to adapt, a new locus of power is emerging—one shaped not by constitutions or elections, but by code, data, and commercial incentive. This shift has profound implications for peacebuilding, particularly in fragile contexts where the digital divide intersects with institutional weakness. Urgent action is required to ensure that digital transformation supports inclusive governance rather than undermining it.

This brief explores how democratic governance can respond. It examines the risks posed by unregulated AI, offers global examples of innovation and resistance, and proposes policy actions that move beyond technical fixes towards institutional change. The central question is not whether AI will transform governance, but whether that transformation will strengthen or undermine democratic values, human rights, and peace.

Agency at risk: Algorithmic decision-making and human security

The 2025 HDR reframes development around the principle of agency[1]—the capacity of individuals and communities to make meaningful choices about their lives. This shift in focus is timely, as algorithmic systems increasingly mediate access to essential services, opportunities, and information in ways that can either enhance or diminish human choice and security.

The risk is that without proper governance, AI systems may optimise for efficiency, profit, or control at the expense of human autonomy, dignity, and security. Opaque algorithms making decisions about public benefits, employment, or access to information reduce transparency and erode the ability of individuals to understand and contest life-altering outcomes.

In Brazil, automated welfare systems inadvertently excluded eligible recipients due to data errors and algorithmic bias. The country's draft AI law now includes provisions for oversight and appeals[4]. In India, the Aadhaar biometric identification system has demonstrated both the promise and peril of algorithmic governance at scale[5]. While expanding access to services for millions, it has also raised significant concerns about privacy, exclusion of vulnerable populations, and the absence of adequate redress mechanisms when the system fails.

Globally, many low- and middle-income countries depend on external AI tools governed by foreign legal norms, reinforcing technological dependencies that undermine local sovereignty. The EU's AI Act[6] introduces a risk-based framework for governing high-impact AI, while the African Union's proposed data governance framework[7] emphasises sovereignty and regional collaboration. These initiatives highlight the growing consensus that development and peace in the algorithmic era require democratic control—not just access to technology.

Governance gaps and conflict risks

Democratic institutions are outpaced by algorithmic systems operating beyond accountability. Traditional governance mechanisms—electoral cycles, institutional checks, and deliberative processes—struggle to match the speed and complexity of AI-driven decision-making that often lacks transparency, oversight, or clear channels for redress.

This creates a democratic deficit that weakens oversight, voice, and accountability. In Myanmar, social media content moderation algorithms failed to detect local-language hate speech targeting vulnerable communities, contributing to violence. In Ethiopia, platform algorithms removed documentation of human rights violations while allowing inflammatory content to spread during civil conflict[8]. In Kenya, algorithmic credit scoring systems have excluded informal workers from financial services despite their economic contributions. These failures stemmed from inadequate language support, limited cultural context, and profit-driven design priorities that privileged engagement over safety and inclusion.

Priorities for democratic resilience include transparency (systems must be auditable and explainable), rights protection (privacy, anti-discrimination, and redress), participation (inclusive policy design), and civic capacity (institutional expertise and public literacy). These are not just technical upgrades—they are foundational to restoring democratic legitimacy and peace in the digital age.

Safeguarding peacebuilding in the age of AI: Governance principles for digital systems

Too often, technology is treated as a neutral tool. But as highlighted in the 2025 HDR[1], technologies like AI and automated decision systems are shaped by political choices—who builds them, who owns the data, and who benefits from their use. Peacebuilders must move from reactive adaptation to proactive governance—framing AI not only as a technical instrument, but as a contested space of democratic accountability and human rights.

Algorithmic systems can either reinforce divisions or build bridges. In East Africa, early warning systems have integrated data analysis to help prevent violence by identifying patterns of escalation and enabling timely intervention[9]. What distinguishes effective systems is their commitment to maintaining human oversight and local contextual knowledge—a crucial balance that prevents automated responses to complex social dynamics.

In Colombia, digital platforms have enabled conflict-affected communities to shape peace negotiations through participatory approaches that collect proposals from victims' groups[10]. These initiatives ensure diverse voices inform peace processes, particularly from rural areas previously excluded from political processes.

To promote peace, digital systems must be designed with local conflict dynamics in mind, governed inclusively, and deliver tangible benefits across conflict divides. They must also support truth and reconciliation efforts, not erase them through content takedowns or biased prioritisation.

Intentional governance—anchored in human agency and local realities—can transform AI from a destabilising force into a tool for resilience and reconciliation.

BOX 1: SAFEGUARDING PEACEBUILDING IN THE AGE OF AI: GOVERNANCE PRINCIPLES FOR DIGITAL SYSTEMS

  • Human Agency First: Prioritise human decision-making and oversight in all algorithmic systems
  • Transparency and Accountability: Ensure all systems are explainable and subject to independent review
  • Inclusivity and Participation: Involve affected communities in design, implementation, and evaluation
  • Conflict Sensitivity: Assess and mitigate potential impacts on social cohesion and peace
  • Data Justice: Address power imbalances in data collection, ownership, and use
  • Rights-Based Approach: Uphold human rights standards in all digital governance frameworks

Education and digital literacy: Building democratic capacity

The 2025 HDR[1] identifies education as a critical frontier for AI, highlighting its potential to personalise learning and support teachers. However, this potential can only be realised if education systems build the democratic capacities necessary for algorithmic citizenship.

This requires moving beyond technical skills to foster:

  1. Critical digital literacy
  2. Democratic participation
  3. Ethical reasoning

These capacities are essential for ensuring that AI serves democratic values rather than undermining them. Educational initiatives must also address the digital divide and gender imbalances in technology fields to prevent AI from deepening existing inequalities[11].

Linking to multilateral norms and peacebuilding practice

The policy framework presented in this brief aligns with and builds upon key multilateral initiatives, including the New Agenda for Peace and the Global Digital Compact[3]. These global frameworks establish important normative foundations for algorithmic governance, emphasising that digital systems must meet the same standards as any peacebuilding intervention: locally grounded, participatory, and accountable.

The focus on human agency in this brief echoes the core message of UNDP's Human Development framing[1]. Integrating these principles into AI governance is essential for ensuring that digital transformation supports progress toward sustainable peace and development.

While multilateral frameworks offer valuable guidance, their limitations must be acknowledged. UN observers have noted an "alarming slowdown in human development" that technology alone cannot address[12]. The HDR's optimism about AI must be balanced with recognition that governance frameworks need to tackle underlying structural inequalities that technology might exacerbate rather than solve.

Effective implementation of these norms requires both global coordination and local adaptation. International standards must be translated into context-specific practices that reflect diverse cultural, political, and economic realities while upholding universal principles of human dignity and rights.

Bridging the global divide

While digital governance is increasingly central to democratic resilience, many countries lack the infrastructure, legal frameworks, or institutional capacity to shape how AI systems are deployed. Low-income states often depend on imported technologies with opaque architectures, creating dependencies that mirror older patterns of colonial control.

Regional cooperation—such as through the African Union's data governance framework[7] or Latin America's open-data coalitions—can help build digital sovereignty and ensure that standards reflect local values. Donors and multilateral actors must invest in global equity—not only through infrastructure, but by supporting capacity for digital governance, ethical regulation, and cross-border enforcement.

The digital divide remains a critical barrier to inclusive AI governance. Without addressing disparities in connectivity, skills, and representation, algorithmic systems risk reinforcing existing patterns of exclusion and inequality. Bridging this divide requires coordinated action across multiple domains—from expanding affordable access to building local innovation ecosystems that can develop contextually appropriate technologies.

Policy recommendations

This brief recommends five key actions:

Policy Recommendation 1: Establish Independent Digital Oversight Bodies

Democratic governance requires institutions specifically designed to oversee algorithmic systems. These bodies must be independent from both government and industry influence, equipped with meaningful enforcement powers, and staffed with diverse expertise spanning technology, human rights, and conflict sensitivity.

Several countries are developing regulatory frameworks for AI oversight, with varying approaches to independence and enforcement capabilities. Civil society organisations worldwide have also established independent monitoring initiatives that complement formal regulation through auditing and public education. Regional frameworks for digital sovereignty provide approaches that balance innovation with rights protection.

Implementation must be adapted to diverse political and resource contexts. In post-conflict settings, oversight bodies should include expertise in conflict analysis and peacebuilding. In contexts with limited resources, regional cooperation can pool technical capacity while respecting local priorities. The key is institutional independence coupled with substantive authority—oversight without enforcement power risks becoming merely symbolic.

Policy Recommendation 2: Embed Civic Participation in Digital Policy Design

Algorithmic governance cannot be left to technical experts alone. Meaningful public participation must be embedded in policy design, implementation, and evaluation. This includes formal consultations, citizen assemblies, participatory impact assessments, and community-led monitoring.

Various participatory governance models have emerged globally, demonstrating how deliberative processes can strengthen both policy quality and democratic legitimacy. These approaches combine digital tools with in-person deliberation to engage diverse stakeholders in complex technology policy decisions. Public consultations on AI regulation have created space for marginalised communities to shape governance frameworks in several countries.

Special attention must be paid to conflict-affected and historically excluded communities. Digital platforms have enabled victims' groups to shape peace negotiations in post-conflict settings, demonstrating how technology can amplify previously silenced voices. Participation mechanisms must be designed to overcome digital divides, language barriers, and power asymmetries that might otherwise reproduce existing patterns of exclusion.

Policy Recommendation 3: Expand Public Education for Digital Peace and Democracy

Education systems must evolve to build capacities for democratic governance in the algorithmic age. This includes critical digital literacy (the ability to evaluate algorithmic systems and their impacts), democratic participation skills (the capacity to engage in governance processes), and ethical reasoning (the ability to make judgments about increasingly complex technological choices).

International organisations have developed frameworks for AI literacy that emphasise both technical understanding and critical thinking about social impacts. Civil society initiatives worldwide demonstrate how technical knowledge can be democratised through peer-to-peer learning and contextually relevant training.

Educational approaches must be tailored to diverse contexts and needs. In conflict-affected regions, digital literacy should include media literacy and critical evaluation of potentially inflammatory content. In contexts with limited connectivity, offline resources and community-based learning can bridge digital divides. Throughout, education must address gender imbalances in technology fields to ensure that AI development reflects diverse perspectives and needs.

Policy Recommendation 4: Support Global Norms for Ethical AI and Data Justice

National governance efforts must be complemented by robust international frameworks. The Global Digital Compact[3], UNESCO's Recommendation on AI Ethics[13], and OECD AI Principles[14] offer starting points, but they must be translated into enforceable standards that prioritise equity, transparency, and peace.

The EU's AI Act provides a model for risk-based regulation with extraterritorial impact[6]. Regional frameworks like the African Union's data governance initiative demonstrate how international norms can be adapted to reflect diverse values and priorities[7]. These approaches should inform a more inclusive global governance system that addresses power asymmetries between technology producers and consumers.

Particular attention must be paid to cross-border data flows, algorithmic colonialism, and the exploitation of regulatory gaps. International cooperation should support capacity building for AI governance in low-resource settings, technology transfer that respects local sovereignty, and mechanisms to hold transnational actors accountable for algorithmic harms.

Policy Recommendation 5: Protect Civic Space and Freedom Online

Democratic governance depends on vibrant civic spaces where citizens can organise, deliberate, and hold power accountable. Governments must uphold digital rights, including freedom of expression, association, and privacy. This requires legal frameworks that limit surveillance, prohibit censorship, and protect encryption and anonymity tools that enable safe civic engagement.

Private platforms must adopt responsible content moderation practices that protect vulnerable groups while preserving legitimate political discourse. Multi-stakeholder initiatives have developed principles for transparency and accountability in content moderation that offer guidelines for notice, appeal rights, and transparency reporting[15]. These principles should be adapted to conflict-sensitive contexts, with special attention to language support, cultural context, and the protection of human rights documentation.

Civil society organisations worldwide provide digital security support and capacity building to safeguard civic actors in contested digital spaces. International support for such initiatives should be expanded, with particular focus on protecting human rights defenders, journalists, and peacebuilders working in repressive or conflict-affected contexts.

Conclusion: Reclaiming digital futures for peace

The 2025 Human Development Report[1] reminds us that development is not simply about growth or innovation—it is about agency, dignity, and peaceful coexistence. The 2024 UN Pact for the Future[2] affirms this vision, calling for digital governance that serves all people.

This is a generational moment. Just as past global agreements forged shared principles for peace and human rights in the post-war era, today's challenge is to do the same for the digital age. The task is urgent—particularly as UN observers note an alarming slowdown in human development—but not insurmountable. With foresight and courage, democratic societies can reclaim the future from the code that now governs it.

As Article 28 of the Universal Declaration of Human Rights affirms, everyone has the right to a social and international order in which their rights and freedoms can be fully realised. In the algorithmic age, this right requires new governance frameworks that place human agency at the centre of technological development.

We call upon the international community—including Toda Peace Institute networks—to transform digital governance from a peripheral concern into a cornerstone of peacebuilding practice. The window for establishing democratic control over algorithmic systems is narrowing rapidly. By embedding ethical AI principles, independent oversight, and inclusive governance into our institutions, we can ensure that technology amplifies human potential rather than diminishes democratic agency. The choices we make today will determine whether algorithms become tools for democratic renewal or instruments of division—making this not merely a technical challenge but a fundamental peace and security imperative for our time. We stand at a fork in the digital road: either govern the code—or be governed by it.

NOTES

[1]United Nations Development Programme, Human Development Report 2025: A Matter of Choice: People and Possibilities in the Age of AI (New York: UNDP, 2025), https://hdr.undp.org/system/files/documents/global-report-document/hdr2025reporten.pdf.

[2] United Nations, "Pact for the Future," adopted by the General Assembly, September 2024, https://www.un.org/en/summit-of-the-future/pact-for-the-future.

[3] United Nations, "Global Digital Compact," adopted as part of the Pact for the Future, September 2024, https://www.un.org/global-digital-compact/en.

[4] "Brazil's AI Act: A New Era of AI Regulation," GDPR Local, February 26, 2025, https://gdprlocal.com/brazils-ai-act-a-new-era-of-ai-regulation/.

[5] "Living with the Aadhaar: India's Changing Contours of Identity and Governance," SAGE Journals, August 1, 2024, https://journals.sagepub.com/doi/10.1177/00195561241257460.

[6] European Union, "Artificial Intelligence Act," Official Journal of the European Union, L 123/1, April 2024, https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.

[7] African Union, "Data Policy Framework for Digital Sovereignty and Regional Integration" (Addis Ababa: African Union Commission, 2024), https://au.int/sites/default/files/documents/42078-doc-AU-DATA-POLICY-FRAMEWORK-ENG1.pdf.

[8] "Myanmar Digital Coup Quarterly: November 2024-January 2025," EngageMedia, February 21, 2025, https://engagemedia.org/2025/myanmar-digital-coup-quarterly-november-2024-january-2025/.

[9] "Digital Technologies," UN Peacemaker, May 1, 2024, https://peacemaker.un.org/en/thematic-areas/digital-technologies.

[10] "Peacemaking through the lens of participation: Revisiting the 2016 Colombian peace settlement," Latin American Policy, November 21, 2023, https://onlinelibrary.wiley.com/doi/10.1111/lamp.12320.

[11] United Nations Development Programme, Human Development Report 2023/2024: Breaking the Gridlock (New York: UNDP, 2024), https://hdr.undp.org/content/human-development-report-2023-24.

[12] United Nations News, "'Alarming' slowdown in human development - could AI provide answers?" May 6, 2025, https://news.un.org/en/story/2025/05/1162926.

[13] UNESCO, "Recommendation on the Ethics of Artificial Intelligence" (Paris: UNESCO, 2023), https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.

[14] Organisation for Economic Co-operation and Development, "OECD AI Principles," updated May 3, 2024, https://www.oecd.org/en/topics/sub-issues/ai-principles.html.

[15] "The Santa Clara Principles on Transparency and Accountability in Content Moderation," 2023, https://santaclaraprinciples.org.


The Author

JORDAN RYAN

Jordan Ryan is a Senior Consultant for the Swedish Folke Bernadotte Academy and Hamilton Advisors. He recently served as lead author of the UN integration review for the Executive Office of the Secretary-General. He is a member of the Toda International Research Advisory Council. Previously, Mr. Ryan was Vice President for Peace at The Carter Center and held positions as UN Assistant Secretary-General and UNDP Assistant Administrator, directing the Bureau for Crisis Prevention and Recovery (2009–2014). His UN career includes service as Deputy Special Representative in Liberia and UN Resident Coordinator in Vietnam. A founding member of Diplomats without Borders, Mr. Ryan holds degrees from Yale (B.A.), George Washington University (J.D.), and Columbia's School of International and Public Affairs (Master's in International Political Economy). He was also a fellow at Harvard's Kennedy School.

Toda Peace Institute

The Toda Peace Institute is an independent, nonpartisan institute committed to advancing a more just and peaceful world through policy-oriented peace research and practice. The Institute commissions evidence-based research, convenes multi-track and multi-disciplinary problem-solving workshops and seminars, and promotes dialogue across ethnic, cultural, religious and political divides. It catalyses practical, policy-oriented conversations between theoretical experts, practitioners, policymakers and civil society leaders in order to discern innovative and creative solutions to the major problems confronting the world in the twenty-first century (see www.toda.org for more information).

Contact Us

Toda Peace Institute
Samon Eleven Bldg. 5th Floor
3-1 Samon-cho, Shinjuku-ku, Tokyo 160-0017, Japan

Email: contact@toda.org