The United Nations and AI Safety: Toward Global Trustworthy Intelligence


As artificial intelligence transforms economies, societies, and geopolitics, the United Nations (UN) has been rapidly scaling efforts to ensure that AI development and deployment proceed in ways that maximize benefits while minimizing risks. Unlike national AI strategies focused primarily on industrial competitiveness, the UN’s work centers on global coordination, ethical foundations, and inclusive governance that reflects the interests of all 193 Member States.


1. A Globalising Imperative: Consensus on Safety and Trust

At the core of the UN’s push on AI safety is a recognition that AI will affect every society and that no country — large or small — can fully insulate itself from the global impacts of advanced AI technologies. In March 2024, the UN General Assembly adopted a widely referenced resolution titled Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development. Supported by over 120 countries, the resolution encourages Member States to adopt national safeguards that protect human rights and personal data and to monitor AI risks, though it is non-binding in legal effect.

The resolution goes beyond basic risk mitigation. It defines “safe, secure and trustworthy AI systems” as those that are human-centric, reliable, ethical, inclusive, privacy-preserving, and aligned with the UN’s Sustainable Development Goals — stressing that AI should promote peace, equitable development, digital inclusion and fundamental freedoms.


2. New Institutional Architecture: Dialogue and Scientific Oversight

Recognising the limitations of voluntary resolutions, the UN has built new structures to turn dialogue into sustained policy action:

• Global Dialogue on AI Governance

Launched in 2025 under a UN General Assembly resolution, the Global Dialogue on AI Governance provides the first universal forum where all UN Member States, civil society, industry, and scientific communities can share best practices and work toward common approaches for global governance. It is intended to be a collective space to bridge fragmented national frameworks, especially for countries not yet part of major AI standards efforts.

• Independent International Scientific Panel on AI

Complementing the Dialogue is an evidence-based panel of global experts, often compared to an “AI IPCC,” tasked with conducting ongoing assessments of AI capabilities, risks, and impacts. Its role is envisioned as an early-warning and foresight mechanism to help policymakers separate hype from substantive risk. The Panel’s work aims to inform not only the Dialogue but also national and regional policymaking worldwide.

Together, these bodies represent a first-ever attempt by the UN to anchor AI governance within a truly global and multidisciplinary structure — a major step compared with existing regional and national frameworks.


3. Ethical Foundations Across the UN System

Beyond governance bodies, the UN has been developing ethical frameworks that guide AI deployment across sectors and agencies. One prominent example is the set of UN System Principles for the Ethical Use of AI, co-led by UNESCO and the UN Secretariat, which emphasizes:

  • Do no harm
  • Safety and security
  • Fairness, non-discrimination and inclusion
  • Transparency, explainability and human oversight
  • Sustainability and privacy protection
  • Responsibility and accountability

This approach, grounded in human rights, is designed to complement international law and inform guidelines across all stages of an AI system’s life cycle.

Additionally, UNESCO’s broader Recommendation on the Ethics of Artificial Intelligence sets an international benchmark for ethical AI governance — ensuring that ethical safeguards are aligned with human rights and sustainable development principles globally.


4. Trust and Safety in Practice: Beyond High-Level Principles

The UN is also engaging in programmatic work aimed at tackling AI harms where they are most likely to occur. For example, the AI Trust and Safety Re-imagination Programme under the UN Development Programme (UNDP) invites innovators and organisations to co-design new approaches to managing AI safety risks — especially in local contexts often overlooked by high-level governance frameworks. The programme emphasises partnership across public and private actors, aims to strengthen inclusive and locally grounded safety ecosystems, and supports equitable AI innovation.


5. Digital Cooperation and Broader Frameworks

AI safety at the UN is not viewed in isolation. It sits within the broader Global Digital Compact (GDC) — a UN effort to establish a comprehensive framework for responsible digital technologies that includes AI governance, digital trust, cybersecurity, and closing digital divides. While non-binding, the Compact seeks to foster collaboration among governments, technology firms, civil society and multilateral organisations to cultivate an inclusive, safe, secure and human-centered digital environment.


6. Challenges and Limitations

Despite these ambitious efforts, some analysts caution that the UN’s AI governance mechanisms lack enforcement authority and primarily play a confidence-building and facilitation role rather than imposing binding rules. Their impact will depend heavily on Member States’ willingness to translate global consensus into national regulation and to coordinate with regional governance initiatives already underway.


Conclusion: Toward Inclusive AI Safety

The UN’s AI safety plans represent a distinctive and necessary response to a rapidly evolving technological frontier — pushing for inclusive global dialogue, ethical norms grounded in human rights, scientific oversight, and cooperation for sustainable development. While the path to enforceable global standards remains long, these initiatives mark a shift from fragmented national approaches toward a shared international effort to harness the promise of AI while safeguarding people and societies worldwide.
