A Future by Choice, Not by Chance.

The emergence of superhuman intelligence is imminent. Its development must not be a race for profit or power. It must be a global collaboration for the benefit of all humanity.

Join the Movement

Humanity's Greatest Risk

  • 🌍 A threat with no blast radius, no borders, and no warning.
  • 🤝 Built without international treaties or shared control.
  • 🧠 A system smarter than us, created by people who don't fully understand it.
  • ⏳ Once it arrives, we don't get a second chance.

The Arc of Intelligence

To understand the future, we must first understand the present. This framework outlines the progression from simple programs to superintelligence.

Level 0: Reactive Programs

Definition: A system that follows pre-programmed rules to react to specific inputs, without any learning capability or memory of past events. This is classical automation, not true AI.

Examples: Thermostats, elevator controls, automatic doors.

Level 1: Narrow AI (ANI)

Definition: An AI that learns from data to excel at a single, predefined task. It operates as a sophisticated tool, but its performance collapses outside its narrow training.

Examples: Deep Blue (chess), spam filters, music recommendations.

WE ARE HERE - Mid-2025

Level 2: Applied Broad Intelligence (ABI)

Definition: An AI that integrates multiple narrow skills to master a broad but bounded domain. This is the current frontier of AI development.

Examples: Autonomous driving systems (Waymo), advanced medical diagnostics, GPT-4/5 with tool use.

Level 3: Proto-AGI

Definition: An AI that can learn and reason in novel situations outside of its training, demonstrating the first sparks of true generality and cross-domain knowledge transfer.

Conceptual Examples: A robot that learns to assemble any furniture by watching a few YouTube videos, or an AI that reads new scientific papers to generate its own novel, testable hypotheses.

Level 4: General AI (AGI)

Definition: An AI with the ability to understand, learn, and apply knowledge across the full range of human cognitive tasks, including autonomous goal-setting.

Conceptual Examples: An AI that passes "The College Student Test" by autonomously enrolling in and graduating from a university.

Level 5: Superintelligence (ASI)

Definition: An intellect that radically surpasses the best of human intelligence in every domain, including its own recursive self-improvement.

Conceptual Examples: An AI that solves Grand Challenges like unifying physics or curing aging.
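For readers who prefer code to prose, the six levels above can be compressed into an ordered scale. The sketch below is illustrative shorthand only; the names (IntelligenceLevel, CURRENT_FRONTIER) are ours, not an established taxonomy.

```python
from enum import IntEnum

class IntelligenceLevel(IntEnum):
    """The Arc of Intelligence as an ordered scale (illustrative shorthand)."""
    REACTIVE_PROGRAM = 0  # pre-programmed rules, no learning: thermostats, elevators
    NARROW_AI = 1         # learns one task from data: Deep Blue, spam filters
    APPLIED_BROAD = 2     # integrates narrow skills in a bounded domain: Waymo, GPT-4/5 with tools
    PROTO_AGI = 3         # first sparks of generality: cross-domain knowledge transfer
    AGI = 4               # full human-range cognition with autonomous goal-setting
    ASI = 5               # radically surpasses humans; recursive self-improvement

# "WE ARE HERE" (mid-2025): the current frontier per this framework.
CURRENT_FRONTIER = IntelligenceLevel.APPLIED_BROAD
```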

Voices of Warning

The development of full artificial intelligence could spell the end of the human race.
— Stephen Hawking

The Manifesto: A Call for a Global AGI Trust

The development of powerful, specialized AI will continue to drive innovation and commerce. The pursuit of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), however, represents a fundamental inflection point for humanity: a technology too powerful to be treated as a weapon or a product. Today, AGI/ASI development is a reckless race driven by corporate profit and geopolitical fear. This is insanity. We demand the formation of a neutral, nonprofit, international consortium to ensure this technology serves all humanity as a shared public good.

The Demand: A Global Consortium Must...

1. Make Safety the First Principle

Prioritize alignment and control above all else. Progress should be gated by safety milestones, not deadlines.

Safety cannot be an afterthought; it must be the foundational, non-negotiable bedrock of AGI development. This principle requires a proactive and rigorous safety culture, including:

  • Verifiable Alignment: Research must focus on creating systems whose goals are provably aligned with human values, moving beyond simply hoping for the best.
  • Robust Interpretability: We must be able to understand and audit the reasoning behind an AI's decisions, eliminating "black box" systems for critical functions.
  • Aggressive Red-Teaming: An independent body must be tasked with continuously trying to break, misuse, and find flaws in the AI systems before they are deployed.
  • Pausing Capabilities: The consortium must have both the technical ability and the political authority to pause or roll back development if safety milestones are not met or if dangerous, unintended capabilities emerge. (A minimal sketch of this gating logic follows below.)
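To make the gating principle concrete, here is a minimal Python sketch of a milestone-gated roadmap. Every name in it (SafetyMilestone, DevelopmentStage, advance) is hypothetical; it illustrates the policy that capability scaling halts at any unmet safety gate, not an actual consortium system.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyMilestone:
    """A quantifiable safety gate (hypothetical name and structure)."""
    name: str
    passed: bool = False

@dataclass
class DevelopmentStage:
    """One increment of capability, gated by its safety milestones."""
    description: str
    milestones: list[SafetyMilestone] = field(default_factory=list)

    def cleared(self) -> bool:
        # Every milestone must pass; a deadline is never a substitute.
        return all(m.passed for m in self.milestones)

def advance(stages: list[DevelopmentStage]) -> None:
    """Walk the roadmap, halting at the first unmet safety gate."""
    for stage in stages:
        if not stage.cleared():
            # Development pauses here until independent auditors certify
            # the gate; rollback authority rests with the same auditors.
            print(f"PAUSED at '{stage.description}': safety gate not met.")
            return
        print(f"Cleared '{stage.description}'; proceeding to next stage.")

# Example: scaling is blocked because interpretability is unproven.
roadmap = [
    DevelopmentStage("10x compute scale-up",
                     [SafetyMilestone("verifiable alignment", passed=True),
                      SafetyMilestone("robust interpretability", passed=False)]),
]
advance(roadmap)  # -> PAUSED at '10x compute scale-up': safety gate not met.
```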

2. Include the World's Best Minds

Assemble leading scientists, ethicists, and philosophers in an open, collaborative environment.

This mission requires more than just technical expertise. The consortium must actively recruit a diverse, interdisciplinary, and international body of talent. This includes not only leading AI researchers and engineers but also ethicists, sociologists, psychologists, diplomats, and artists to ensure the system's values are robust and human-centric. By creating a collaborative environment that transcends corporate secrecy and national borders, we can build a holistic intelligence that reflects the full spectrum of human wisdom and experience, rather than the narrow perspective of a single team or culture.

3. Be Globally Governed

Operate under a neutral, international body with representation from all nations.

A technology this powerful cannot be steered by a corporate board or a single nation-state. Its governance must be explicitly designed for neutrality, accountability, and global representation. We propose a multi-stakeholder model, similar to successful international bodies, with a council that includes scientific experts, government representatives, and members of civil society. This governing body must have the binding authority to enforce safety protocols, audit development, and ensure its actions are transparent to the public it serves.

4. Build AGI Transparently

Commit to open-source principles and verifiable safety protocols, accessible to the public.

Secrecy is a catalyst for mistrust and disaster. Transparency must be a core operating principle, extending beyond merely open-sourcing code. It requires public disclosure of research goals, funding sources, training data composition, safety experiment results, and known system limitations. This radical transparency allows for global peer review and public scrutiny, creating a feedback loop that identifies risks and biases far more effectively than any internal team could. This isn't about revealing dangerous secrets; it's about preventing the creation of dangerous, unexamined systems in the first place.
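As a purely illustrative example, the disclosure requirements above could be formalized as a standard manifest published with every system release. The field names below are our own assumptions, not an existing standard:

```python
# Hypothetical transparency manifest published alongside each system release.
TRANSPARENCY_MANIFEST = {
    "research_goals": "What the system is intended to do, and why.",
    "funding_sources": ["Every funder, with the amount contributed."],
    "training_data_composition": "Provenance, licensing, and coverage of the corpus.",
    "safety_experiment_results": "Full results, including failures and near-misses.",
    "known_limitations": "Documented failure modes and out-of-scope uses.",
}
```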

5. Serve All Humanity

Ensure that the benefits of AGI are distributed equitably to elevate humanity beyond poverty, disease, and conflict.

The ultimate purpose of AGI must be to elevate the well-being of every person, not to amplify the wealth or power of a few. This principle mandates that the benefits of AGI are treated as a global public utility. The consortium must proactively apply AGI capabilities to humanity's most pressing challenges, such as curing diseases, mitigating climate change, providing personalized education, and eradicating poverty. Access to these world-changing tools must be equitable, ensuring AGI closes global divides rather than widening them into an unbridgeable chasm.

6. Combine Global Resources

To prevent a resource arms race, the consortium must pool critical assets—including computational power, funding, and diverse datasets—under shared international control.

The race for AGI is fueling a dangerous arms race for computational power and data. The only way to de-escalate is to transform these contested resources into shared instruments for global good. The consortium must establish and manage a centralized pool of compute, analogous to a CERN for AI research, accessible to the international team. Funding will be drawn from a neutral trust supported by diverse sources, and training data will be ethically sourced and globally representative, moving beyond biased, scraped datasets. Pooling resources is the most practical way to ensure no single entity achieves a dangerous monopoly over the future of intelligence.

A Proposed Leadership Structure

To succeed, the consortium must be a robust institution with a system of checks and balances. This "dream team" illustrates the type of world-class, independent leadership required for each critical function.

Independent Public Oversight Board

This board sits above all departments and has final authority on ethical and public interest matters. It is composed of respected global leaders from fields outside of direct AI development, such as human rights, international law, and philosophy, ensuring the consortium remains accountable not just to institutions but to humanity itself. It is advised by a dedicated Youth Council and shares joint authority with the Safety Auditing department over the 'emergency stop' mechanism.

Proposed Co-Chairs: Amartya Sen & Malala Yousafzai

1. Alignment & Safety Auditing

Proposed Head: Eliezer Yudkowsky
Core Advisors May Include: Paul Christiano, Jan Leike

This department operates as a fully independent internal auditor. It defines the rigorous safety criteria and quantifiable metrics that all development must meet. Its teams conduct continuous, adversarial testing (red-teaming) to proactively discover flaws, biases, and potential for misuse before they become threats. It holds ultimate veto power over deployment at any stage and shares joint authority with the Oversight Board over the 'emergency stop' mechanism.
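The phrase 'joint authority' admits more than one reading. The Python sketch below assumes a fail-safe interpretation, which is our assumption rather than a settled protocol: either body can halt the system unilaterally, but restarting requires both.

```python
from dataclasses import dataclass

@dataclass
class StopAuthority:
    """One of the two bodies holding emergency-stop power (hypothetical model)."""
    name: str
    halt_requested: bool = False
    resume_approved: bool = False

def system_may_run(oversight: StopAuthority, auditing: StopAuthority) -> bool:
    # Fail-safe reading of 'joint authority': EITHER body can halt
    # unilaterally, but restarting requires BOTH to approve.
    if oversight.halt_requested or auditing.halt_requested:
        return oversight.resume_approved and auditing.resume_approved
    return True
```

Under this reading, a single captured or compromised body could stop the system, but could never restart it alone.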

2. Foundational Safety Research

Proposed Head: Ilya Sutskever
Core Advisors May Include: Geoffrey Hinton, Shane Legg

This department is the pure R&D wing focused on solving the core technical challenges of AI safety. It pioneers new methods for interpretability, control, and ensuring an AI's goals remain stable and aligned with human values. Its breakthroughs provide the proven, foundational techniques that the development teams must use, creating a pipeline from pure research to safe application.

3. Technical Development & Capabilities

Proposed Head: Demis Hassabis
Core Advisors May Include: Yann LeCun, Andrew Ng, Jakob Uszkoreit

This department is responsible for the careful and incremental scaling of AI capabilities. It integrates the proven safety techniques from foundational research to build the core AGI system. Its progress is strictly gated by the milestones and tests set by the Safety Auditing department, ensuring that capabilities never outpace safety and control.

4. Ethics & Societal Impact

Proposed Head: Timnit Gebru
Core Advisors May Include: Kate Crawford, Joy Buolamwini, Zeynep Tufekci

This department acts as the consortium's conscience. It analyzes the second- and third-order effects of the technology on society, culture, and the economy. By embedding ethicists, sociologists, and historians within the development process and consulting with global communities, it works to mitigate harm, prevent the amplification of bias, and ensure the AI's deployment is equitable and just.

5. Data Curation & Cultural Synthesis

Proposed Head: Fei-Fei Li
Core Advisors May Include: Ed Catmull, Refik Anadol, Margaret Atwood

This department curates the 'mind' of the AGI. It is responsible for ethically sourcing and meticulously documenting a vast, diverse dataset representing the breadth of human knowledge, culture, art, and ethical wisdom. Its mission is to move beyond simply scraping the internet and instead build a rich, balanced, and representative corpus to serve as the AI's foundational understanding of humanity.

6. Public Communication & Education

Proposed Head: Neil deGrasse Tyson
Core Advisors May Include: Sir David Attenborough, Sal Khan, Tim Urban

This department's mission is to translate the consortium's complex work into clear, accessible, and engaging narratives for the global public. It manages all public-facing communications, develops educational materials to foster global AI literacy, and creates channels for citizen feedback, ensuring a continuous, two-way dialogue between the project and the people it serves.

7. Global Governance & Diplomacy

Proposed Head: Audrey Tang
Core Advisors May Include: Brad Smith, Marietje Schaake

This department is the consortium's embassy to the world. It builds and maintains the international legal and political frameworks necessary for global trust. It manages relationships with governments, ensures compliance with international agreements, and develops transparent processes for public communication and participation, making the consortium accountable to all nations.

8. Superintelligent Systems Operations

Proposed Head: Dario Amodei
Core Advisors May Include: Chris Olah, Helen Toner, Dylan Patel

This department manages the immense practical challenge of running a live superintelligent system. It develops and maintains the secure, sandboxed hardware environments (the 'containment'), monitors the AI's behavior in real-time, and designs the protocols for safe interaction and shutdown. It is the final line of defense in operational safety.

Scientific & Philosophical Advisory Council

This council advises on the most profound, foundational questions of intelligence, consciousness, and the long-term trajectory of a post-AGI world. It asks not just "can we?" but "should we?" and helps steer the consortium's ultimate purpose with wisdom and foresight.

Core Advisors May Include: Nick Bostrom, Martha Nussbaum, Luciano Floridi, and Indigenous thinkers.

A Precedent for Global Collaboration

We have built together before. For the most important project in history, we must do it again.

Join the Movement

Humanity's future depends on our collective action. Here are the ways you can help build a safer path forward.

1. Sign the Declaration

The most important step. Add your name to the global call demanding a safe, transparent, and globally governed AGI.

2. Donate to the Cause

Your contribution funds our global outreach, educational materials, and advocacy efforts to bring this message to world leaders.

3. Volunteer Your Skills

Do you have expertise in policy, design, translation, or community organizing? We need your help.

4. Spread the Word

The simplest way to help. Share this movement with your network.