By Jordan Ryan  |  11 March 2026

When the Guardrails Come Off

Image: Photo For Everything / shutterstock.com

In January 2026, Dario Amodei, chief executive of Anthropic, one of America’s leading artificial intelligence (AI) companies and until recently the Pentagon’s most widely deployed AI provider, warned that sufficiently capable AI could allow a government to generate “a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do.”

On 28 February, a sequence of events in Washington suggested that warning was no longer theoretical. President Trump ordered every federal agency to cease all use of Anthropic’s technology after the company refused Pentagon contract language it said “made virtually no progress on preventing its AI model, Claude, from being used for mass surveillance of Americans or in fully autonomous weapons.” Within hours, OpenAI announced an agreement with the United States Department of War (renamed by executive order from the Department of Defense) to deploy its AI models on classified military networks. Defence Secretary Pete Hegseth declared Anthropic a “supply chain risk,” a designation normally applied to companies linked to foreign adversaries, not to domestic firms with policy disagreements. That same day, the United States and Israel launched major strikes against Iran, killing Supreme Leader Khamenei. The timing underscored that AI deployment decisions are being made in the same political environment as military escalation.

The sequence is consequential. A company that maintained explicit civilian safeguards lost federal business. A competitor willing to proceed under agreed terms secured it. Whatever the internal governance of OpenAI’s deployment, the signal to the broader AI industry is clear: safeguards carry commercial risk, and flexibility wins contracts.

OpenAI has stated publicly that it shares certain principles Anthropic identified, including opposition to mass domestic surveillance and fully autonomous weapons. The Pentagon ultimately proceeded under OpenAI’s terms after declining to provide equivalent assurances to Anthropic. What changed was not the principle but the company.

There is no public, independent way to assess whether those provisions are functioning or merely stated. The deployment is classified. That opacity is itself a governance problem. When verification mechanisms are classified, democratic oversight shifts from public accountability to executive trust. That is not a neutral transition.

This is not principally a story about corporate rivalry. It is about what happens when the final accountability mechanism in a critical governance domain rests in private contractual language, and that language becomes negotiable under political pressure.

Amodei’s January 2026 essay, “The Adolescence of Technology,” develops the argument in full. He contended that fully autonomous weapons, combined with AI-enabled surveillance and information manipulation, could provide structural advantages to governments willing to deploy them without constraint. Democratic systems, with their procedural friction and oversight requirements, are at a structural disadvantage in that competition.

The essay has a companion. More than a year earlier, Amodei published “Machines of Loving Grace,” which laid out the positive case: AI will compress fifty to a hundred years of biological and medical progress into a decade, treat most cancers, transform mental health care, and accelerate development in the Global South. His January 2026 essay is not a retraction of that vision. It is its shadow: an argument that the same capabilities that could deliver those outcomes could, under different governance conditions, instead entrench authoritarian control in ways that prove structurally irreversible. Amodei makes no predictions about timelines or outcomes. Where credible risk of irreversible harm exists, he argues, governance must precede deployment. Recent days suggest that window is already closing.

The peacebuilding field recognises this pattern. In domain after domain, civilian oversight has lagged behind military application. Technical capability advances rapidly. Commercial incentives favour deployment. Legislators and civil society react after the fact. AI differs not in logic but in velocity and scale. The same systems embedded in civilian infrastructure are adaptable for military and intelligence use. The boundary between civilian and military AI is not a technical boundary. It is contractual and political. Contracts can change overnight.

Across three decades working in fragile and post-conflict environments, I have seen how institutional erosion unfolds. Surveillance powers justified for counter-terrorism migrate toward political monitoring. Emergency powers outlast emergencies. Oversight mechanisms that appear durable weaken under sustained political pressure. The consistent lesson is that accountability structures must be embedded before they are tested, not improvised after they have been breached.

That principle applies directly here. Two recently enacted transparency laws—California’s SB 53, in effect since January 2026, and New York’s RAISE Act, signed in December 2025—represent early attempts to require disclosure of AI safety practices, system documentation, and risk evaluation ahead of deployment. These are meaningful first steps. But transparency requirements are only as durable as the institutions receiving disclosed information and the political environment in which those institutions operate. When a company maintaining explicit civilian safeguards can be excluded from federal contracts and labelled a national security risk for doing so, the accountability ecosystem is already under strain. In the absence of comprehensive legislation, federal procurement has become the de facto regulatory mechanism for advanced AI.

The question is not whether AI will be deployed in the service of state power. It will be. The question is whether its deployment will remain subject to institutions strong enough to constrain power rather than concentrate it. If democratic institutions are to retain leverage, oversight must move from voluntary principles to institutional architecture.

That requires action on three fronts. Legislators should establish independent, security-cleared technical review bodies with authority to verify that contractual AI safeguards are functioning in classified deployments, not merely stated in contract language. Civil society organisations and researchers should press for public transparency reports from AI developers holding government contracts, specifying which provisions have been included and which have been waived. And democratic governments must commit to legislating AI-specific civil liberties protections that close the gap between what existing surveillance law anticipated and what AI systems can now do.

These are not radical proposals. They are the minimum architecture of accountability. The politics are moving in the direction of concentration rather than constraint. The question is whether democratic institutions can reassert authority before procurement practice hardens into precedent.

That challenge is already in court. On 9 March, Anthropic filed suit in federal court, arguing that the supply-chain-risk designation is unlawful retaliation for protected speech. The complaint notes, pointedly, that the Department of War “launched a major air attack in Iran with the help of the very same tools” it had just moved to ban. A government that simultaneously brands a technology a national security risk and deploys it in combat has not resolved the governance question. It has simply removed the institution that was asking it.

Jordan Ryan is a member of the Toda International Research Advisory Council (TIRAC) at the Toda Peace Institute, a Senior Consultant at the Folke Bernadotte Academy and former UN Assistant Secretary-General with extensive experience in international peacebuilding, human rights, and development policy.