Voice
AI Crossroads: Conflict or Co-Governance
By Wang Jiahao  ·  2026-04-09  ·   Source: NO.15 APRIL 9, 2026

A recent confrontation between U.S. AI company Anthropic and the Donald Trump administration has pushed the ethical boundaries of military AI applications into the global spotlight.

The Pentagon labeled Anthropic a "supply chain risk" after the company refused to allow its AI models to be used for fully autonomous weapons and mass domestic surveillance. Trump subsequently ordered federal agencies to halt the use of Anthropic's technology. However, on March 26, a federal judge issued a preliminary injunction blocking the actions, citing potential First Amendment violations.

The case has been widely viewed as an example of technological ethics being overridden by political power. It not only exposes a tendency toward technological control in the U.S., but also highlights deeper divergences in global approaches to AI governance.

Technology hegemony

The dispute dates back to July 2025, when the U.S. Department of War's Chief Digital and Artificial Intelligence Office announced contracts worth up to $200 million each with frontier AI firms, including Anthropic, Google, OpenAI and xAI, to develop advanced capabilities for national security.

In late 2024, Anthropic's Claude, one of the world's most advanced AI models, became the first to clear the Pentagon's classified hurdles, gaining access to secret-level military and intelligence networks.

Tensions emerged this February during contract renewal negotiations, when disagreements arose over the scope of permissible uses. The Pentagon sought authorization for "all lawful purposes," while Anthropic insisted on maintaining two non-negotiable red lines: prohibiting the use of its systems for mass domestic surveillance and for fully autonomous lethal weapons.

Anthropic CEO Dario Amodei argued that current AI systems lack the judgment required of human soldiers and could lead to unintended harm to civilians or allied forces, stating that the company was unwilling to provide technology that might endanger lives.

The Pentagon, however, maintained that existing laws and military policies already prohibit such uses, and that private companies should not have veto power over national security decisions.

On February 27, in a statement on his online platform Truth Social, Trump ordered all federal agencies to cease using Anthropic's technology. On the same day, Secretary of War Pete Hegseth announced on X that Anthropic would be designated a "national security supply chain risk."

A court intervention

On March 26, Judge Rita Lin of the U.S. District Court for the Northern District of California ruled that the government's punitive measures may have exceeded its authority and ordered a temporary halt to their implementation.

Anthropic's situation is not an isolated case; it reflects the broader challenges faced by technology firms operating under intense state pressure. As a company focused on AI safety, its decision to uphold ethical red lines in military applications can be seen as an expression of corporate responsibility. Yet this stance also brought it into direct conflict with a government increasingly willing to override such boundaries.

The case reveals a deeper logic in which AI is viewed not merely as a tool for improving productivity, but as an instrument for maintaining technological dominance. When technological development aligns with national power interests, it is often framed as "open" and "innovative." When ethical constraints limit its potential military or surveillance applications, administrative and political measures may be deployed to enforce compliance.

In this sense, the case can be understood as a manifestation of technological hegemonic tendencies, one that binds private innovation to state power and risks transforming advanced technologies into instruments of coercion. This dynamic erodes corporate autonomy, distorts the direction of innovation and introduces instability into an already fragile global framework for AI governance.

Confront or cooperate?

The Anthropic case is a warning. As AI technologies advance at an unprecedented pace, the international community urgently needs widely accepted ethical norms and a shared governance framework.

On the one hand, the U.S. has promoted narratives such as a "China technology threat" and invoked "national security risks" to justify restrictions on foreign hi-tech companies. On the other hand, it has actively pushed forward the militarization of AI, including efforts that could cross ethical boundaries, such as the development of fully autonomous weapons systems. Such double standards undermine fair competition in the global AI landscape.

At its core, technological hegemony reflects the transformation of humanity's shared technological achievements into geopolitical tools used to preserve unilateral advantages and suppress ethical consensus. The Anthropic episode clearly reflects this trend.

What is needed now is openness rather than exclusion, cooperation rather than unilateral dominance and shared benefits rather than monopolization. If technological development is allowed to drift away from the ethical anchor of being people-centered and directed toward the good, AI risks evolving from a driver of progress into a force that wears down trust and threatens peace.

Looking ahead, the construction of a global AI governance framework must move beyond narrow technological instrumentalism and geopolitical rivalry. It should be grounded in a vision of a shared future for humanity, addressing both ethical concerns and the autonomy of intelligent systems.

This requires a commitment to consultation, joint contribution and shared benefits, placing common security and sustainable development above technological competition. A truly inclusive multilateral governance system should be built—one that reflects the interests of all countries, especially developing ones in the Global South.

At the institutional level, we need a multi-layered and adaptive governance structure that balances digital sovereignty with international cooperation, while also ensuring that developing countries have a meaningful voice in rule-making.

Ultimately, the future of AI should be defined as a sustainable, inclusive and safe global public good, not as a new vehicle for technological domination. The Anthropic case should serve as a reminder of the risks posed by such supremacy to global ethical consensus and stability. The international community must work together to build a governance system that is inclusive, transparent and accountable, ensuring that AI becomes a force that supports, rather than undermines, a shared future for humanity.

The author is a program officer at the Shanghai Institute of American Studies

Copyedited by Elsbeth van Paridon

Comments to dingying@cicgamericas.com
