Dario Amodei’s company chose democratic values over profit, blocking CCP-linked firms and standing up to a Pentagon ultimatum
A Decision That Cost Real Money
In a statement issued on February 27, 2026, Anthropic CEO Dario Amodei confirmed that his company had deliberately walked away from several hundred million dollars in revenue to block access to its Claude AI system by firms linked to the Chinese Communist Party. The decision included cutting off access for entities designated by the US Department of Defense as Chinese Military Companies. Anthropic also confirmed it had shut down CCP-sponsored cyberattacks that attempted to use Claude for hostile purposes.
The announcement came as Anthropic simultaneously revealed it was in a direct confrontation with the Pentagon itself – a confrontation that may cost the company its US government contracts. The statement from Amodei represents one of the most public and principled stances taken by any major AI company on the question of who should and should not have access to frontier artificial intelligence systems.
The CCP Threat to Democratic AI
Anthropic’s decision to block CCP-linked firms reflects a growing recognition across the technology sector that advanced AI is not a neutral commercial product. It is a capability that can be weaponized. According to Anthropic’s statement, the company has also advocated for strong export controls on advanced semiconductors – the chips that make frontier AI possible – specifically to ensure that democratic nations maintain a structural advantage in AI development over authoritarian states.
This position puts Anthropic directly at odds with the short-term economic logic that has led many technology companies to prioritize access to Chinese markets over security considerations. The decision to forgo hundreds of millions in CCP-linked revenue is a concrete demonstration that a different approach is possible – and that the companies best positioned to sustain it are those that take seriously the question of what their technology could be used for by actors who do not share democratic values.
The Pentagon Ultimatum
The same statement that announced the CCP revenue block also revealed a serious confrontation with the US Department of Defense. According to Amodei, the Pentagon has stated it will only contract with AI companies that agree to “any lawful use” of their technology and remove specific safeguards. The safeguards in question relate to two capabilities Anthropic has publicly stated it will not support: mass domestic surveillance and fully autonomous weapons systems.
The Pentagon has threatened to remove Anthropic from its systems if the company maintains these safeguards. It has also threatened to designate Anthropic a “supply chain risk” – a label normally reserved for US adversaries – and has explored invoking the Defense Production Act to force removal of the safeguards. Amodei’s statement called these two threats “inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Why the Safeguards Matter
Anthropic’s refusal to support mass domestic surveillance and fully autonomous weapons is not merely a corporate ethics position. It reflects a substantive judgment about what frontier AI systems are and are not capable of doing reliably, and what the consequences of unreliable autonomous systems could be. A weapons system that makes kill decisions without human judgment, operating on AI that may misidentify targets or be fooled by adversarial inputs, creates risks that extend far beyond any individual engagement.
The company has stated it is ready to continue supporting US national security efforts across intelligence analysis, modeling and simulation, operational planning, and cyber operations – all areas where Claude is currently deployed within classified government networks and national laboratories. What it will not do is remove the guardrails that prevent its technology from being used for purposes it judges to be inconsistent with democratic values and responsible AI development. As Freedom House has documented, AI systems deployed without such guardrails by authoritarian states have become powerful instruments of repression. Anthropic is drawing a line that others should follow.
Pui Yi Cheung
Economy & Labor Journalist, Apple Daily UK
Contact: puiyi.cheung@appledaily.uk
Pui Yi Cheung is an economy and labor journalist with expertise in employment trends, small business dynamics, and workers’ rights. Educated at a respected UK journalism school, she received formal training in economic reporting, data literacy, and investigative techniques, equipping her to cover complex financial topics accurately.
She has contributed to Apple Daily and other liberal Chinese newspapers, reporting on wage policy, employment conditions, labor organizing, and the economic challenges facing diaspora communities. Her work emphasizes firsthand interviews and careful examination of official statistics and regulatory documents.
Pui Yi brings real newsroom experience in translating economic data into accessible reporting without sacrificing accuracy. She is known for methodical fact-checking and for consulting independent experts when covering technical subjects.
Her reporting is backed by consistent editorial oversight and adherence to transparency standards, including clear sourcing and prompt corrections when required.
At Apple Daily UK, Pui Yi Cheung produces trustworthy economic journalism grounded in evidence, professional experience, and public-interest reporting.
