Anthropic CEO Defies Pentagon Demands Over AI Ethics in Surveillance and Weapons


Dario Amodei, the chief executive officer of Anthropic, has publicly declared that the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands for broader use of its technology. He made the statement on Thursday, February 26, 2026, amid ongoing negotiations with the U.S. Department of Defense. The conflict centers on ethical concerns over the potential use of Anthropic's AI models, such as its chatbot Claude, for mass surveillance of Americans and in fully autonomous weapons systems.

Pentagon's Stance and Deadline Pressure

Sean Parnell, the Pentagon's top spokesman, reiterated on social media that the military intends to use Anthropic's artificial intelligence technology solely for lawful purposes. He emphasized that the Defense Department has no interest in employing AI for illegal mass surveillance or developing autonomous weapons without human involvement. However, Parnell set a firm deadline of 5:01 PM ET on Friday for Anthropic to agree to the Pentagon's terms, threatening to terminate the partnership and designate the company as a supply chain risk if compliance is not met.

During a recent meeting between Defense Secretary Pete Hegseth and Amodei, military officials warned of potential consequences, including contract cancellation or invocation of the Cold War-era Defense Production Act, which could grant the military sweeping authority to use Anthropic's products without company approval. Parnell mentioned only some of these repercussions in his Thursday post, heightening tensions as the deadline approaches.

Anthropic's Ethical Policies and Industry Context

Anthropic has maintained strict policies that prohibit its models from being used for mass surveillance or in autonomous weapons. The company stated that new contract language from the Defense Department "made virtually no progress" on preventing such applications. Anthropic is the only one among its peers, including Google, OpenAI, and Elon Musk's xAI, that has not supplied its technology to a new U.S. military internal network, underscoring its commitment to ethical AI development.

In a statement following Tuesday's meeting, Anthropic expressed its desire to continue "good-faith conversations" to support the government's national security mission in a responsible manner. The company did not immediately respond to recent requests for comment, but its stance reflects a broader industry debate over AI governance and military applications.

Political Reactions and Calls for Governance

Senator Thom Tillis, a North Carolina Republican not seeking reelection, criticized the Pentagon's handling of the matter as unprofessional, arguing that such discussions should occur privately rather than in public. He suggested that the government should listen to companies like Anthropic when they decline opportunities over ethical concerns, working behind closed doors to address the underlying issues.

Senator Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, expressed deep disturbance over reports that the Pentagon is "working to bully a leading U.S. company." He called for Congress to enact strong, binding AI governance mechanisms for national security contexts, citing this incident as evidence of the Defense Department's disregard for AI governance.

Defense Secretary Hegseth has previously stated on Fox News that the Pentagon seeks legal advice that does not act as a roadblock to operations, underscoring the ongoing tension between military objectives and ethical AI use. As the deadline looms, this clash between Anthropic and the Pentagon highlights critical issues in artificial intelligence ethics, national security, and corporate responsibility in the tech industry.