
Anthropic is a better fit for Europe than for the US

Trump opposes Anthropic's “woke” AI — but the US military used precisely that AI in bombing raids on Iran.

Published on March 2, 2026


Merien co-founded E52 in 2015 and envisioned AI in journalism, leading to Laio. He writes bold columns on hydrogen and mobility—often with a sharp edge.

The contrast could not have been greater last Friday. On his own platform Truth Social, President Donald Trump declared war on AI company Anthropic. He called them “The Leftwing nut jobs at Anthropic” and ordered an immediate boycott by the federal government. He had not yet clicked “send” when reality caught up with the rhetoric. Just hours after the presidential ban, the US military used that same technology for a large-scale attack on Iran. This incident exposes a painful truth. The political desire to work exclusively with companies that support one's own ideology clashes head-on with the military's need to survive. The software is deeper in the system than the White House realizes.

The conflict: Principles versus total obedience

The core of the conflict is ideological. It is not about money, but about control. Anthropic, the creator of the AI model Claude, has strict red lines. CEO Dario Amodei categorically refuses to cooperate on two specific issues: mass surveillance of American citizens and fully autonomous weapons systems. For Amodei, these are fundamental democratic values. For President Trump, this is insubordination. In his tirade on Truth Social, Trump stated that decisions about warfare belong exclusively to the Commander-in-Chief. He does not accept that a private company dictates how the military fights through its terms and conditions.

The White House's response is draconian. The Pentagon is threatening to designate Anthropic as a supply chain risk. This is a label usually reserved for Chinese espionage equipment. Trump gave federal agencies, including the Department of War, six months to sever all ties. He threatened to use the “full power of the presidency” and criminal consequences if Anthropic did not cooperate with the transition. However, the company is standing its ground. Amodei argues that demanding any lawful use without ethical boundaries is unacceptable. He chooses to forego hundreds of millions of dollars in revenue rather than compromise his principles.

The practice: Indispensable on the front lines

The political reality in Washington is at odds with the operational reality in the Middle East. While Trump issued his ban, US Central Command (CENTCOM) launched a combined attack with Israel on Iranian targets. Claude played a crucial role in this. The system was used for intelligence analysis, target identification, and combat scenario simulation. This was no coincidence. Anthropic has developed a special version of Claude for the Pentagon. This version runs in a highly secure, classified cloud environment.

Experts indicate that this specific model is one to two generations ahead of what consumers use. It can recognize more complex patterns in huge datasets than any other system. The military cannot simply replace this “brain.” Secretary of Defense Pete Hegseth implicitly acknowledged this.

Despite accusing Anthropic of treason and arrogance, he allowed for a six-month transition period. He knows that immediately shutting down Claude would blind U.S. troops in the middle of a conflict. The irony is sharp: while Trump claims the company poses a security risk, their technology was being used at that very moment to support U.S. operations.

The competition: OpenAI fills the gap

Where Anthropic is putting on the brakes, competitor OpenAI is stepping on the gas. CEO Sam Altman has cleverly taken advantage of the dispute. OpenAI immediately signed a new agreement with the Department of War. Unlike Anthropic, OpenAI is taking a more flexible stance. Their new policy allows military use, as long as it falls within the legal framework. OpenAI promises that their AI will not be allowed to control autonomous weapons, but places the responsibility for compliance largely with the government itself.

This pragmatism is earning OpenAI lucrative contracts. The company now positions itself as the patriotic choice that does not make a fuss about definitions of surveillance. Yet critics warn against this shift. Privacy International argues that the militarization of technology companies is a slippery slope. By deeply integrating commercial AI into military systems, the line between civilian tech and weapons of war is blurring. Anthropic tried to guard that line. OpenAI now seems to be pushing that line in exchange for market share and political favor.

Why not bring Anthropic to Europe?

The situation leaves Anthropic in a difficult position. They are technically superior, but politically homeless in their home country. However, this opens up an unexpected perspective: the European Union. What is seen in Washington as “woke” obstruction is seen in Brussels as sensible policy. The EU AI Act, which entered into force in August 2024, fits seamlessly with Anthropic's corporate philosophy.

European legislation prohibits precisely those things that Anthropic also opposes. Think of systems that judge citizens on their behavior and untargeted mass surveillance. The EU demands strict human oversight of high-risk systems, such as those used in law enforcement or critical infrastructure. While the White House demands that safety filters be removed, Brussels demands that they be strengthened. Anthropic would not be a thorn in Europe's side, but rather a model student. A strategic relocation, or a stronger focus on the European market, could shield Anthropic from the vagaries of American politics. It would give the EU direct access to the most advanced AI in the world, while Anthropic would find a safe haven where ethics is not a dirty word, but legislation.