The era of AI-powered warfare has entered a new phase. OpenAI has officially signed an agreement to bring its advanced language models into the Pentagon’s most secure environments. This landmark deal follows the dramatic expulsion of Anthropic, which was banned from all federal agencies after a high-stakes standoff over the ethical limits of artificial intelligence.
The “red lines” that Anthropic refused to cross involved the most controversial uses of AI: autonomous weapons and mass surveillance. Anthropic’s leadership believed that allowing their AI to kill without human oversight or spy on the American public was a bridge too far. The Pentagon, however, saw these ethical stances as a hindrance to maintaining a technological edge over global adversaries.
In the resulting political firestorm, President Trump personally intervened to terminate Anthropic’s federal contracts. He described the company’s refusal to modify its terms as a “disastrous mistake” and a sign of disrespect toward the military. This created the perfect opening for OpenAI to step in and position itself as a more “aligned” partner for the nation’s defense goals.
What makes OpenAI’s deal notable is its claim to preserve the very ethical safeguards Anthropic was fighting for. Sam Altman has insisted that the Pentagon formally agreed to OpenAI’s rules against autonomous lethality and domestic spying. This suggests the administration was more offended by Anthropic’s perceived arrogance than by the actual content of its safety protocols.
Anthropic is now an outsider in the federal space, but an outsider with a clear conscience. The company has stated that it acted in good faith throughout the process and that its restrictions never actually hindered a government mission. As OpenAI begins the complex task of integrating its models into the Pentagon’s classified systems, the debate over the ethics of AI in war is only just beginning.