Open Letter from Google and OpenAI Employees Raises Concerns About Potential Military AI Use

Reviewed by Ayaz Khan.

An open letter titled "We Will Not Be Divided," signed (as of February 28, 2026) by 573 current Google employees and 93 current OpenAI employees, calls on company leadership to decline requests that the letter describes as coming from the United States Department of Defense (DoD).

All signatures were verified as belonging to current employees; some signatories chose to remain publicly anonymous.
Screenshot: Notdivided.org / Credit: DIW

The letter claims that the department has considered invoking the Defense Production Act in connection with Anthropic and has discussed measures that could require the company to provide access to its AI models for military use. It further states that Anthropic declined to allow its models to be used for domestic mass surveillance or for fully autonomous lethal decision-making without human oversight. Echoing these concerns, OpenAI CEO Sam Altman told CNBC that he does not think the Pentagon should threaten AI companies with the Defense Production Act and that companies should be able to decide whether to cooperate under legal protections.

On Saturday, Altman also posted on X that OpenAI had reached an agreement with the Department of War to deploy its models in the department's classified network, noting that the department agrees with the company's safety principles, including prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapon systems.

Sam Altman's post on X: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place."
Screenshot: Sam Altman - X / Credit: DIW

According to the letter, the Department of Defense has engaged in discussions with Google and OpenAI regarding potential cooperation on similar AI capabilities. The letter does not include independent verification of these claims but presents them as the understanding of its signatories.


Notes: This post was improved with AI assistance and reviewed, edited, and published by humans.
