Chalk messages scrawled across the pavement outside OpenAI’s San Francisco headquarters this week asked pointed questions about ethics, accountability, and the limits of artificial intelligence. The words came from outside activists — but the frustration they expressed had already been brewing within the building for days.
The flashpoint was a deal OpenAI struck with the US Department of Defense, granting the Pentagon access to its AI models for use within classified systems. The agreement landed on a Friday, catching many inside the company off guard, and has since triggered a wave of internal criticism that leadership has struggled to contain.
A RIVAL’S REFUSAL SETS THE STAGE
The backdrop to the controversy involves Anthropic, OpenAI’s most prominent competitor in the large language model space. Anthropic had previously declined to renew its own Pentagon contract after concluding that the agreement’s language did not adequately protect against two scenarios the company considers absolute limits: mass surveillance and the development of autonomous weapons systems. The Pentagon responded by blacklisting Anthropic, designating it a supply chain risk — a significant and unusual rebuke.
What happened next deepened the sense of betrayal felt by some OpenAI staff. As the government’s deadline for Anthropic’s decision approached, OpenAI CEO Sam Altman publicly stated he agreed with Anthropic’s position and shared the same ethical boundaries. Hours later, OpenAI announced it had signed its own deal with the Pentagon — effectively stepping into the space Anthropic had vacated.
The timing struck many observers, including a number of OpenAI employees, as deeply opportunistic. Scrutiny of the published contract terms intensified over the weekend, with critics arguing that the language around autonomous weapons and surveillance was too vague to provide meaningful protection. Altman took to social media to address the backlash and later announced revisions to the contract intended to more clearly prohibit the use of OpenAI technology in surveillance programs, though the updated language made no specific mention of autonomous weapons.
EMPLOYEES SPEAK OUT AS LEADERSHIP SCRAMBLES
Several OpenAI researchers and safety specialists went public with their concerns. One research scientist posted that he personally did not believe the deal had been worth pursuing, while an AI safety employee called for independent legal analysis of the revised contract terms. By one account, internal discussions were overwhelming in their scale and intensity.
One current employee, speaking anonymously, said colleagues widely respect Anthropic for holding firm against Pentagon pressure. The frustration, however, was not purely about the deal itself but about how it was communicated — or rather, how it wasn’t. A contract of this magnitude and sensitivity, they said, felt rushed and poorly explained.
Altman acknowledged the misstep. At a company-wide meeting, he admitted that the speed at which the agreement was finalized was a mistake and that his attempt to quietly navigate the situation had come across as careless and self-serving.
His broader argument to employees was strategic: governments should be working with safety-focused AI labs rather than those with fewer ethical safeguards. He also stated he is actively pushing for Anthropic’s supply chain risk designation to be lifted — a notable gesture toward a company his own actions had just helped sideline.
The internal debate is far from over.