At the Vanderbilt Summit on Modern Conflict and Emerging Threats, OpenAI CEO Sam Altman issued a diplomatically worded but telling admission: he would not rule out cooperation with the U.S. Department of Defense in the development of AI-based weapons systems. The comment, made during a panel discussion with former NSA Director and current OpenAI board member Paul Nakasone, marks an apparent shift in the historically skeptical relationship between Silicon Valley and the defense community.
“I will never say never, because the world could get really weird,” Altman replied when asked whether OpenAI might get involved in creating weapons platforms. He immediately qualified the statement, adding, “I don’t expect to do it in the foreseeable future,” barring extreme circumstances in which the alternatives would amount to “really bad options.”
Altman’s comments come at a time when the AI industry is growing more open to defense contracting, a far cry from the employee outrage that erupted in 2018, when Google’s Project Maven work sparked protests and resignations en masse. OpenAI itself has been softening its stance: last December, the company announced a strategic partnership with defense-tech company Anduril Industries to co-develop anti-drone technologies, an early sign that it is open to national security partnerships on certain terms.
However, Altman said he remains skeptical of AI in autonomous weapons. “I don’t believe most of the world would want AI to be making weapon decisions,” he told the packed audience of military commanders, intelligence officers, and university researchers.
Altman’s shifting stance reflects wider tensions within the tech world as AI technologies advance and become more deeply embedded in both civilian and military realms. His comments come just days before OpenAI is set to release its closely watched o3 model, a next-generation system for sophisticated reasoning.
The summit conversation also touched on the government’s broader role in embracing and employing AI tools. “I don’t think the adoption of AI in the government has been as good as it can be,” Altman stated, urging public-sector leaders to engage more seriously with the rapidly changing technology. He predicted the development of “exceptionally smart” AI systems within the next year, a pace of innovation that many in the defense community are finding difficult to match.
OpenAI’s recent moves, including the appointment of a high-ranking former intelligence chief like Nakasone to its board, signal a deliberate effort to bridge the gap between Silicon Valley innovation and national security imperatives. But how far that collaboration will go remains to be seen.
For now, Altman’s cautious openness suggests that OpenAI, while still guided by ethical concerns, is not closing the door on defense work. As geopolitical tensions rise and AI’s strategic value increases, such partnerships may become not just more acceptable—but inevitable.