Sam Altman walked into a room full of US intelligence officials in the summer of 2017 with a claim that would have set off alarms anywhere: China had launched an “AGI Manhattan Project,” he said, and OpenAI needed billions in government funding to keep pace. It was exactly the kind of geopolitical threat that opens wallets in Washington — and, according to a major New Yorker investigation published this week, it may have had no basis in fact.

When pressed for evidence, Altman said, “I’ve heard things.” He later promised a follow-up with proof. It never came. The official who looked into the claim told The New Yorker there was no evidence the Chinese project existed — it was, in his words, “just being used as a sales pitch.”
The Manhattan Project pitch that kept moving goalposts
This wasn’t a one-off. Altman reportedly made the China claim across several meetings with intelligence officials, each time leaning on the same nuclear-age analogy he’d been using since co-founding OpenAI in 2015 — that building AGI was the new Manhattan Project, and that the stakes were civilizational.

The New Yorker’s investigation, based on interviews with more than 100 people and a trove of internal documents, finds that Altman used the analogy differently depending on who was in the room. With national security audiences, it was a scare tactic to unlock funding. With safety-conscious researchers, it was a call for caution and international coordination. The framing shifted. The ask didn’t.
From government rooms to the Saudi royal court
The pattern extended well beyond Washington briefings. Altman pursued funding from the Saudi sovereign wealth fund shortly after the Khashoggi murder, asked advisers whether he could “get away with” taking Saudi money despite the optics, and eventually partnered with the UAE’s Sheikh Tahnoun — a man The New Yorker describes as the nation’s spymaster — to build a data center campus in Abu Dhabi seven times the size of Central Park.

His former colleagues have a name for how this works. Dario Amodei, who left OpenAI to found Anthropic, documented the pattern across 200 pages of private notes. Ilya Sutskever, OpenAI’s former chief scientist, reached the same conclusion in secret memos compiled before Altman’s brief 2023 firing. Both men arrived at nearly identical verdicts: Altman tells people what they want to hear, and the consequences come later.
