The Biden administration is preparing a formal “national security memorandum” on artificial intelligence that will explore ways for the United States to “preserve and expand U.S. advantages” in AI technologies that could transform science, business and warfare, according to a senior administration official who has reviewed a draft of the memo.
Why the U.S. is opting against a ‘Manhattan Project for AI’
The new approach won’t propose the “Manhattan Project for AI” that some have urged. But it is expected to offer a platform for public-private partnerships and testing that could be likened to a national laboratory, in the spirit of Lawrence Livermore in Livermore, Calif., or Los Alamos in New Mexico. For the National Security Council officials drafting the memo, the core idea is to drive AI-linked innovation across the U.S. economy and government, while also anticipating and preventing threats to public safety.
The new strategy will focus on defense and intelligence agencies, aided by the newly created AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology. The Pentagon, the intelligence community and Commerce will work to develop partnerships with the five private companies that dominate AI research, all of them American: Microsoft-backed OpenAI, Google’s DeepMind, Elon Musk’s xAI, Meta AI and the start-up Anthropic.
The memorandum will “provide a framework for responsible use of AI, which will enable faster adoption” in the government and private sector, the administration official argued. He said the memorandum should be completed in late September or early October. What’s motivating this effort is the growing possibility of “artificial general intelligence,” or AGI, a future version of the technology that could match or surpass human performance across a wide range of cognitive tasks.
How government should engage with this transformative technology is a preoccupation for policymakers at home and abroad. An early test is an AI safety bill, Senate Bill 1047, just passed by the California legislature. One of its key provisions requires AI companies to exercise “reasonable care to avoid unreasonable risk” of catastrophes, according to Nathan Calvin, an AI safety lawyer who has been working on the legislation for state Sen. Scott Wiener (D). Silicon Valley tech companies are sharply divided on the bill, and Gov. Gavin Newsom (D) hasn’t said whether he will sign it.
The White House wants to connect future federal oversight of AI with international standard-setting that is already underway. The European Union recently adopted its AI Act, the world’s first comprehensive legislative framework for the technology. Britain convened a summit of 28 countries at Bletchley Park in November 2023 and, with South Korea, a follow-up meeting of the same nations in Seoul in May 2024. China was part of both groups.
AI’s potential is world-enriching and also, possibly, world-destroying. The national security implications received intense study in July at a meeting of the Aspen Strategy Group, a bipartisan gathering of top former government officials, business executives and journalists. At the concluding session, Philip Zelikow, a former State Department counselor now at Stanford’s Hoover Institution, presented a provocative proposal for reducing risks.
The government’s first obligation is a threat assessment, said Zelikow. It needs to study what catastrophes could be deliberately created by “the worst people and governments in the world using the most advanced possible models.” The government must also assess the havoc that could be wreaked by a “misaligned,” or rogue, AI. Once government agencies have gauged the risks, they will need to think about countermeasures, which will also involve AI, Zelikow argued.
The Manhattan Project analogy is tempting. “This may be a technology where the first-mover advantage would be world-historically decisive,” Zelikow told me. He noted the oft-cited danger that Adolf Hitler might have developed a nuclear bomb first, if America hadn’t raced to build one.
But even if Washington wanted to drive artificial intelligence the way it once drove nuclear research, it probably couldn’t, argues Graham Allison, a Harvard Kennedy School professor who has written extensively about AI. Unlike in the 1940s, Allison notes, today’s cutting-edge technologies, and the cash to fund them, are privately held, and the government struggles to keep pace. Public-sector managers are more often obstacles to innovation than enablers of it.
Rand, which played a crucial role in early nuclear-weapons strategy, is helping policymakers think about AI risks with a new project called the “Geopolitics of AGI Initiative.” Joel Predd, the project leader, explained to me Thursday that he wants to explore some key uncertainties about whether AGI will emerge gradually or suddenly, and whether the country that obtains it first will have an unbreakable strategic or competitive advantage.
“Our central research question,” he said, “is how the U.S. government should address the deeply uncertain but technically credible potential that world-leading AI labs are on the cusp of developing” AGI.
The debate is just beginning. Already, the tech world is dividing between “doomers,” who think AGI will mean the end of humanity, and “accelerationists,” who see it as “a way to make everything we care about better,” in the words of techno-optimist Marc Andreessen, who co-created Mosaic, one of the first widely used web browsers.
Joe Biden is the paradigm of an old-fashioned, analog guy. But in his final months in office, his team is boldly trying to write the rules for a digital technology that will have the capacity to rearrange, for good or ill, every piece of our global mosaic. As Shakespeare’s Miranda exclaims at the end of “The Tempest”: “O brave new world.”