A Molotov cocktail-type device was flung at OpenAI CEO Sam Altman’s home in San Francisco a couple of weeks ago. Days earlier, more than a dozen shots were fired at the front door of an Indianapolis councilman who had backed the construction of a data center. No one was injured in these incidents, but they point to a violent turn in the conversation on artificial intelligence in the U.S. — a very different tone from the rest of the world.
As AI adoption increases globally, anxiety about AI is rising — but so is optimism about its benefits, according to a recent study from Stanford University’s Human-Centered Artificial Intelligence center. Not in the U.S. To the statement, “products and services using AI make me excited,” only 38% of respondents in the U.S. agreed, compared with 84% in China. Southeast Asians are among the most optimistic about AI, with 80% of Indonesians, 79% of Thais, and 77% of Malaysians agreeing.
Critically, while over half of all survey respondents said they trust their government to regulate AI responsibly, only 31% in the U.S. did — the lowest score in the study. Singapore topped the list at 81%, followed by Indonesia at 76% and Malaysia at 73%.
Greater enthusiasm for AI and higher trust in institutions can help quicken the pace of AI adoption, encourage startup founders, attract investors, and create a more enabling ecosystem for research and innovation. This is evident in Singapore, which saw a higher-than-expected pace of adoption — 61%, compared with 28% in the U.S. — in the second half of last year. Along with Switzerland, the Southeast Asian nation also leads in the number of AI researchers and developers per capita, the result of years of investment in education and government backing.
Meanwhile, in the U.S., resistance to data centers is delaying build-outs and pushing companies to consider other locations around the world. It could also affect the flow of talent: while the U.S. still attracts more talent than it loses, the number of AI researchers and developers moving to the U.S. has dropped 89% since 2017, with an 80% decline in the last year alone, the Stanford study noted. There are other factors, of course, but enthusiasm for AI and clear government policies can be a powerful draw.
“Optimism about AI and trust in government matter. … They can reduce friction around adoption and make a country more attractive to startups, researchers, and investors,” Simon Chesterman, a senior director of AI governance at the government-backed agency AI Singapore, told me. These have to be “matched by investment in talent, compute, infrastructure, and credible governance if it is to produce a thriving and resilient AI ecosystem,” he said.
There are very real concerns about the impact of AI on jobs, the environment, and our lives. Last month, I spoke to some of the more than 100 people who protested against AI outside the offices of Anthropic, OpenAI, and xAI in San Francisco. Many were AI researchers and software engineers, as well as educators and artists. They called for a stop to the AI race, and an end to research into artificial general intelligence. I could not help but contrast their anger and anxiety with the viral videos out of China days before, of the Spring Festival, showing young children blithely dancing alongside humanoid robots.
The 20-year-old suspect in the attack on Altman’s home has been charged with attempted murder. In a blog post shortly after the incident, Altman included a rare picture of his family, and said he hoped that “it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me.” It will take more than a picture of Altman’s family, though. Building trust in institutions, providing skilling opportunities for the population, and passing laws for the responsible development and deployment of AI are perhaps a surer way to assuage some concerns about AI. Just ask Singapore.
