A Biden administration-era office at the Department of Commerce dedicated to understanding and defining the risks of artificial intelligence systems and offering mitigation guidance is undergoing a major change: it will be renamed the Center for AI Standards and Innovation.
In an announcement on Tuesday, Commerce Secretary Howard Lutnick confirmed that the former U.S. AI Safety Institute will shift its focus toward the commercial potential of AI systems, while continuing to evaluate system safety and vulnerabilities.
The CAISI — still housed in Commerce’s National Institute of Standards and Technology — will continue to emphasize guidance development, testing and evaluations for various safety measures, but it will notably support the light-touch regulatory approach favored by the Trump administration.
“For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards,” Lutnick said in the press release. “CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards.”
The CAISI will also work to establish voluntary agreements with private sector AI developers, focusing on risks surrounding cybersecurity, biosecurity and chemical weapons.
It will also coordinate across departments to ensure agencies have proper evaluation methods and assessment protocols in place.
A new international focus will also be included among the CAISI’s pillars: the institute will act as a representative of the U.S. to advocate against “burdensome and unnecessary regulation of American technologies by foreign governments.”
The pro-innovation mindset embedded in the CAISI reflects the Trump administration’s less cautious, more optimistic approach to global AI dominance. By contrast, the inaugural director of the AISI under President Joe Biden, Elizabeth Kelly, said in March 2024 that the vision for the institute revolved around model risk mitigation, with a focus on watermarking content to differentiate synthetic material from real.
The AISI also pursued international collaboration at that time, including talks with allied nations about creating a level playing field for the private sector to operate globally.
Some major AI companies, namely OpenAI and Anthropic, previously agreed to collaborate with the AISI on safety evaluations of their flagship models.