Proposed moratorium on state-level AI regs aims to level the playing field, lawmaker says


Rep. Rich McCormick, R-Ga., defended the proposed decade-long moratorium on states enforcing individual laws governing artificial intelligence as a proactive step toward fostering equality among tech companies.

Speaking during a GovExec event Wednesday, he said the ten-year moratorium on individual state AI regulations, included in the House Energy and Commerce Committee’s portion of the budget resolution that passed the House early Thursday, aims to prevent conflicting AI laws from stifling innovation.

“You don’t want a bunch of states having cross-contamination of law,” McCormick said. “If you develop something very good in your state, then you try to market it interstate, and all of a sudden the regulations vary from state-to-state, you’re going to die in regulations. It’ll be regulatory purgatory, and you will not succeed as a business.”

McCormick, who was a member of the House AI Task Force set up during the previous Congress, added that this potential cross-contamination of rules on how AI software products can or can’t operate could hinder the broad adoption, use and development of advanced AI and machine learning capabilities. He said that AI development is “the centerpiece” of the transformation in how humans work. 

“We want to … make sure that everybody has a crack at this, that they have equal opportunity to develop their mom-and-pop shops into billion-dollar industries,” he said. 

Absent local regulation, McCormick is optimistic that future federal law will serve as solid, uniform guidance for the country.

“I think we’ll have to, as a nation, decide what those guardrails are,” he said. “I’m sure that it will evolve over time, because there’s things that we haven’t even thought of.”

The uncertainty in how exactly AI software applications will evolve over time makes crafting effective laws notoriously difficult. McCormick acknowledged that threading the needle between applying appropriate safety measures and penalties, such as in the now-passed Take It Down Act, while respecting individual constitutional rights will be a focal point of future debates in Congress. 

“A lot of that stuff is subjective,” he told Nextgov/FCW on the sidelines of the event Wednesday. “The problem is, how do we write laws that are made for a judge or a jury to interpret appropriately?”

The 10-year state enforcement prohibition has drawn criticism from policy experts and think tank leaders, with over 140 consumer protection groups signing a letter to House leadership urging it to reject the proposed moratorium.

“Despite how little is publicly known about how many AI systems work, harms from those systems are already well-documented, and states are acting to mitigate those harms,” the letter said. “Many state laws are designed to prevent harms like algorithmic discrimination and to ensure recourse when automated systems harm individuals.”

Privacy law experts have also entered the discourse, saying the moratorium’s language is tough to dissect. Cobun Zweifel-Keegan, managing director at the International Association of Privacy Professionals, wrote that, at first glance, the moratorium doesn’t target other technological systems thanks to an adjoining “rule of construction.”

“The moratorium would seem to target laws like the Colorado AI Act that singles out high-risk AI systems,” Zweifel-Keegan said. “It would also proscribe enforcement of other common types of legislative proposals, such as those in the employment sector and those governing the development of foundation models.”

Other technological systems that see some level of state regulation, such as data privacy, cybersecurity, automated decision-making and biometrics, may also suffer from uncertainty about the moratorium’s reach. 

“Its intended effect on the dozens of state AI laws that apply to the states’ own internal governance of such systems is less clear,” Zweifel-Keegan said.
