BALTIMORE — Without laws on the books, AI becomes the Wild West.
But without a nationwide, uniform set of regulations, tech companies are left instead with a patchwork of complicated and potentially conflicting rules.
"It does make it difficult to do business when a company that is doing business nationwide has to maybe change their software, change their way of doing business from one state to the other," Anton Dahbura, Executive Director of the Johns Hopkins University Information Security Institute and Co-Director of the Johns Hopkins Institute for Assured Autonomy, told WMAR-2 News.
And if we want to stay competitive as a country on the digital battleground for AI dominance - the thinking goes - we should make it easy for those companies to innovate.
That's why a measure was originally included in President Trump's "Big Beautiful Bill" that banned states from enforcing AI regulations for the next 10 years.
But as Dahbura tells us, the federal government wasn't putting forth its own plan to be used in place of the ones states are currently coming up with.
"So it doesn't really make sense to tell the states: 'you have to stop and undo what you've done, and we're not going to do anything either,'" he said.
AI laws are popping up more and more across the country. California, for example, has laws for robotaxis, and in Colorado, an act that takes effect next year protects consumers from biased algorithms that could lead to discrimination in areas like hiring and loan applications.
But as Dahbura sees it, lawmakers have not caught up to how quickly the technology is evolving.
"Could an autonomous robot arrest someone, for example? Things that, you know, we've seen in science fiction movies not too long ago, but now they're becoming real. And so these conversations need to happen constantly at the national level," Dahbura said.
Europe is already way ahead, and some say they've gone too far.
The European Union passed sweeping legislation last year that prohibits entire applications of AI, such as compiling facial recognition databases. Some view the crackdown as too restrictive.
Dahbura says most applications of AI can be used for either good or evil, and governments have to balance prioritizing safety without smothering innovation.
"For instance, you can use facial recognition on a crowd to do surveillance, or you can use it for determining if someone's in medical distress to potentially save their life. So it's very tricky. It's a fine line," he told WMAR-2 News.
Here in Maryland, the General Assembly passed a bill this year to establish a working group that will look at new ways to regulate AI in the state, although some limited regulations do exist already - such as the new law that targets AI-created sexually explicit images. But plenty of bills proposed this year never made it anywhere.
Lawmakers had proposed bills to regulate artificial intelligence in education, crime, economic development, elections and consumer protection. The National Conference of State Legislatures identified 43 bills in Maryland's legislative session that involved AI. Only 11 made it to Governor Moore's desk, and he signed only seven into law.
Of those seven, three had the portion mentioning AI crossed out, two mentioned AI only briefly, one established the workgroup to study the implementation of AI in the state, and the last regulates how AI can be used by health insurance companies in making decisions.