
AI and Legal Regulation: State-Led AI Laws Are Creating a Compliance Maze for U.S. Businesses

March 27, 2025

In previous articles, we have examined both the lead role states are taking in regulating cybersecurity and privacy and the inefficiencies and costs created by the federal government’s failure to enact legislation that would replace the resulting patchwork of compliance obligations. Now, as states begin to regulate the use and creation of artificial intelligence, are we facing the same problems because of the federal government’s lack of national regulation?

The European Union’s Regulation of Technological Advances

America, for better or worse, has taken a back seat to the European Union on the regulation of technological advances. The EU has comprehensive rules governing cybersecurity and privacy (such as the GDPR) as well as AI. Its privacy regulations have been in effect since 2018 and continue to evolve. The EU AI Act, adopted in 2024, bans AI systems posing defined “unacceptable risks” and imposes safety, transparency, and ethical-use requirements on all other AI systems. Because comprehensive “national” regulations exist in the EU but not in the US, complications have arisen in completing some cross-border business. Some might argue that the fragmented, state-driven US privacy laws have actually complicated cooperation and agreement between the US and the EU concerning the adequacy of the protections granted by the applicable state regulatory schemes.

The Status of AI Regulation in the U.S.

The lack of an American national privacy framework has caused serious compliance difficulties for companies and contributed to multiple EU findings that data protection in the US is inadequate, yet no national framework has emerged in the privacy realm. Now we must ask ourselves: without a national framework, are we risking the same confusion and compliance difficulty in the realm of AI?

Opportunity v. Risk

This is not a policy piece; there are serious disagreements about how and when AI systems should be regulated. These disagreements can generally be seen as two sides of a coin. The first side is the free-market “opportunity” outlook, which holds that regulating AI will stifle the growth of the technology and the investment and jobs it generates. This appears to be the general belief of the current federal administration. The other side is the “risk” outlook, which recognizes the bias risks (and the resulting need for regulation) that arise when AI systems are given decision-making authority. This was the view held by the prior federal administration.

State Efforts

Regardless of which side is correct, it is certainly a truism that nature abhors a vacuum. Without federal regulation of AI, states have begun to examine potential regulatory schemes to manage the growth and scope of AI. The first out of the box was Colorado, with the Colorado Artificial Intelligence Act (CAIA), which is scheduled to go into effect on February 1, 2026. Among other things, the CAIA, like the EU regulations, attempts to protect against algorithmic discrimination in defined high-risk AI systems. It places risk-analysis, disclosure, and documentation obligations on both the developers and deployers of these high-risk systems and requires that consumers be informed whenever they are interacting with an AI system.

Some states are considering similar regulations of their own. According to Adam Thierer of R Street, a Multistate AI Policymakers Working Group consisting of lawmakers from 45 states is working to create a model AI bill addressing AI bias. The current model law resulting from that group is, in one form or another, pending in nearly a dozen states.

Still other states have considered AI regulation and ultimately settled on the “opportunity” side of the coin. In mid-February of this year, the Virginia legislature sent the High-Risk Artificial Intelligence Developer and Deployer Act to Governor Youngkin for signature. That law, like Colorado’s, was intended to eliminate bias in certain types of AI-driven decisions. Interestingly, the Virginia law’s coverage was more limited than Colorado’s (i.e., it would have covered only decisions for which the AI’s conclusion was the “principal basis,” while in Colorado the AI output need only be a “substantial factor”). However, on March 24, Governor Youngkin vetoed the bill, citing the potential loss of jobs and investment if the law were enacted.

The Need for Federal AI Regulation

Obviously, the recency bias created by the single veto in Virginia neither erases the fact that numerous other states are considering their own AI regulations nor obviates the need for standardized federal regulation. If the federal government had preemptive regulations addressing even some portion of the “risk” outlook (the decision-making that results from AI systems), then all parties would face the same regulatory obligations, whether strict or lax, wherever they create or use AI for such decisions in the US. As with most regulation, there would likely be some compliance costs. However, a federal law would eliminate both the jurisdictional competition driven by state AI regulation (“we have no rules, so set up shop here”) and the extra cost of attempting to comply with patchwork regulations that differ from state to state.

The logic of a federal response is obvious: it is ultimately better for both businesses and consumers. Yet, as in the cybersecurity/privacy realm, the echoes of inaction are bouncing around the halls of Congress. Perhaps it is time to tackle this issue head-on and take action that is better for everyone. Or perhaps that is simply too much to ask. For questions and more information, contact KJK’s privacy attorneys.