If the fractured landscape of data privacy law taught businesses anything over the last few years, it is that a patchwork of state-by-state regulation is nobody’s friend. Companies have spent years untangling overlapping (and sometimes conflicting) obligations under California’s CCPA, Virginia’s CDPA, Colorado’s CPA and a growing roster of state-specific privacy regimes, each with its own definitions, thresholds and enforcement mechanisms. The result has been a compliance environment that disproportionately burdens small and mid-sized businesses while delivering inconsistent protection to consumers. The U.S. still lacks a comprehensive federal privacy law, and businesses that delayed building compliance infrastructure eventually scrambled to catch up.
Artificial intelligence regulation is following the same trajectory, only faster. In 2025 each of the fifty states introduced AI-related legislation, with thirty-eight enacting some form of AI law. The result is a maze of conflicting obligations: for example, a company deploying AI-assisted hiring tools across California, Colorado, Illinois and Texas must simultaneously determine whether it satisfies four different definitions of “high-risk” AI, comply with four different audit timelines and four different disclosure requirements, and, of course, face four different penalty structures – some with private rights of action for citizens, some without. Much like the early days of state privacy law, this patchwork is already creating a compliance burden that no single team, let alone a small one, can easily navigate. As a result, most companies simply work to comply with the most “restrictive” schemes, in effect doing things in certain states that aren’t required under those states’ laws.
The federal government may finally be stepping in with an overarching regulatory scheme – whether it can do so effectively remains to be seen.
The Federal Response: Two Documents, One Direction
In March 2026, the federal government made its most coordinated push yet toward comprehensive federal AI regulation. On March 18, Senator Marsha Blackburn (R-TN) released a discussion draft of the TRUMP AMERICA AI Act (the “Blackburn Bill”). Two days later, the White House issued a National Policy Framework for Artificial Intelligence (the “Framework”) – a non-binding set of legislative priorities organized around seven pillars: protecting children, safeguarding communities, respecting intellectual property, preventing censorship, enabling innovation, developing an AI-ready workforce and establishing federal preemption of state AI laws. It is worth noting that while the bill bears the President’s name, it is a Senate measure that must navigate the full congressional process before becoming law. Neither document is currently binding, but together they represent the most serious effort yet to establish a unified federal AI regime and to avoid the pitfalls of the inconsistent and conflicting state laws that preceded it.
What the Blackburn Bill Would Do
The Blackburn Bill is sweeping, with seventeen titles covering liability, copyright, child safety, content standards, bias audits and workforce reporting. Its most significant provisions include:
A new liability framework.
AI chatbot developers would owe a general duty of care (“reasonable care in the design, development, and operation” of their systems) and could face product liability claims for physical, financial or psychological harm. These claims could be brought by the Justice Department, state attorneys general, or private parties. Deployers who substantially modify or misuse an AI system would also be treated as “developers” and face the same exposure.
Repeal of Section 230.
In a sweeping change, the bill, if passed, would eliminate the longstanding liability shield that protects online platforms from civil liability for third-party content. This provision alone would reshape the legal landscape for virtually every platform hosting user-generated content, well beyond AI, and is likely to face intense industry opposition.
A copyright bright line.
The bill would amend the Copyright Act to declare that unauthorized use of copyrighted works for AI training does not constitute “fair use,” directly overriding some judicial determinations. The bill does not stop there: it would, in effect, flip the burden of proof and require the alleged infringer to prove it did not infringe. In other words, copyright holders could subpoena training data disclosures, and AI systems would be presumed to incorporate copyrighted material unless the developer proves otherwise by clear and convincing evidence.
Minor protections.
Platforms would need to implement safeguards for users under seventeen, restrict data collection and research on children, and obtain parental consent for research involving minors under seventeen.
Bias audits.
In one of the bill’s more politically sensitive provisions, AI systems defined as high-risk would require annual third-party audits for viewpoint or political affiliation discrimination, and federal procurement of large language models would be limited to those meeting “unbiased AI principles.”
Workforce and energy reporting.
Covered companies would report quarterly on AI-related job impacts, and data center operators would be required to protect residential ratepayers from increased electricity costs.
Potential Benefits – and Real Pitfalls
The case for federal AI legislation is intuitive: a single national standard would reduce compliance costs, eliminate conflicting state obligations and give businesses a clearer framework for development and deployment. For companies already wrestling with the state patchwork, the promise of preemption is real. The minor protection provisions and creator likeness rights enjoy broad bipartisan support and represent meaningful consumer protections that are likely to survive in some form regardless of what happens to the rest of the bill.
The pitfalls, however, are significant. First, the preemption promise may be narrower than it appears. Although the White House Framework contemplates broad displacement of state AI laws, the Blackburn Bill expressly states that it does not preempt generally applicable state laws, and individual sections may carry their own preemption provisions of uncertain scope. Businesses should not assume that passage of this bill would wipe the slate clean of state obligations.
Second, the Section 230 repeal and the copyright “fair use” exclusion are both likely to generate fierce opposition (from the technology industry on the former, and from within the administration and the industry itself on the latter). Notably, the White House Framework explicitly recommends letting courts resolve the copyright question. These tensions signal that, if the bill advances at all, it will undergo significant revision, and that the final product may look quite different from the current draft.
Third, the new liability framework, with its expanded deployer exposure, strict liability provisions and restrictions on contractual liability limitations, could increase legal risk for businesses that use third-party AI tools without careful vendor contracting. The compliance burden could prove heavy for smaller companies with fewer resources to implement testing protocols, documentation systems and insurance coverage.
The Bottom Line
Neither the Framework nor the Blackburn Bill is binding today, and the path to enactment is uncertain. But the direction of travel is clear, and the industry’s previous experience with patchwork data privacy regulation instructs that companies that wait for final rules before building compliance infrastructure will find themselves behind. Businesses should use this period to assess AI-related exposure, review vendor agreements and monitor both federal and state developments in parallel. State AI laws remain fully enforceable unless and until expressly preempted by federal legislation or a court ruling.
We will continue to monitor these developments as they evolve. If you have questions about how evolving AI regulations may affect your business, please reach out to attorneys Brett Krantz (BK@kjk.com) or Christopher Herrel (CGH@kjk.com).