The Regulatory Vacuum

Forward-looking organizations understand that consumer confidence will be essential as AI becomes pervasive, and they are building internal AI governance frameworks that match or exceed external mandates.

They are treating principled AI practices as a foundation for sustainable growth rather than a compliance checkbox.

This approach is proving prescient, because the regulatory landscape just got significantly more complicated.

This week President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The promise sounds reasonable: replace the fragmented landscape of fifty different state interpretations with a single coherent federal standard. The concern is that the order dismantles state regulation without establishing federal guidelines in its place, leaving no framework at all for balancing innovation with transparency and safety.

This reflects a fundamental choice in how governments approach emerging technology. The EU tends toward precaution, restricting AI applications until they’re proven safe; the US favors speed, addressing harms after they emerge. Neither approach is inherently “right,” and some balance between them is likely needed, but the executive order takes the latter philosophy to an extreme: it actively removes existing guardrails while proposing nothing to replace them.

The order calls out Colorado’s AI Act as exhibit A of the problem, arguing that requirements to prevent algorithmic discrimination force companies to “embed ideological bias within models.”

The Mechanics of Removal

Within 30 days, the Attorney General must establish an “AI Litigation Task Force” charged with suing states over their AI laws. Within 90 days, the Commerce Department must identify “onerous” state regulations, and states with such laws on the books could lose access to federal broadband funding under the BEAD program.

Supporters argue that navigating fifty different regulatory frameworks creates real friction for companies operating across state lines, and that regulatory uncertainty can chill innovation. The administration frames this as protecting American competitiveness in a global AI race. The counterargument is that removing consumer protections without federal alternatives creates a different kind of uncertainty, one where organizations face reputational and legal exposure with fewer clear standards to follow.

Congress has rejected AI preemption twice this year. The Senate struck the AI provision from the “Big Beautiful Bill” by a vote of 99-1, and bipartisan opposition killed a similar measure in the National Defense Authorization Act. The executive order exists precisely because the legislative branch declined to act.

This matters because executive orders cannot legally preempt state laws, a power reserved exclusively to Congress. Legal analysts at Ropes & Gray, WilmerHale, and Cooley note that the order’s enforcement mechanisms rest on untested legal theories, and scholars at LawAI have called the dormant Commerce Clause arguments the administration plans to deploy “legally meritless.”

Republican governors are already pushing back, with Ron DeSantis of Florida signaling he would consider the order unlawful if it attempts to override state legislation. California Attorney General Rob Bonta has stated he will challenge its “potential illegality.”

Two Risks, One Choice

Companies deploying AI across state lines must navigate Colorado’s algorithmic discrimination rules, California’s transparency requirements, and whatever New York, Texas, and Utah pass next. With over 1,000 AI bills pending across state legislatures and roughly 43% of companies now managing four or more compliance frameworks simultaneously, the coordination costs are substantial. The US Chamber of Commerce has documented small businesses in California facing roughly $16,000 per year in AI compliance costs alone.

The regulatory vacuum, however, carries its own risks. Without enforceable standards, harms occur without recourse. While the EU AI Act establishes clear rules and China has implemented its own regulatory framework, this order would leave the US with neither federal standards nor state standards that survive litigation. Consumers harmed by discriminatory algorithms, deceptive AI outputs, or privacy violations would find fewer legal pathways for redress at precisely the moment AI-related harms are accelerating.

In attempting to solve the first risk, the order creates the second.

The Fine Print

The FTC must issue a policy statement within 90 days explaining when state laws requiring “alterations to the truthful outputs of AI models” are preempted by federal law. The order positions bias mitigation requirements as forcing AI to produce “false results,” casting algorithmic fairness as incompatible with accuracy. But AI systems are probabilistic, not deterministic. The same model can produce different outputs from identical inputs, and the concept of a single “truthful output” doesn’t map cleanly onto how these systems actually work. Framing bias mitigation as distortion misrepresents the technical reality.
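
To see why, consider how generative models actually produce text. The sketch below is plain standard-library Python, not any real model’s API; the token names and logit values are invented for illustration. It mimics temperature sampling, the mechanism most generative systems use to choose each output token. With any temperature above zero, identical inputs can yield different outputs, which is why the notion of a single canonical “truthful output” breaks down.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, seed=None):
    """Sample one token index from logits after temperature scaling.

    Higher temperatures flatten the distribution; lower temperatures
    sharpen it. At any temperature above zero the draw is stochastic,
    so repeated calls with identical logits can return different tokens.
    """
    rng = random.Random(seed)
    scaled = [x / temperature for x in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

# Identical input, five independent draws. The scores are toy values,
# not from any real model.
tokens = ["approve", "deny", "refer"]
logits = [2.0, 1.5, 1.2]
for _ in range(5):
    print(tokens[sample_with_temperature(logits, temperature=1.0)])
```

Run it a few times: the same input yields different completions purely from sampling randomness, before any bias mitigation enters the picture.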

The order does carve out child safety, data center infrastructure, and state government AI procurement from future preemption legislation. But those carve-outs apply only to the legislative proposal the administration must develop. The litigation task force and funding restrictions the order already directs carry no such protections, meaning states enforcing AI laws face an immediate choice between continued enforcement and federal litigation.

Strategic Considerations

What does your compliance exposure look like? State AI laws remain on the books: Colorado’s AI Act takes effect June 30, 2026, and California’s transparency requirements under AB 2013 become active January 1, 2026. Federal litigation might eventually invalidate some provisions, but “eventually” could mean 2027 or later. Organizations need to comply with current law while this plays out.
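
As a concrete illustration of tracking those dates, a compliance team might encode them in a simple watchlist. This is a minimal, hypothetical sketch; the statute labels are informal shorthand taken from this section, not legal citations.

```python
from datetime import date

# Effective dates of the state AI laws discussed above.
DEADLINES = {
    "California AB 2013 (transparency requirements)": date(2026, 1, 1),
    "Colorado AI Act (algorithmic discrimination)": date(2026, 6, 30),
}

def days_remaining(as_of: date) -> dict[str, int]:
    """Return days until each deadline; negative means already in effect."""
    return {law: (deadline - as_of).days for law, deadline in DEADLINES.items()}

for law, days in days_remaining(date.today()).items():
    status = f"{days} days out" if days > 0 else "in effect"
    print(f"{law}: {status}")
```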

How should you think about liability in the gap? If state AI-specific consumer protections are successfully challenged, technology leaders will want to understand what standards still apply. Sectoral laws covering fair lending, employment discrimination, healthcare privacy, and general FTC authority would remain in force, but AI-specific guardrails might not. For organizations deploying AI in high-stakes domains, this evolving landscape deserves careful consideration with legal counsel.

What does your governance posture signal? The EU AI Act applies to any company serving European markets, and many enterprises are building to that standard regardless of US policy. The question, then, is whether your organization will govern AI by the lowest common denominator or by the standards your stakeholders expect.

Building Forward

Treating responsible AI as a strategic capability rather than just a regulatory challenge changes the calculus. Define your own principles. Implement governance structures that reflect your values and risk tolerance. Recognize that the absence of federal standards doesn’t mean the absence of accountability.

This requires staying current on a regulatory landscape that sometimes shifts weekly, understanding the technical nuances behind policy debates, and building frameworks flexible enough to adapt as the situation evolves. It’s a significant investment of attention and expertise, but it’s also what separates organizations positioned to deploy AI confidently from those constantly reacting to external uncertainty.

“In matters of style, swim with the current; in matters of principle, stand like a rock.”
— Thomas Jefferson

Compliance strategies can adapt. Principles shouldn’t have to.

