
Client Alert

President Trump Signs Executive Order Challenging State AI Laws

December 16, 2025

By Amir R. Ghavi, Howard Glucksman and Katie Katsuki

On December 11, 2025, the White House issued a widely anticipated[1] executive order, “Ensuring a National Policy Framework for Artificial Intelligence” (the Executive Order), which seeks to weaken state-level regulation of artificial intelligence through a combination of targeted litigation led by the Department of Justice, administrative reinterpretation of existing laws, conditional federal funding and the preemption of existing state laws through a federal policy framework.

This Executive Order follows a clear pattern in the Trump Administration’s AI policy of seeking to limit state-level regulation and consolidate authority at the federal level. For example, the Trump Administration pursued legislative preemption earlier this year through the proposed One Big Beautiful Bill Act, which included a 10-year moratorium on new state AI regulations. Although the moratorium passed in the House, it was rejected by the Senate, largely due to bipartisan concerns about the erosion of traditional state authority over consumer protection and laws protecting artists and entertainers. The Trump Administration’s July 2025 AI Action Plan foreshadowed this action, calling for a national, innovation-focused AI framework and warning that state regimes create regulatory “fragmentation.” Many of the directives in the Executive Order closely align with and operationalize the priorities outlined in the Action Plan. Taken together, these initiatives reflect the Trump Administration’s strategy of limiting the scope of state AI rules and promoting a uniform AI governance framework set by the federal government.[2]

The Executive Order differs slightly from a draft leaked in November. While the draft explicitly cited California’s SB 53 and characterized state regulations as “fear-based” or ideologically driven, the final text replaces these references with somewhat softer language concerning the economic inefficiencies of a regulatory patchwork. The final text is also narrower: it expressly excludes from federal preemption otherwise lawful state AI laws, including those relating to child safety, AI compute and data center infrastructure (except for generally applicable permitting reforms), state government procurement and use of AI, and other topics to be determined later. These changes temper the Executive Order’s tone, reduce the risk of broad or implied preemption and expressly preserve traditional areas of state authority, such as child safety.

AI Litigation Task Force: The Executive Order establishes an AI Litigation Task Force within the Department of Justice, which, beginning January 10, 2026, will be responsible for challenging state AI laws in federal court on the grounds that they unconstitutionally burden interstate commerce, are preempted by federal regulations, or are otherwise unlawful in the Attorney General’s judgment. The primary legal theory underpinning these challenges will likely be the Dormant Commerce Clause, which prohibits states from enacting legislation that places an undue burden on interstate commerce. The Trump Administration’s position is that, because frontier AI models are developed and deployed by companies operating on a global scale, a patchwork of differing state regulations creates insurmountable barriers to national deployment, thereby undermining U.S. competitiveness. However, the strength of this argument is difficult to predict because the Dormant Commerce Clause gives courts wide discretion in weighing the local benefits of a state law against its burden on the national economy.[3]

Evaluation of State AI Laws: The Executive Order directs the Secretary of Commerce to publish, by March 11, 2026, a comprehensive review of existing state AI laws, identifying those deemed overly burdensome or in conflict with the federal policy[4] outlined in the Executive Order (the Policy), particularly laws that require AI systems to alter “truthful outputs”[5] or mandate disclosures that may violate the First Amendment.[6] The review must also flag state laws appropriate for referral to the new AI Litigation Task Force and may highlight state laws that support AI innovation in line with federal objectives.

Restrictions on State Funding: The Executive Order instructs the Department of Commerce to condition the $42 billion in broadband infrastructure funding previously appropriated under the Broadband Equity, Access and Deployment (BEAD) program on states’ repeal of AI regulations deemed onerous. More broadly, the Executive Order uses federal funding as leverage to limit state AI regulation, authorizing federal agencies to condition discretionary grants on states refraining from enacting, or agreeing not to enforce, AI laws deemed inconsistent with the Executive Order’s policy or otherwise identified as conflicting or subject to challenge.[7]

Preemption of State Laws Mandating Deceptive Conduct in AI Models: Additionally, the Executive Order directs the Federal Trade Commission (FTC) to issue a policy statement by March 11, 2026, classifying state-mandated bias mitigation as a per se deceptive trade practice. This directive stems from the AI Action Plan, which prioritized preventing the imposition of ideological constraints on AI development. The Trump Administration’s legal theory posits that if an AI model is trained on data reflecting societal patterns, forcing developers to alter the model’s outputs to mitigate bias compels them to produce results that are less faithful to the underlying data. Under this interpretation, such mitigation renders the model less “truthful” and, therefore, deceptive. Policy statements are interpretive rather than binding regulations, however, and courts may reject the premise that correcting for bias constitutes deception. We note that the Executive Order includes no standards on data sourcing or data normalization, thereby placing an even stronger onus on model developers to focus on data curation.

Federal Legislation: The Executive Order directs Special Advisor for AI and Crypto David Sacks and Assistant to the President for Science and Technology Michael Kratsios to draft legislative recommendations for a uniform federal AI framework that would preempt conflicting state laws, while expressly preserving state authority over child-safety protections, data center and compute infrastructure, state government AI procurement and other areas to be designated later. It also instructs the Federal Communications Commission (FCC), within 90 days of the Department of Commerce’s state-law evaluation, to consider establishing a federal reporting and disclosure standard for AI models that would similarly supersede inconsistent state requirements. Historically, the FCC has viewed general AI governance as beyond its jurisdiction, interpreting the Communications Act as covering the physical infrastructure of transmission rather than the software applications that use it.[8]

Among the state AI laws we believe are potentially vulnerable to review under the Executive Order are those that impose transparency, reporting, documentation or safety-testing requirements on developers and deployers. These include Colorado’s AI Act (which is named directly in the Executive Order), California’s SB 53 (the Transparency in Frontier Artificial Intelligence Act) and California’s AB 2013, which requires training data disclosures. The Executive Order refers to these state regulations not merely as burdensome, but as “legally deceptive.” The Executive Order also puts at risk state rules that require explanations of algorithms or mandate independent audits, such as California’s CCPA automated decision-making regulations or New York City’s Local Law 144.[9] This creates a direct conflict because many of these state laws, which the Executive Order says are inconsistent with the Policy, are already in effect or will take effect soon.

What to Expect: The immediate consequence of the Executive Order is legal ambiguity. We anticipate that the validity of targeted state laws will be determined through prolonged litigation that could reach the Supreme Court, where the power of the executive and the strength of the Dormant Commerce Clause will be tested. Regardless of what the courts decide, the Executive Order is part of a larger trend toward tempered regulation of AI. For example, the Trump Administration’s strategy coincides with similar developments in the European Union: in November, the European Commission proposed delaying the implementation of high-risk obligations under the EU AI Act from 2026 to 2027. Moreover, by raising the financial and legal costs and uncertainty associated with enacting and defending state AI laws, the Executive Order may have a deterrent effect that discourages state legislatures from pursuing new regulations.[10]

However, we do not believe this Executive Order will eliminate state involvement in AI regulation altogether. Instead, we expect states to pursue AI regulation through existing consumer protection, unfair competition, deceptive practices and civil rights laws applied to AI-related conduct. The Trump Administration’s case for federal preemption would be stronger if Congress enacted a comprehensive federal AI framework. In the interim, we expect increased enforcement activity from federal agencies such as the FTC, FCC and the Equal Employment Opportunity Commission, particularly against tech companies or AI deployers that the Trump Administration believes are engaging in unlawful bias or the abridgment of free speech.

Although the regulatory landscape is uncertain, companies should continue to comply with applicable state AI laws because the Executive Order itself does not, and cannot, overturn existing state law; only Congress or the courts can do that. Until the relevant legal challenges are resolved, state laws remain enforceable, and companies could face penalties for noncompliance.

 

[1] On November 19, 2025, several news outlets published a leaked draft of the Executive Order, prompting debate about federal preemption of state AI laws. The executive order was initially put on hold while the House made a last-minute effort to include AI preemption language in the National Defense Authorization Act, an effort that ultimately failed. President Trump had repeatedly signaled his intent to sign the Executive Order, including by posting on Truth Social and holding meetings with senators to discuss federal authority over AI policy.

[2] President Trump has issued a series of executive orders focused on American leadership in AI. In 2019, during his first administration, President Trump issued Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence,” which launched the American AI Initiative focusing on federal research and development, data access, workforce and U.S. economic and national security leadership in AI. In July 2025, President Trump signed the “Preventing Woke AI in the Federal Government” executive order. Although that executive order ostensibly focuses on eliminating perceived political bias in AI procured by federal entities, we think its practical effect is to assert federal authority over core questions of AI governance.

[3] In National Pork Producers Council v. Ross (2023), the Supreme Court ruled that the Dormant Commerce Clause does not invalidate nondiscriminatory state laws merely because they force out-of-state industries to alter their businesses. The decision clarified the Pike balancing test, establishing that high compliance costs alone do not constitute a substantial burden on interstate commerce sufficient to override a state’s authority to regulate products sold within its borders.

[4] Section 2 of the Executive Order states: “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”

[5] The phrase “truthful outputs” relates to the administration’s prior Executive Order 14319 (“Preventing Woke AI in the Federal Government”), which argued that certain safeguards can distort reality, citing as an example Google’s Gemini inaccurately depicting white historical figures, such as the U.S. Founding Fathers and WWII German soldiers, as people of color. Distinct legal pressures are also forcing developers to address truthfulness through other lenses. In a December 9, 2025, letter to 13 major AI companies, 42 state and territorial attorneys general expressed serious concerns about “sycophantic and delusional outputs” from AI chatbots that have been linked to deaths, violence and harm to children. The letter emphasizes state authority to regulate AI safety through existing state statutes and positions state laws as the primary mechanism to promote truthfulness in AI. Simultaneously, several defamation suits (e.g., Starbuck v. Meta; Wolf River Electric v. Google) allege that AI hallucinations have caused tangible reputational and economic harms. Further, class actions such as Mobley v. Workday and Harper v. Sirius XM allege that automated hiring tools generate false conclusions about candidate suitability by relying on discriminatory proxies for protected characteristics rather than objective merit.

[6] The skepticism toward mandated disclosures targets recent transparency statutes such as California’s AB 2013 or the Colorado AI Act, which require developers to publish training data summaries or algorithmic impact assessments.

[7] The AI Action Plan also recommended withholding federal resources from jurisdictions that impede the development of AI.

[8] Broad AI adoption ultimately hinges on performance. Regardless of the regulatory landscape, the commercial viability of AI systems depends on their reliability and accuracy. While safeguards such as post-training alignment and adjustments may reduce certain risks, they can also introduce trade-offs that degrade model performance if applied too aggressively. Overreliance on post-inference “cleanup” risks compounding these model limitations. AI companies may therefore have to spend more time and make greater investments in upstream data curation, validation and management rather than downstream corrective mechanisms.

[9] In a December 12, 2025, interview on Bloomberg Tech, David Sacks defended the Executive Order as a necessary tool to dismantle the patchwork of state regulations, but he clarified that the administration’s targets are not yet fully defined. While Sacks expressed uncertainty about whether the Department of Justice would challenge laws in California and New York, he explicitly singled out the Colorado AI Act as “probably the most excessive.”

[10] We have already seen early state-level pushback on AI regulation, though it is unclear whether this is a direct result of the Trump Administration’s push to limit state regulations. For example, the effective date of the Colorado AI Act has been delayed from February 1, 2026, to June 30, 2026, and Utah amended its Artificial Intelligence Policy Act in 2025 to narrow its scope and disclosure obligations, establish safe harbor protections and extend the law’s duration.


Practice Areas

Technology

Technology Transactions


For More Information

Amir R. Ghavi

Partner, Corporate Department

Howard Glucksman

Associate, Corporate Department

Katie Katsuki

Associate, Corporate Department

Sarah Gagan

Partner, Corporate Department