The proliferation and popularity of artificial intelligence (“AI”) have led to a whirlwind of regulatory efforts in the U.S., particularly at the state level. States such as Colorado, California, and New York have enacted a series of laws and regulations, many of which will take effect in 2026. Meanwhile, state and federal laws addressing AI deepfakes, along with sector-specific regulation of the use of consumer data in predictive algorithms, highlight the diverse set of issues and questions that AI raises for policymakers.
Against this regulatory backdrop, the Trump Administration issued an Executive Order—Ensuring a National Policy Framework for Artificial Intelligence—with the stated aim of “remov[ing] barriers to United States AI leadership” and reducing “excessive State regulation” of AI companies. The Executive Order cites and builds on the Administration’s AI Action Plan, released on July 31 and summarized here, and prior Executive Orders, summarized here.
Key Takeaways
- The Trump Administration remains focused on AI competition with China and opposed to state-level regulation, especially regulation inconsistent with the Administration’s other priorities. The White House framed the compliance challenges of state-by-state regulation as slowing AI innovation in the U.S. compared to China’s single-party leadership.[1] The Executive Order frames the absence of a unified federal standard as a risk to national security, economic leadership, and investment, and as a burden on startups and multi-state operators of AI businesses.
- Short-term compliance is likely to become more challenging. Companies operating nationwide should anticipate near-term turbulence as the federal government deploys litigation, funding conditions, and federal standard-setting to challenge or displace state AI laws. The Executive Order’s provisions, especially the FTC policy statement, may also create tensions and risks for companies that also operate in the European Union.
- A focus on Congressional action may lead to federal AI legislation. Section 8(a) directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare legislative recommendations for a uniform, preemptive federal policy framework for AI. Given the Executive Order’s rhetoric, this proposed framework will likely prohibit certain debiasing and content moderation measures, focus on child safety, and otherwise take a deregulatory stance. It remains to be seen, however, whether any such legislation can garner the necessary support in Congress.
- Extended litigation is likely. The directed litigation effort focuses on dormant commerce clause and preemption challenges, but it also provides for litigation against regulation that is “otherwise unlawful in the Attorney General’s judgment.” The Administration may pursue aggressive litigation campaigns under this direction, especially against states such as Colorado, which are referenced in the Executive Order.
- Not all AI regulations will be preempted, even if the Executive Order is fully executed. The Executive Order requires that the legislative recommendation issued pursuant to Section 8 not propose preempting otherwise lawful state AI laws relating to child safety, infrastructure deployment, and other policy priorities.
Specific Directives
The Executive Order states that it aims to promote U.S. AI leadership and counter China by constraining conflicting state regulation through targeted preemption, litigation, funding conditions, and federal legislation. Specifically, the Executive Order sets forth the following directives:
- Creation of an AI Litigation Task Force. Within 30 days, the Attorney General must establish an AI Litigation Task Force to challenge state AI laws inconsistent with the Executive Order’s promotion of a “minimally burdensome national policy framework,” including via constitutional challenge under dormant commerce clause or preemption theories. The Task Force is to consult the Special Advisor for AI and Crypto, the Assistant to the President for Science and Technology, the Assistant to the President for Economic Policy, and the Assistant to the President and Counsel to the President regarding emerging state laws warranting challenge.
- Commerce Evaluation of State AI Laws. Within 90 days, the Secretary of Commerce must publish an evaluation identifying “onerous” state AI laws that conflict with federal policy, as well as laws meriting referral to the AI Litigation Task Force. The evaluation must, at a minimum, identify laws that require AI models to alter truthful outputs or that compel disclosure/reporting in ways violating the First Amendment or other constitutional provisions. It may also identify state laws that promote AI innovation consistent with the Executive Order’s goals.
- Restrictions on State Funding (BEAD Program and Discretionary Grants). The Executive Order instructs the Secretary of Commerce to issue a Policy Notice making states with “onerous” AI laws identified in Commerce’s evaluation ineligible for non-deployment funds under the Broadband Equity, Access, and Deployment (“BEAD”) Program. Executive agencies are likewise directed to assess their discretionary grants and determine whether they may condition awards on states refraining from enacting conflicting AI laws or, where such laws exist, entering binding agreements not to enforce them during the funding performance period.
- FCC Proceeding on Federal Reporting/Disclosure Standard. Within 90 days of Commerce’s publication of “onerous” state laws, the Federal Communications Commission (“FCC”) Chair must initiate a proceeding to consider a federal reporting and disclosure standard for AI models that would preempt conflicting state laws.
- FTC Policy Statement on Deceptive Practices and Preemption. Within 90 days, the Federal Trade Commission (“FTC”) Chair must issue a policy statement, in consultation with the Special Advisor for AI and Crypto, applying 15 U.S.C. § 45 to AI models and explaining when state laws requiring “alterations to truthful outputs” are preempted by the FTC Act’s prohibition on deceptive acts or practices affecting commerce.
- Legislative Recommendation and Preemption Carve-Outs. The Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology are directed to prepare a legislative recommendation for establishing a uniform federal policy framework that preempts conflicting state AI laws. However, the recommendation must not propose preempting otherwise lawful state AI laws concerning child safety, AI compute/data center infrastructure, state procurement and use of AI, and additional topics to be determined.
Special Considerations for the Insurance Sector
Among all the industries potentially impacted by this Executive Order and any follow-on rules, legislation, or litigation, the insurance sector faces a particularly unique set of legal and policy considerations. Many states, in conjunction with initiatives and models promulgated by the National Association of Insurance Commissioners, are pursuing or considering AI laws and regulations specific to insurance practices, such as underwriting, pricing, claims handling, and utilization management. To date, this has primarily taken the form of non-binding regulatory guidance,[2] with only Colorado passing legislation and adopting regulations specifically governing the use of AI in the insurance business.[3]
To the extent state insurance rules are deemed “onerous,” they may be flagged by Commerce for referral to the DOJ and could face litigation or be targeted for federal grant conditions. However, the insurance industry has operated under a state-based regulatory system for 80 years. State efforts to regulate insurance are protected by the McCarran-Ferguson Act, 15 U.S.C. §§ 1011-1015 (“McCarran-Ferguson”), a federal statute passed in 1945 in which Congress affirmatively granted the states plenary power to regulate the business of insurance unless Congress subsequently passes a law specifically targeting insurance. Under this unusual “reverse preemption” principle, a state law regulating insurance can override a conflicting federal law unless the federal law specifically targets insurance. In the context of the Executive Order, it is unclear whether DOJ litigation would succeed if challenged under McCarran-Ferguson’s reverse preemption principle. Moreover, to the extent the federal government decides to challenge state AI insurance-specific regulation, it may create a dynamic in which state regulators view the state-based insurance regulatory framework as under threat.
Conclusion and Next Steps
As we have previously written, the integration of AI into products and the appetite for AI investment make it more important than ever to understand the risks and opportunities posed by AI, as well as the compliance challenges of AI deployment. In the short term, the Executive Order likely compounds those challenges: it has no immediate effect on existing state laws, yet it raises uncertainty about which laws will be tagged as “onerous,” which will be targeted by the DOJ Task Force, and which, if any, will be successfully preempted. As a result, companies likely will still need to prepare to comply with both general and sector-specific AI laws and regulations from states such as Colorado and California until litigation reaches concrete resolutions that dictate otherwise. In the meantime, companies should continue to monitor developments and be prepared for sustained federal-state regulatory friction until Congress or the courts provide clarity.
[1] See Fact Sheet: President Donald J. Trump Ensures a National Policy Framework for Artificial Intelligence, The White House (Dec. 11, 2025) https://www.whitehouse.gov/fact-sheets/2025/12/fact-sheet-president-donald-j-trump-ensures-a-national-policy-framework-for-artificial-intelligence/; Alyssa Lukpat & Natalie Andrews, Trump Signs Executive Order to Curtail State AI Laws, Wall St. J. (Dec. 11, 2025) https://www.wsj.com/politics/policy/trump-signs-executive-order-to-curtail-state-ai-laws-4ffc09a9 (quoting President Trump as stating during the signing: “We have to be unified . . . China has one vote because they have one vote, and that’s President Xi, and that’s the end of that.”).
[2] For example, regulatory bulletins modeled on the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (December 2023) have been adopted in substantially similar form in roughly half of U.S. jurisdictions.
[3] Colo. Rev. Stat. § 10-3-1104.9; 3 Colo. Code Regs. § 702-10.