Client Alert
White House Releases AI Action Plan: "Winning the Race: America's AI Action Plan"
July 23, 2025
By Amir R. Ghavi and Katie Katsuki
This morning, the White House released its strategic action plan on AI, “Winning the Race: America’s AI Action Plan” (the AI Action Plan), which is designed to secure U.S. global leadership in AI with a strategic focus on maintaining a competitive edge over China. The AI Action Plan is grounded in several guiding principles. First, the AI Action Plan emphasizes empowering American workers through AI-enabled job creation and industry breakthroughs, such as in medicine and manufacturing, with the aim of raising the standard of living and creating high-paying jobs. Second, the Trump administration is focused on eliminating what it deems to be ideological bias and social engineering from federally supported AI tools, insisting that AI systems must be designed to “pursue objective truth,” particularly as they shape how Americans access and interpret information. Finally, the AI Action Plan stresses the importance of defending against AI misuse, theft and emerging risks to ensure that advanced technologies are not exploited by malicious actors.
The AI Action Plan is stunning in breadth and marks a critical milestone in the Trump administration’s efforts to bolster domestic and international U.S. leadership in AI by emphasizing deregulation, infrastructure development and coordinated federal strategy. The AI Action Plan identifies over 90 federal policy actions across three core pillars: (i) Accelerating Innovation; (ii) Building American AI Infrastructure; and (iii) Leading in International AI Diplomacy and Security.
The AI Action Plan follows Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” signed by President Donald Trump on January 23, 2025, which directed federal officials to develop a comprehensive action plan to achieve U.S. global dominance in AI. The AI Action Plan’s release coincides with a presidential address outlining President Trump’s strategic AI vision.[1]
Pillar One: Accelerate AI Innovation
The AI Action Plan aims to accelerate AI integration across the federal government while scaling back existing federal regulations that the plan determines hamper AI adoption. Key initiatives include (1) removing red tape and onerous regulations; (2) ensuring that frontier AI systems protect free speech and combating synthetic media in the legal system; (3) encouraging open-source and open-weight AI models; (4) building scientific datasets; (5) building an AI evaluations ecosystem; and (6) accelerating AI adoption in the government, particularly within the Department of Defense (DOD).[2]
Removing Red Tape and Onerous Regulations
The Trump administration has taken initial steps to remove what it considers to be regulatory headwinds, including by rescinding the Biden administration’s Executive Order 14110.[3] This also includes stemming what the Trump administration considers to be excessive state regulation in two primary ways. First, the Trump administration emphasizes in the AI Action Plan that federal AI funding should not be directed toward states with overly restrictive regulations that could undermine innovation. Second, the Federal Communications Commission is directed to evaluate whether state AI regulations interfere with its authority under the Communications Act of 1934.[4]
The AI Action Plan further recommends the following policy actions:
- The Office of Science and Technology Policy (OSTP) to solicit public input to identify federal regulations that hinder AI innovation and work with agencies to remove them.
- The Office of Management and Budget (OMB), under Executive Order 14192,[5] to coordinate efforts across federal agencies to revise or repeal unnecessary regulatory barriers and ensure that agencies consider the regulatory posture of states when awarding AI-related discretionary funding, limiting funds for states with counterproductive policies.
- The Federal Trade Commission (FTC) to review investigations and legal actions commenced under the previous administration to ensure they do not impose unwarranted constraints on AI development.[6]
Ensure That Frontier AI Protects Free Speech and American Values and Combat Synthetic Media in the Legal System
According to the Trump administration, to safeguard free speech and uphold American values in the age of advanced AI, it is critical that AI systems, particularly those used in education, work and media, are developed to reflect objective truth rather than ideological agendas. The AI Action Plan also acknowledges that malicious deepfakes pose a growing threat to individuals, institutions and the justice system and notes new risks to the integrity of judicial proceedings, where fabricated evidence could be used to manipulate or obstruct justice.[7]
The AI Action Plan recommends the following policy actions:
- Revise the NIST AI Risk Management Framework (AI RMF) to remove references to misinformation; Diversity, Equity and Inclusion; and climate change.[8]
- Update federal procurement guidelines to require that the government only contracts with developers of frontier LLMs whose systems are objective and free from ideological bias.
- Direct NIST’s Center for AI Standards and Innovation (CAISI) to evaluate frontier AI models from China for alignment with Chinese Communist Party narratives and censorship practices.
- Develop forensic standards for deepfake detection.[9]
Encourage Open-Source and Open-Weight AI
The AI Action Plan stipulates that open-source and open-weight AI models are vital for innovation, academic research and secure AI adoption in both the public and private sectors. However, access to necessary computing resources and supportive infrastructure remains a barrier, especially for startups and academic institutions.
The AI Action Plan recommends the following policy actions:
- The National Science Foundation (NSF) to pilot the National AI Research Resource (NAIRR), treating AI compute as a commodity with forward contracts and swap positions and allowing startups and researchers to access compute resources without long-term contracts.
- Partner with major technology companies to make private-sector computing, models, datasets and software tools available to academic and research communities as part of the NAIRR pilot.
- Publish an updated National AI Research and Development Strategic Plan, led by OSTP, to define federal priorities and funding strategies that support open models and other critical AI initiatives.
- The National Telecommunications and Information Administration (NTIA), informed by stakeholders from industry and academia, to identify barriers to adoption of open-source and open-weight models by small and medium-sized enterprises and promote targeted initiatives to overcome those challenges.
Build World-Class Scientific Datasets
The AI Action Plan underscores that the U.S. must take action to build the largest and most reliable repositories of scientific data across domains such as biology, chemistry, materials science and physics.
The AI Action Plan recommends the following policy actions:
- The National Science and Technology Council’s (NSTC) Machine Learning and AI Subcommittee to develop minimum data quality standards for scientific disciplines such as biology, chemistry and materials science to ensure AI systems are trained on accurate, interoperable and high-value datasets.
- OMB to issue regulations under the Confidential Information Protection and Statistical Efficiency Act to create a presumption of accessibility, allowing secure but broader use of federal data by breaking down bureaucratic silos, while maintaining strict confidentiality protections.
- The NSF and the Department of Energy (DOE) to create secure computing platforms to facilitate restricted AI use cases, enabling researchers to access and work with sensitive federal datasets in a controlled and compliant environment.
- The NSTC, in collaboration with USDA, DOE, NIH, NSF, the Department of the Interior and the Cooperative Ecosystem Studies Units, to explore creating a national genome sequencing program for biological life on federal lands.
Build an AI Evaluations Ecosystem
The AI Action Plan notes that the current evaluation ecosystem lacks standardization, shared practices and the scientific maturity needed to ensure AI systems meet both technical and legal requirements and calls for a federal approach to advance AI evaluation science, infrastructure and cross-sector collaboration.
The AI Action Plan recommends the following policy actions:
- NIST, including CAISI, to publish comprehensive guidance and tools for federal agencies to evaluate AI systems for mission-specific performance, reliability and legal compliance to enable tailored and consistent use of AI across agencies.
- Federal science agencies, including NIST, DOE and the NSF, to lead research into new methods and metrics for measuring AI system performance, fostering a robust scientific foundation for evaluation.
Accelerate AI Adoption in Government and Drive Adoption of AI Within the Department of Defense
The AI Action Plan notes that a cohesive, enterprise-wide strategy is needed to institutionalize AI across the federal government while addressing the DOD’s unique national security requirements.
The AI Action Plan recommends the following actions:
- Formalize the Chief Artificial Intelligence Officer Council (CAIOC) as the central body for federal AI governance. CAIOC to coordinate with key federal executive councils, including those for data, IT, HR and privacy, to ensure cross-agency alignment and collaboration.
- Establish a rapid deployment program that allows skilled AI professionals (e.g., data scientists, ML engineers) to be detailed across agencies, with support from the Office of Personnel Management.
- Implement a technology transfer initiative, led by the General Services Administration, to quickly distribute advanced AI tools, applications and use cases between agencies.
- Scope and build a dedicated facility to test AI and autonomous systems under secure and realistic operational conditions.
- Formalize priority access agreements between the DOD and cloud/computing infrastructure providers to ensure uninterrupted AI capabilities during national emergencies or conflicts.
Pillar Two: Build American AI Infrastructure
To meet the computational demands of advanced AI, the AI Action Plan aims to scale domestic infrastructure and streamline permitting processes for data center construction. Key measures include: (1) streamlining permitting for data centers; (2) developing a strategy to upgrade the U.S. power grid; (3) building high-security data centers; and (4) bolstering cybersecurity for critical infrastructure.[10]
Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing Facilities and Energy Infrastructure While Guaranteeing Security
The AI Action Plan notes that the advancement of AI requires a buildout of physical infrastructure, including chip fabrication facilities, data centers and reliable energy sources, and that existing environmental permitting processes and other regulatory hurdles in the U.S. delay construction. The Trump administration has already initiated permitting and regulatory reforms, but notes that further action is needed to remove bottlenecks and secure the AI supply chain.
The AI Action Plan recommends the following actions:
- Establish new Categorical Exclusions under the National Environmental Policy Act for routine data center projects and encourage cross-agency adoption of existing exclusions to increase permitting speed.
- Broaden the use of the Fixing America’s Surface Transportation (FAST-41) process to include all eligible data center and associated energy infrastructure projects to accelerate review timelines.
- Explore a nationwide Clean Water Act Section 404 permit tailored to AI data centers that removes unnecessary preconstruction requirements and accommodates modern facility sizes.
- Expedite project timelines by streamlining or reducing burdens under laws such as the Clean Air Act, Clean Water Act and CERCLA.
- Direct federal landholding agencies to identify and make federal lands available for the construction of data centers and related energy generation projects suited for large-scale development.
Develop a Grid to Match the Pace of AI Innovation
The AI Action Plan notes that the U.S. power grid must be modernized and expanded to ensure long-term reliability, resilience and capacity to support AI leadership.
The AI Action Plan recommends the following actions:
- Prevent premature shutdown of vital power generation assets and improve reliability by leveraging backup systems and enforcing national standards for resource adequacy and consistent power supply across all regions.
- Upgrade transmission systems with advanced grid management technologies and improve efficiency of power delivery.
- Accelerate the integration of dispatchable, next-generation energy sources, including enhanced geothermal, nuclear fission and fusion, and reform energy markets to better align investment incentives with grid stability needs.
Build High-Security Data Centers for Military and Intelligence Community Usage
The AI Action Plan notes that ensuring secure environments for AI model deployment is critical to protecting U.S. strategic interests.
The AI Action Plan recommends the following actions:
- The DOD, NSC and NIST (including CAISI), in collaboration with industry and research centers, to establish technical standards tailored for high-risk AI environments.
- Encourage agencies to scale up the use of secure, classified infrastructure for AI workloads to safeguard sensitive data and maintain operational integrity.
Bolster Critical Infrastructure Cybersecurity
The AI Action Plan states that while AI can enhance cyber defense, its deployment in safety-critical and homeland security contexts also exposes these systems to adversarial threats, such as data poisoning and manipulation attacks.
The AI Action Plan recommends the following actions:
- Establish an AI Information Sharing and Analysis Center, led by the Department of Homeland Security (DHS) and in partnership with CAISI and the Office of the National Cyber Director, to facilitate intelligence sharing on AI-related threats across critical infrastructure sectors.
- DHS to provide ongoing guidance to private sector stakeholders on how to identify, respond to and remediate vulnerabilities unique to AI systems.
- Ensure that federal agencies share known AI vulnerabilities with the private sector through established cybersecurity communication channels, helping infrastructure owners respond to emerging threats in real time.
Pillar Three: Lead in International AI Diplomacy and Security
The AI Action Plan states that, to maintain global AI leadership, the U.S. must not only advance AI domestically but also promote widespread international adoption of American AI technologies, hardware and standards, transforming current technical advantages into a lasting global alliance while preventing adversaries from benefiting unfairly from U.S. innovation and investment. Key measures include (1) exporting American AI to allies and partners; (2) strengthening AI compute export control enforcement; (3) ensuring that the U.S. is at the forefront of evaluating national security risks in frontier models; and (4) investing in biosecurity.[11]
Export American AI to Allies and Partners
The AI Action Plan states that, to remain competitive in the global AI race, the U.S. must proactively export its full AI technology stack, including hardware, models, software, applications and standards, to partner nations.
The AI Action Plan recommends the following actions:
- Establish a program within the Department of Commerce (DOC) to gather and evaluate full-stack AI export proposals from industry consortia.
- Coordinate across agencies (e.g., DOS, Export-Import Bank, DFC, USTDA) to facilitate secure and standards-compliant AI export deals with allied countries.
Strengthen AI Compute Export Control Enforcement
The AI Action Plan states that advanced AI compute underpins both economic growth and national security and that enforcement of export controls remains a major challenge.
The AI Action Plan recommends the following actions:
- Explore using location verification technologies to monitor AI chip deployment and prevent unauthorized access in restricted regions.
- Create a DOC-led enforcement collaboration with the Intelligence Community to monitor emerging AI compute risks and expand end-use monitoring, especially in high-risk countries lacking export control officers.
Ensure That the U.S. Government Is at the Forefront of Evaluating National Security Risks in Frontier Models
The AI Action Plan states that, as frontier AI systems grow more powerful, they present emerging national security threats, including potential misuse in cyberattacks and the creation of chemical, biological, radiological, nuclear or explosives (CBRNE) weapons. Since the U.S. currently leads in AI capabilities, these risks are likely to appear here first, and proactively identifying and addressing these risks is essential to maintaining national defense and securing critical infrastructure.
The AI Action Plan recommends the following actions:
- Partner with AI developers to evaluate frontier models for national security risks, focusing on misuse in cyber and CBRNE contexts.
- Assess vulnerabilities and foreign influence risks from adversary AI systems used in U.S. infrastructure, including possible backdoors and malicious behaviors.
- Recruit top AI talent into key federal agencies (e.g., NIST, CAISI, DOE, DOD, IC) to strengthen national AI evaluation capabilities.
- Establish and continuously update national security-focused AI evaluation systems through collaboration between CAISI, national security agencies and research institutions.
Invest in Biosecurity
The AI Action Plan acknowledges that AI will revolutionize biology but also creates new risks, including enabling malicious actors to synthesize harmful pathogens.
The AI Action Plan recommends the following actions:
- Require federally funded research institutions to use vetted nucleic acid synthesis providers with strong sequence screening and customer verification systems.
- Create enforcement mechanisms for compliance rather than relying on voluntary adherence.
- Convene OSTP-led efforts to enable data sharing across synthesis providers to detect and block malicious activity.
- Maintain AI-related biosecurity evaluations in partnership with national security agencies and CAISI.
Conclusion
With the AI Action Plan, the Trump administration further distances itself from the relatively cautious approach to AI development under the Biden administration and wholeheartedly adopts an AI-first agenda. While there is some work for various administrative agencies to do, we expect that the clear signals from the Trump administration will serve as an immediate boost to the U.S. AI ecosystem and further encourage AI developers, investors and infrastructure providers. The plan may also prompt shifts in global AI strategy and regulation as other nations react to the U.S.’s renewed competitive posture in emerging technologies. Lastly, the AI Action Plan sharply contrasts with the approach taken by the European Commission under the EU AI Act.
[1] E.O. 14179 builds upon President Trump’s earlier Executive Order 13859 (February 11, 2019), signed during his first administration. E.O. 13859 focused on six objectives: (1) fostering R&D investment; (2) enhancing access to federal data and computing resources; (3) lowering adoption barriers; (4) establishing technical standards; (5) training AI professionals; and (6) developing a national action plan. Several developments have followed the issuance of E.O. 14179. In February 2025, the White House issued a Request for Information (RFI) to gather public input for the Action Plan, and by April, more than 10,000 comments had been received.
[2] Other measures in support of accelerating AI innovation include (1) “Support Next-Generation Manufacturing” (policy actions include federal investment in manufacturing technologies through programs such as the Small Business Innovation Research program, the Small Business Technology Transfer program, CHIPS R&D and the Defense Production Act, as well as industry-government collaboration led by DOC/NTIA to address robotics and drone supply chain challenges); (2) “Invest in AI-Enabled Science and Advance the Science of AI” (policy actions include prioritizing investment in theoretical, computational and experimental research to advance AI capabilities; funding automated, cloud-enabled scientific labs supporting Focused-Research Organizations using AI; incentivizing public release of high-quality datasets; and requiring disclosure of nonproprietary, nonsensitive data from federally funded research); (3) “Invest in AI Interpretability, Control, and Robustness Breakthroughs” (policy actions include launching a DARPA-led program with CAISI and NSF to advance AI interpretability, control systems and robustness; prioritizing these areas in the National AI R&D Strategic Plan; and coordinating an interagency AI hackathon initiative to test systems for transparency, control and security vulnerabilities); (4) “Empower American Workers in the Age of AI” (policy actions include prioritizing AI skill development across federal education and workforce programs; clarifying tax-free status of AI training to encourage employer investment; analyzing AI’s labor market impacts through BLS, Census and BEA; establishing an AI Workforce Research Hub; funding rapid retraining for displaced workers; and piloting innovative workforce strategies through existing federal authorities); (5) “Enabling AI Adoption” (policy actions include creating regulatory sandboxes and Centers of Excellence for real-world testing; launching domain-specific initiatives to set national AI standards and measure productivity gains; conducting regular DOD-Intelligence Community assessments of global AI adoption; and coordinating interagency efforts to collect and share intelligence on foreign frontier AI projects with national security relevance); and (6) “Protect Commercial and Government AI Innovations” (policy action includes collaboration between DOD, DHS, CAISI and the Intelligence Community with leading U.S. AI developers to help the private sector safeguard AI technologies against cyber threats, insider risks and other security challenges).
[3] Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” was issued on October 30, 2023, by the Biden administration. It directed a wide array of federal agencies to implement extensive oversight measures for AI development, including mandatory reporting requirements, pre-deployment evaluations, and expanded civil rights and labor impact assessments.
[4] We believe this should be viewed in the context of the eleventh-hour removal of provisions prohibiting state regulation of AI in the One Big Beautiful Bill Act (“OBBBA”). OBBBA proposed a moratorium prohibiting states from enacting or enforcing new laws or regulations specific to AI models, systems, or automated decision systems for a period of up to five years (down from the 10-year ban passed by the House). The moratorium’s enforcement was tied to eligibility for $500 million in new federal funding and included a narrow exemption for generally applicable state laws so long as they did not impose an “undue or disproportionate burden” on AI systems and “reasonably effectuated” the underlying regulatory purpose.
[5] Executive Order 14192 was signed by President Trump on January 31, 2025. The Executive Order directs federal agencies to promote fiscal responsibility and reduce regulatory burdens by requiring that, for every new regulation issued, at least ten existing regulations be repealed. Agencies must also ensure that the total incremental cost of regulations finalized in FY 2025 is significantly below zero, with ongoing cost offset requirements in future years.
[6] This may include the existing 2024 FTC consent order against Rytr, resolving allegations that Rytr offered a generative AI-based service enabling users to — among other things — create fake and deceptive online reviews, which the FTC under the Biden administration deemed in violation of the FTC Act. The consent order prohibits Rytr from marketing or selling any service aimed at generating consumer reviews or testimonials.
[7] The Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or TAKE IT DOWN Act, is a United States law aimed at dealing with revenge porn and deepfakes posted to online sites and social media applications, typically made with AI. The bill was introduced by Sen. Ted Cruz in June 2024, passed both houses by near-unanimous votes by April 2025 and was signed into law by President Trump on May 19, 2025.
[8] We note that the AI RMF emphasizes the importance of diverse, inclusive teams in AI governance (Govern Function 3; 3.1) and encourages organizations to identify and mitigate harmful bias, particularly where systems may perpetuate inequality or limit accessibility (AI Risks and Trustworthiness, Section 3.7). On environmental concerns, the AI RMF calls for assessment and documentation of the environmental impact and sustainability of AI model training and management (Measure Function 2.12) and acknowledges the resource intensity and energy demands of modern AI systems (Appendix B).
[9] The Department of Justice (DOJ) is also directed to issue guidance encouraging federal agencies involved in adjudicative processes to adopt a standard for deepfake evidence evaluation, modeled on the proposed Federal Rules of Evidence Rule 901(c), which aims to address the authentication of AI-generated content. DOJ’s Office of Legal Policy should also actively participate in shaping national evidentiary standards by submitting formal comments on any deepfake-related proposals to the Advisory Committee on Evidence Rules.
[10] Other measures in support of building American AI infrastructure include (1) “Restore American Semiconductor Manufacturing” (policy actions include delivering strong ROI and eliminating unnecessary policy requirements for CHIPS-funded semiconductor projects, as well as reviewing grant programs to accelerate AI integration in semiconductor manufacturing); (2) “Train a Skilled Workforce for AI Infrastructure” (policy actions include developing national skill frameworks for AI infrastructure occupations; funding employer-led training and upskilling programs; expanding early career exposure and pre-apprenticeships; updating CTE curricula; growing Registered Apprenticeships in critical AI roles; and supporting hands-on AI training through DOE national labs and partner institutions); (3) “Promote Secure-By-Design AI Technologies and Applications” (policy actions include refining the DOD’s Responsible and Generative AI frameworks and publishing an Intelligence Community Standard on AI Assurance under IC Directive 505); and (4) “Promote Mature Federal Capacity for AI Incident Response” (policy actions include integrating cybersecurity standards and incident response protocols; updating federal playbooks to require coordination between CISOs and AI officers; and encouraging responsible sharing of AI vulnerability information under Executive Order 14306).
[11] Other measures in support of leading in international AI diplomacy and security include (1) “Align Protection Measures Globally” (policy actions include implementing technology protection measures across research, education and diplomacy; aligning allied export controls through a strategic AI diplomacy plan; promoting plurilateral control frameworks; and coordinating with allies to prevent adversary access to sensitive technologies, led by DOC, DOS, DOD, DOE and other partners); (2) “Plug Loopholes in Existing Semiconductor Manufacturing Export Controls” (policy actions include developing export controls on semiconductor manufacturing subsystems not currently covered, closing critical gaps in existing controls led by the DOC); and (3) “Counter Chinese Influence in International Governance Bodies” (policy actions include advocating in international forums for AI governance that promotes innovation, aligns with American values and counters authoritarian influence).