10 Takeaways from Trump’s AI Action Plan

By Courtney King | July 24, 2025

Yesterday the Trump administration released its America First AI Action Plan, outlining how the White House intends to approach the development and regulation of artificial intelligence.

While the document reads more like a campaign statement than a detailed policy blueprint, it offers early clues about how this White House might handle AI governance. At Stillwater Insights, we read the full report so you don’t have to. Below are 10 takeaways that matter for anyone building, deploying, or regulating AI in the U.S.

1. The plan is light on governance and heavy on innovation.

The bulk of the document focuses on accelerating AI development, not regulating it. There are repeated promises to “unleash” American innovation and remove “regulatory barriers,” but almost no concrete guidance on managing AI risks or enforcing ethical standards.

2. Expect a rollback of Biden-era executive orders.

Trump pledges to rescind recent AI-related executive orders, particularly those that require federal agencies to assess AI risk or incorporate ethical design principles. Instead, his administration would prioritize deregulation and commercial freedom.

3. There’s a clear rejection of the precautionary principle.

The Action Plan critiques what it calls “fear-driven AI regulation,” asserting that U.S. competitiveness is being stifled by overcautious risk assessments. This marks a sharp contrast with the EU’s risk-based AI Act or Canada’s evolving approach to AI governance.

4. Open-source AI is a strategic priority.

The document frames open-source AI as a national security asset, not a threat. It calls for U.S. government support of open-source models as a counterbalance to centralized control by large tech firms—or foreign adversaries.

5. Antitrust enforcement could heat up—but selectively.

The plan accuses “a handful of Big Tech firms” of monopolizing AI development. While vague, this rhetoric suggests a possible return to populist antitrust energy—potentially targeting incumbents like OpenAI, Google, or Anthropic.

6. Data privacy is not a central concern.

The plan barely mentions data privacy or surveillance risks. There is no indication of support for a national data privacy law or stronger protections against biometric tracking, AI-driven profiling, or facial recognition abuse.

7. There’s support for AI in education, defense, and healthcare.

Expect targeted investments in sectors where AI is seen as a productivity booster. Trump’s plan sees AI as key to reshoring manufacturing, streamlining healthcare, and modernizing the military—without much discussion of unintended consequences.

8. China remains the focal point.

As with much of Trump’s platform, the Action Plan paints AI competition in explicitly geopolitical terms. China is referenced repeatedly as a threat to U.S. dominance, and AI development is framed as a Cold War–style tech race.

9. Ethics is reframed as ‘American values.’

Rather than endorsing multilateral frameworks or universal human rights standards, the Action Plan calls for AI aligned with “American values” and “constitutional principles.” That framing leaves open the question of who defines those values, and how.

10. Implications for companies: prioritize agility.

If implemented, this plan would create a more permissive regulatory environment in the U.S., but also a more volatile one. Companies should prepare for rapid shifts in compliance expectations depending on who’s in the White House, and where their models are deployed globally.

Final Thoughts

Trump’s AI Action Plan reflects a deregulatory, nationalistic approach to artificial intelligence. It prioritizes speed, sovereignty, and competition over caution or coordination. For organizations navigating AI governance, the message is clear: U.S. policy is still deeply fragmented, and the next few years may bring even more divergence across borders and political administrations.

At Stillwater, we help teams stay grounded through uncertainty. If you’re navigating AI risk, compliance, or organizational readiness, we’re here to help.
