- 4 August 2025
As the Chartered Governance Institute's AI training expert, I'm writing to you from the US, where I've been witnessing firsthand the dramatic shift in America's approach to artificial intelligence governance. The Trump administration's recently unveiled AI Action Plan represents nothing short of a complete philosophical reversal from the previous administration's regulatory framework, and the implications for global AI governance are profound.
A new era of AI policy
On July 23, 2025, the White House released "Winning the AI Race: America's AI Action Plan", outlining over 90 federal policy actions across three pillars: Accelerating Innovation, Building American AI Infrastructure, and Leading in International Diplomacy and Security. Having spoken to several AI startups in the US, I have heard both excitement and fear: this administration views AI not merely as a technological advancement, but as a national security imperative. The plan's urgency stems from what officials describe as an existential competition with China. "I'm here today to declare that America is going to win" the global AI race, Trump declared. "We're gonna win it, because we will not allow any foreign nation to beat us." This zero-sum framing fundamentally shapes every aspect of the policy approach.
Deregulation as strategy
What strikes me most about this plan is its aggressive deregulatory stance. The administration has revoked Executive Order 14110 of October 30, 2023, which established safety and security requirements for AI development. Trump explicitly stated his intention to eliminate "woke" AI measures, declaring "Once and for all, we are getting rid of 'woke' – is that OK?" However, digging deeper, I have looked at several of the relevant bodies, such as the SEC (Securities and Exchange Commission) and NIST (National Institute of Standards and Technology), and their work on ethical and responsible AI remains at the core of any AI deployment, especially in regulated industries. NIST's AI Risk Management Framework (AI RMF), released in January 2023, provides voluntary guidance built around four core functions - Govern, Map, Measure, and Manage - to help organisations incorporate trustworthiness considerations into AI design, development, and deployment. This framework remains influential across regulated sectors, demonstrating that despite the administration's deregulatory rhetoric, established standards for responsible AI continue to guide practice. A minimal sketch of how a governance team might organise its own records around those four functions follows below.

From a governance perspective, this represents a fascinating case study in regulatory philosophy. Where the Biden administration focused on risk mitigation and civil rights protections, the Trump approach prioritises market-driven innovation. The plan involves removing what administration officials described as "bureaucratic red tape" around AI development and is based on recommendations from the private sector.
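For readers who prefer to see the RMF's four functions made concrete, here is a minimal, hypothetical Python sketch of an internal risk register keyed to Govern, Map, Measure, and Manage. The class and field names (AIRiskRegister, RiskEntry, and so on) are my own illustrative assumptions, not NIST artefacts or official tooling; real programmes would map these functions onto their existing risk and compliance systems.

```python
# Hypothetical sketch: a minimal risk register organised around the four
# core functions of NIST's AI RMF (Govern, Map, Measure, Manage).
# Class and field names are illustrative only, not NIST tooling.
from dataclasses import dataclass, field
from enum import Enum


class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"


@dataclass
class RiskEntry:
    system: str            # the AI system under review
    function: RMFFunction  # which RMF function this item sits under
    description: str       # the trustworthiness concern being tracked
    owner: str             # accountable role, in the spirit of Govern
    status: str = "open"


@dataclass
class AIRiskRegister:
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def by_function(self, function: RMFFunction) -> list:
        return [e for e in self.entries if e.function == function]


if __name__ == "__main__":
    register = AIRiskRegister()
    register.add(RiskEntry(
        system="credit-scoring-model",
        function=RMFFunction.MEASURE,
        description="Track disparate impact metrics across customer groups",
        owner="Model Risk Committee",
    ))
    for fn in RMFFunction:
        print(fn.value, "->", len(register.by_function(fn)), "item(s)")
```

The point of the sketch is simply that the framework's functions lend themselves to being tracked as structured records with named owners, which is how many regulated firms already operationalise voluntary guidance.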
Infrastructure and industrial policy
The plan's infrastructure components are perhaps its most concrete elements. Key initiatives include expediting and modernising permits for data centres and semiconductor labs, as well as creating new national initiatives to grow the workforce in high-demand occupations such as electricians and HVAC technicians. There is genuine excitement among Americans about the potential for AI to drive massive infrastructure investment. The administration is also pursuing what it calls "full-stack AI export packages", offering hardware, models, software, applications, and standards to America's friends and allies around the world. This represents a sophisticated attempt to create technological dependencies that could cement American AI leadership globally.
The copyright conundrum
One of the most controversial aspects I've observed in discussions here concerns intellectual property rights. Trump has taken a notably industry-friendly stance on training data, suggesting that AI models shouldn't plagiarise but also can't be expected to go through "the complexity of contract negotiations" for the materials they learn from. This position puts the administration at odds with publishers and content creators but aligns perfectly with the tech sector's preferences. From a governance standpoint, this raises fundamental questions about how we balance innovation incentives with creators' rights, questions that will likely ripple through international IP frameworks, especially with pending lawsuits such as the one between The New York Times and OpenAI.
Federal versus state authority
Perhaps most significantly for governance practitioners, Trump's plan could limit federal funding for states that pass AI laws deemed "burdensome" to developing the technology, with Trump stating "We need one common-sense federal standard that supersedes all states". This represents a remarkable assertion of federal pre-emption in an area traditionally governed by state authority. Even supportive companies like Anthropic have pushed back, stating they "continue to oppose proposals aimed at preventing states from enacting measures to protect their citizens from potential harms caused by powerful AI systems".
International implications
What I find most intriguing from an international governance perspective is how this plan positions America globally. Unlike other policy measures from this administration focused purely on domestic concerns, this action plan moves toward international engagement, including leadership in frontier technology research and the creation of global governance standards. This suggests a recognition that AI governance cannot be achieved in isolation: even an "America First" administration acknowledges the need for international coordination in this domain. Consider American companies with clients in Europe: they will still need to comply with the EU AI Act or face substantial fines.
Looking ahead
This latest plan will likely accelerate the global AI governance divide. While the EU pursues comprehensive regulation through its AI Act and the UK seeks a balanced approach, America is betting everything on innovation-first policies. As the nonpartisan US think tank the Atlantic Council notes, "We are in an era of increasing geopolitical competition, increased interdependence, and rapid technological change. No single issue demonstrates the convergence of all three better than AI". The success or failure of America's deregulatory gambit will profoundly influence global AI governance for years to come. For governance professionals worldwide, the Trump AI Action Plan represents both a cautionary tale about regulatory capture and a bold experiment in innovation policy that demands our careful attention. As I continue monitoring these developments globally, one thing is clear: the stakes for getting AI governance right have never been higher, and America has just placed its bet decisively on the side of unfettered innovation.
Harmeen Birk is a global AI specialist with more than 20 years in finance and technology. A former Citigroup leader and founder who exited her Fintech company in 2019, she is also the founder of Across the Board AI, empowering financial institutions to innovate responsibly. At the Institute, she develops practical frameworks and training to help governance professionals lead confidently in the AI era.
Explore our AI training and resources.