The Trump administration is preparing a significant executive order designed to assert federal control over artificial intelligence regulation in the United States, a move that would directly challenge states attempting to create their own rules for the burgeoning technology.
The Federal Power Play
According to a draft order obtained by Axios, the plan would empower the Justice Department to legally challenge US states that are considering or have implemented their own AI regulations. The administration aims to steer the national approach to AI governance and prevent what it sees as a fragmented regulatory landscape.
David Sacks, the Trump-appointed czar for AI and cryptocurrency, has been a vocal proponent of this centralised approach. He has warned that leaving regulation to individual states would burden the industry with excessive rules, creating what he describes as an 'asymmetrical threat' that could hinder American competitiveness against China's rapidly advancing AI sector.
Commerce Secretary Howard Lutnick reportedly emphasised in a private meeting that the administration believes it can act on its concerns without requiring Congressional approval. However, a White House official tempered expectations, telling the Daily Mail that 'until officially announced by the WH, discussion about potential executive orders is speculation.'
Mechanisms of Enforcement
The proposed order, which remains subject to change, outlines specific mechanisms for enforcement. It would instruct Attorney General Pam Bondi to establish an 'AI Litigation Task Force' within 30 days of the order being signed.
This specialised legal team would be authorised to litigate against state-level AI laws, potentially arguing that such regulations unconstitutionally interfere with interstate commerce. Furthermore, the draft order directs various US federal agencies to scrutinise existing state AI laws that might conflict with the forthcoming executive action.
States refusing to align with the federal position risk losing access to crucial federal grant funds, according to the current proposal. This creates a significant financial incentive for compliance with the administration's vision for a unified national AI policy.
Context and Controversy
The reported plan emerges amidst heightened White House focus on artificial intelligence. Just one day before the draft order was revealed, First Lady Melania Trump delivered a speech to US service members and their families at Marine Corps Air Station New River in Jacksonville, North Carolina, warning about what she termed the 'dystopian' dangers of the nascent technology.
'To win the AI war, we must train our next generation, for it's America's students who will lead the Marine Corps in the future,' she told military personnel on Wednesday, characterising the AI revolution as the most pressing societal change since the development of nuclear weapons.
This push for federal preeminence follows a failed attempt to include a 10-year moratorium on state AI regulations in Trump's 'Big Beautiful Bill' during the summer. That provision faced strong opposition from Congresswoman Marjorie Taylor Greene, who publicly condemned it as an infringement on states' rights.
'I am adamantly OPPOSED to this and it is a violation of state rights,' Greene wrote in June. 'We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous.' The Senate subsequently amended the bill and removed the moratorium.
Sacks expressed his frustration with this outcome on the All-In podcast that he co-hosts, stating, 'Republicans are in power in Washington, and the states are making a bunch of bad decisions with respect to AI.' He argued against a scenario where AI companies must report to 50 different states and agencies, each with varying definitions and deadlines.
The administration's concern is particularly focused on major Democratic-led states like California and New York, which could draft influential regulations that might be adopted by other states. Sacks questioned, 'Why would you allow the big blue states to essentially insert DEI into the models, which will affect the red states too?'
This concern appears validated by recent legislative action. In September, California Governor Gavin Newsom signed a law mandating that large AI companies annually publish their safety protocols, risk mitigation strategies, and report critical safety incidents. Although this law does not take effect until 2026, advocates hope it will serve as a framework for other states like New York to follow.