The battle over who governs artificial intelligence is no longer theoretical. In the first weeks of March 2026, AI regulation is moving at legislative speed — with states, federal agencies, and foreign governments all pulling in different directions at once.
The Federal vs. State Tug-of-War
In December 2025, President Trump signed an executive order aimed at preempting state AI laws, arguing that a patchwork of 50 different regulatory frameworks would strangle innovation and cede ground to China. The order set up a direct confrontation with states that have been the most aggressive legislators on AI safety.
That confrontation is now playing out in real time. New York has multiple active AI bills, including the Artificial Intelligence Training Data Transparency Act, which would require developers to publicly disclose the datasets used to train their models. The bill advanced to third reading in the state Senate on March 4. Florida’s Governor DeSantis is pushing his own “AI Bill of Rights” — a broad bill that passed the state Senate the same week. Vermont signed new legislation on synthetic media in elections into law on March 5, becoming one of the first states to regulate AI-generated political content.
What the Laws Actually Require
Across the active state bills, several themes keep appearing. Transparency mandates require that AI-generated or AI-modified content carry provenance data so consumers can identify it as synthetic. Data disclosure requirements target training datasets, aiming to expose copyright and privacy concerns baked into foundation models. Liability frameworks attempt to assign responsibility when AI systems cause harm — a question courts are increasingly being asked to answer in the absence of clear statute. Healthcare and housing bills in multiple states limit how AI can be used in decisions affecting insurance, loan approvals, and rental housing access.
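To make the transparency-mandate idea concrete, here is a minimal sketch of what a provenance record attached to AI-generated content might look like. The field names and structure are hypothetical illustrations of the kinds of disclosures the bills describe, not the schema of any particular statute or standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_record(content: bytes, model_name: str, ai_modified: bool) -> dict:
    """Illustrative provenance metadata for a piece of synthetic content.

    All field names here are hypothetical; real-world provenance formats
    (and what a given state law actually requires) will differ.
    """
    return {
        # Hash lets a consumer verify the record refers to this exact content.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_name,          # which system produced the content
        "synthetic": True,                # flagged as AI-generated
        "ai_modified": ai_modified,       # AI-edited rather than fully generated
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

record = build_provenance_record(b"example output", "example-model-v1", ai_modified=False)
print(json.dumps(record, indent=2))
```

The design point is that provenance travels with the content: a consumer (or regulator) can recompute the hash and confirm the disclosure matches what they received.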
The Global Dimension
Europe’s AI Act has been in phased implementation since 2025, and regulators are closely watching how companies comply. The UK’s Information Commissioner’s Office and Ofcom recently issued a formal demand to Elon Musk’s xAI for information about the Grok model — one of the first major regulatory actions targeting a specific model’s behavior rather than a company’s data practices.
AI Companies Are Not Passive
The lobbying campaign by AI companies against state-level regulation is intense. The industry’s core argument is that inconsistent rules across states will make compliance impossible and push development offshore — framing deregulation as a national security imperative in the US-China AI competition. This narrative has found significant traction in Washington, even as the underlying safety concerns that motivated state-level action remain unresolved.
The Pragmatic View
For organizations deploying AI, the regulatory uncertainty is itself a risk factor. The safest approach is to build AI systems that would pass strict transparency and auditability requirements even if those aren’t yet legally mandatory — because in many jurisdictions, they likely will be within two to three years. Document training data sources. Log model decisions. Build explainability into workflows. Companies that treat compliance as an architectural property rather than a retroactive checklist will be far better positioned as the legal landscape solidifies.
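The "log model decisions" recommendation can be sketched in a few lines. This is a minimal, assumption-laden example: the model name, field names, and threshold rationale below are all hypothetical, and a production audit trail would add access controls, retention policies, and tamper evidence.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: each model decision is written out with its
# inputs, output, model version, and a human-readable rationale, so the
# decision can be reconstructed for a later audit or regulator inquiry.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_decision(model_version: str, inputs: dict, output: dict, rationale: str) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # pin the exact model that decided
        "inputs": inputs,                 # what the model saw
        "output": output,                 # what it decided
        "rationale": rationale,           # explanation for human reviewers
    }
    audit_log.info(json.dumps(entry))
    return entry

# Hypothetical usage: a lending-style decision of the kind several state
# bills single out for scrutiny.
entry = log_decision(
    model_version="risk-scorer-2026-03",
    inputs={"applicant_income": 52000, "region": "NY"},
    output={"decision": "approve", "score": 0.81},
    rationale="Score above 0.75 approval threshold",
)
```

Treating every decision as a structured, timestamped record is what turns compliance into an architectural property: the audit trail exists by construction rather than being reassembled after a regulator asks.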