2026-01-16

One of the most interventionist approaches to technology governance in the United States in a generation has cloaked itself in the language of deregulation. In early December 2025, President Donald Trump took to Truth Social to announce a forthcoming “One Rule” executive order on artificial intelligence (AI). Trump warned that US leadership in AI would be “DESTROYED IN ITS INFANCY” by the meddling of “50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS.” But beneath the bluster was a consequential policy move: the federal preemption of state authority to govern AI, framed as the removal of bureaucratic obstacles from the path to American technological dominance.

To speak of regulation is to conjure a particular image: the federal agency, the notice-and-comment process, the rule published in the Federal Register. This is regulation as visible, formal, legible—and therefore open to dispute and debate. But governance does not vanish when it stops looking like this. Decades of social science research make clear that the steering of policy occurs through mechanisms far more varied than formal rulemaking. Much of the classic work on regulation focused on how interests contest rules once they are on the books. Economist and Nobel laureate George Stigler showed that regulation serves concentrated interests, but his analysis still presumed the formal rule as the site of struggle. Other strands of scholarship have shown that regulatory power often operates earlier and through the shaping—and breaking—of shared expectations about what kinds of governance are legitimate or even imaginable. Political scientists Martha Finnemore and Kathryn Sikkink identified one such mechanism: norm entrepreneurship, the work of establishing collective understandings about appropriate behavior that shape what is possible before any rule is written.[1]

All administrations engage in norm-setting; this is an ordinary feature of governance. What distinguishes the current approach is not norm entrepreneurship itself but its character: a greater reliance on executive discretion than on deliberative, public processes.

The Trump administration is engaged in norm destruction—breaking expectations about transparent governance and public oversight while installing new assumptions about how technological development should be directed. What it has advanced is not the absence of AI regulation but its rearrangement, often by caprice: intensive state intervention operating through industrial policy, trade restrictions, immigration controls, equity stakes in private firms (selected by the state), the redirection of research funding, and the strategic preemption of state authority. Many of these actions face legal challenge, and some may not survive judicial review. But the pattern itself—the systematic preference for executive discretion over deliberative process—reveals an approach to governance that will shape AI policy regardless of how individual cases are decided. This is not deregulation. Not in the least. It is hyper-regulation by other means.

When Michael Kratsios, the White House science and technology advisor, traveled to Montreal in December 2025 to address G7 technology ministers, he was not merely lobbying against specific rules. He was working to establish as common sense the proposition that government oversight represents an obstacle to innovation rather than a precondition for it. The “trusted AI ecosystem” he advocated—built on “smart, sector-specific regulations tailored to each nation’s priorities”—is itself a normative framework, one that privileges certain values (a narrow idea of innovation, national competitiveness) while marginalizing others (innovation broadly understood to include social and economic benefits, precaution, public accountability, the rights of those affected by AI systems). The Trump administration is not withdrawing from AI regulation; it is attempting to set the terms on which AI governance will be understood globally.

Investment in semiconductors and critical minerals was a priority of the Biden administration as well. What distinguishes the current approach is not industrial policy itself but how it operates—circumventing ordinary accountability mechanisms in favor of discretionary power.

This is industrial policy of a scope and ambition that would have seemed inconceivable even a decade ago—particularly from a party historically committed to free enterprise and opposed to government involvement in the private sector. It represents not the absence of state intervention in markets but its intensification, conducted through regulation by ownership. When the government acquires equity in private firms, it gains leverage: influence over where factories are built, which technologies are prioritized, and which partnerships are deemed acceptable.

Immigration policy provides another, less visible lever of AI governance. Restrictions on high-skilled immigration—implemented through visa screening, fees, and administrative guidance—shape who participates in US AI research and development. Concerns about the potential displacement of US workers are not unfounded, but the governance implications extend beyond labor markets. Restricting this pipeline does not just limit who works in laboratories, firms, and universities; it determines which perspectives and expertise circulate within them, shaping both what gets built and what questions can be asked about its consequences.

At the same time, research funding has been redirected through executive action toward narrowly defined AI development priorities, while work examining the social, labor, and health implications of the widespread deployment of AI models, tools, and systems has been defunded or deprioritized. The administration has framed research on algorithmic discrimination, disparate impact, and the environmental costs of AI systems as political bias rather than empirical inquiry. This rhetorical move attempts to sideline the substantive premise that AI governance should attend to collective goods. Democratic accountability concerns not just how decisions are made but whose interests they protect and whose harms they make visible.

These decisions restructure the intellectual resources of the scientific enterprise itself, influencing not only what technologies are built but which questions can be asked about their consequences. This is AI governance operating upstream of regulation, through control over the people and knowledge that constitute the field.

But the solution matters as much as the problem it addresses. Federal preemption imposed by executive action rather than congressional legislation replaces diffuse but locally accountable policy with concentrated but unaccountable governance. Framed as relief from regulatory burden, preemption represents an aggressive assertion of federal authority that forecloses democratic experimentation at the state level. Whatever one’s view of state-level AI policy, federal preemption is itself a form of regulation—one that concentrates power while insulating it from local accountability.

Here the regulation-deregulation binary collapses entirely. The Trump administration is not removing government from AI regulation; it is concentrating governmental power at the federal level while deploying it through mechanisms—investment, ownership, research funding, immigration controls, and preemption—not typically classified as “regulation.” The effect is a regulatory regime simultaneously more intensive and less transparent, less democratically accountable, than the rulemaking it displaces. Political economist Dani Rodrik has observed that governments always engage in industrial policy; the question is whether they do so deliberately and transparently or “surreptitiously and without an overall strategic frame.”

The current administration’s AI policy may be a case study in the latter. The decisions being made now may determine US technology research, development, and deployment infrastructure for decades; early choices at critical junctures lock in trajectories that become increasingly difficult to reverse. An administration proclaiming deregulation is making precisely such choices through mechanisms that escape the ordinary channels of regulatory debate. These choices are hyper-regulatory and interventionist, yet they sit outside the traditional vocabulary of regulation while exercising power every bit as consequential.

Procedures such as public notice, comment periods, reasoned decision-making, judicial review, and congressional approval are hard-won mechanisms of public oversight. These principles must be extended to the new modalities through which governance now operates: the equity stake chosen in private, the research priority set by executive action, the visa rule implemented through guidance.

Regulation is not just a matter of abstract procedures and frameworks. Governance is always governance of someone. When a graduate student loses funding for scholarship on algorithmic bias, that is not an abstraction—it is AI policy in practice. When a machine learning researcher cannot obtain a visa to return to her lab, that is AI regulation at work. And when a rare earth miner’s labor unfolds within constraints set by trade policy, ownership arrangements, and government-defined priorities, governance is operating through material conditions rather than formal rules.

When decisions about the technologies that will shape our work, our health, our capacity to know and to communicate are made without public justification, outside ordinary channels of accountability, we are not governing ourselves; we are being governed. The mirage of AI deregulation obscures this truth. Power has not retreated. It has moved—and democratic accountability requires following it there. Seeing clearly is where that work begins.


  1. An analog of the Overton Window?