the conversation gap: who regulates the regulator? the administration’s theory of safety — and whether it has one

A new drug that might cure Alzheimer's requires, on average, ten to fifteen years of trials and more than a billion dollars in development and regulatory compliance before it reaches a single patient. A new AI system that its own creators say could be more dangerous than nuclear weapons requires nothing.

This is not an accident. It is a choice — and one this administration has made with increasing clarity over the past sixteen months. The question worth asking seriously is: is that choice a coherent theory of governance, or is it the absence of one?

Because here is what makes this genuinely difficult: the institutions being dismantled have real failures on their record, and the people defending them have not always been honest about that. The conversation requires holding both things at once, which almost no one is doing.

what the administration has actually done

Let’s start with the facts. The FDA has lost 4,332 employees since the start of the Trump presidency: 3,859 in 2025 and 473 more in 2026. The CDC has lost nearly 3,000 positions. HHS overall has shed more than 17,000 jobs. These are not administrative cuts. They have hit facility inspection programs, the laboratory staff who test drug manufacturing batches, and the support infrastructure for the generic drug supply, which accounts for roughly 90% of all drugs dispensed in the United States. Experts note that the FDA is "losing its institutional memory, and there is no way to replace it quickly — dealing with complex problems requires someone with decades of experience, and there is no shortcut."

The U.S. exit from the WHO (the World Health Organization) removed the United States from the primary international body for coordinating pandemic response, disease surveillance, and global health standards.

On AI, the posture is the mirror image: not dismantling a regulator, but preventing one from ever existing. The March 2026 national AI policy framework explicitly prohibits creating a new federal AI regulator, recommending instead "existing regulatory agencies with subject matter expertise and industry-led standards." The December 2025 executive order frames "excessive state regulation" as the primary obstacle to American AI dominance and establishes a litigation task force to challenge state-level AI laws. As of this week, the administration is internally divided over whether national security agencies should have more authority over AI, a debate happening inside the executive branch with no congressional oversight structure in place to observe it.

Gut the institutions that regulate drugs and disease. Block the ones that might regulate AI. Call it a safety strategy. The question is whether it is.

the case that it is: regulatory capture is real

The defenders of this approach are pointing at genuine failures — and those failures deserve to be named accurately.

The OxyContin story is instructive, but not in the way it's usually told. The FDA approved OxyContin correctly; the drug did what it said it would do. What killed 500,000 Americans over two decades was not the approval. It was the prescribing system that surrounded it: physician kickbacks, pharmaceutical marketing that misrepresented addiction risk, insurance frameworks that created demand, and a healthcare incentive structure with no interest in restraint. The FDA's gate held. Everything after the gate failed.

This is, if anything, a stronger indictment of regulatory thinking than the simple version. It means that even when pre-market approval works exactly as designed, the system can still produce catastrophic harm, because approval is not safety; it is a checkpoint. What the opioid crisis revealed is that a checkpoint without ongoing accountability infrastructure, without physician payment transparency, without systemic oversight of prescribing patterns, is not a safety system. It is a permission slip.

The WHO failure on COVID is harder to explain away. The organization delayed declaring a public health emergency. It deferred to Chinese government reporting. It declined to confirm human-to-human transmission until late January 2020, costing the world weeks of preparation. These are not fringe criticisms; they are documented in the organization's own after-action reviews. The United States was contributing roughly $600 million annually to an institution that, in its most important moment, made decisions that cost lives.

These failures are real, and they should inform how we design what comes next. But they are not, on their own, an argument for having nothing.

the sunshine act and what expertise actually produces

Here is what the deregulatory argument leaves out: the solutions to regulatory capture were themselves products of expertise in government.

Republican senator Chuck Grassley and Democratic senator Herb Kohl first introduced the Physician Payments Sunshine Act together in 2007. It failed. They revised it, built coalitions, and got it passed in March 2010 as part of the Affordable Care Act. The law required pharmaceutical and medical device manufacturers to publicly report every payment made to a physician or teaching hospital: meals, speaker fees, consulting contracts, research funding. All of it, publicly searchable. Since 2017, 88 million transactions have been recorded on the Open Payments platform, accounting for $76.9 billion in payments and declared interests.

The Sunshine Act did not eliminate industry influence on medicine, but it created the infrastructure of visibility that makes accountability possible. You cannot address a conflict of interest you cannot see. The act was the product of senators who understood how drug markets worked, how physician incentives operated, and how to write legislation that would change behavior without banning the relationships entirely. That is what expertise in government produces — not perfect outcomes, but the specific, targeted mechanisms that make correction possible when things go wrong.

Grassley, notably, is a Republican. The Sunshine Act was not a partisan document; it was a technical one. It required people in government who understood the system well enough to design an intervention that fit it.

Who writes that law in an administration that has eliminated 4,300 people from the FDA? Who designs the equivalent for AI — the mechanism that makes it possible to see when something has gone wrong, trace it to its source, and correct it — without institutional knowledge of how these systems actually work?

The answer, increasingly, is no one.

where the AI argument breaks down

The administration's framework recommends regulatory sandboxes, industry-led standards, and reliance on existing sector regulators for AI oversight. The theory is that the market, existing law, and sector-specific expertise will handle problems as they emerge.

This might work if AI were a sector, but it is not. It is infrastructure — the same way electricity is infrastructure, the same way the internet is infrastructure. It crosses every regulated domain simultaneously: finance, healthcare, defense, media, employment, education. The FDA can regulate an AI that diagnoses cancer, but it cannot regulate the same AI system when it's also used to generate synthetic media, price insurance, screen job applicants, and brief military commanders. "Existing sector regulators" is not an answer when the technology has no sector.

The administration's own executive order acknowledges that "state-by-state regulation creates a patchwork of 50 different regulatory regimes that makes compliance more challenging." It uses this argument to block state regulation but not to build federal regulation. The conclusion it draws is: therefore, nothing. The logic of the argument points toward federal coordination; the policy points toward a vacuum.

And the vacuum is not neutral. When the Defense Department blacklisted Anthropic for maintaining AI safety guardrails while OpenAI and xAI agreed to "all lawful use" military standards, it revealed the actual preference: not for less regulation, but for no constraints on government use of AI specifically. A theory of safety that exempts the government from its own logic is not a theory of safety. It is a theory of power.

what the record actually shows

The administration has not replaced what it is dismantling. That is the core of the indictment.

A coherent deregulatory theory of safety would say: the FDA model is too slow, too captured, and too expensive — here is the faster, leaner, more transparent version we are building in its place. Here is how we will know when something goes wrong — and here is who is accountable.

It would say: the WHO failed on COVID — here is the alternative architecture for international health coordination.

It would say: we oppose a federal AI regulator because the existing regulatory model is wrong for this technology — and here is the mechanism, a Sunshine Act equivalent for AI, that will create visibility into harms so we can correct them.

None of those things have been said. What has been said is: regulation slows innovation, institutions are captured, trust the market.

The problem is that the market is not what produced the Sunshine Act. Experts in government, with the patience to understand a complex system and the legislative skill to design a targeted intervention, produced it. And it took three years of failed attempts before it passed. There is no shortcut to that kind of knowledge — which is precisely what the people who have been let go from the FDA and the CDC carried with them when they left.

the open question

Is there a coherent theory of public safety that is less bureaucratic, less captured, and more honest about institutional failure — while still taking seriously the question of what happens when powerful technologies go catastrophically wrong?

The opioid crisis tells us that the problem was never the gate. It was the absence of ongoing visibility into what happened after the gate. The Sunshine Act was the answer to that problem — built by people who understood the system from the inside.

Who is doing that work for AI? Who is building the mechanism that will make it possible to see, trace, and correct harm when it occurs? And how do you design that mechanism without the institutional knowledge that has spent the last sixteen months walking out the door?

The gap between a governing philosophy and a vacancy looks the same from the outside, until it doesn't.
