When Data Becomes Law: How Governments Use Information as a Regulatory Tool

In their forthcoming article, “Data as Policy,” legal scholars Janet Freilich and W. Nicholson Price II argue that data is more than a byproduct of governance—it is a potent policy tool in its own right. This perspective challenges the traditional understanding of regulation, which has typically been defined by command-and-control rules, economic incentives, and mandatory disclosures. Instead, the authors suggest that the state’s choice to produce, release, or suppress data can itself drive or deter innovation, shape markets, and guide social norms.

Freilich and Price demonstrate how government-supplied data can function as infrastructure, particularly in innovation-dependent sectors such as healthcare and artificial intelligence. The Human Genome Project, for instance, exemplifies how large-scale public data initiatives can disrupt private monopolies, democratize scientific knowledge, and accelerate research. By placing genetic information into the public domain, the government thwarted efforts by private firms to patent and restrict access to the human genome. The result was not just a biomedical breakthrough, but a policy statement about data as a public good.

In contrast, withholding or removing data can serve deregulatory or even authoritarian purposes. The authors cite recent events such as the CDC's removal of datasets related to sexual orientation and gender identity. These actions don't simply limit research—they can erase communities from public discourse, leaving policymakers and advocates without the evidence they need to act. In this way, government data policy becomes a quiet but decisive tool of inclusion or exclusion.

The article also explores the concept of “data pathologies,” identifying common problems such as bias, fragmentation, and expense. These pathologies degrade the quality and equity of outcomes in fields like AI, where biased or incomplete datasets can lead to systemic discrimination. The authors contend that private markets have little incentive to correct these issues, particularly when data collection is costly and the economic returns are skewed toward well-resourced users. Governments, by contrast, can create high-quality, representative datasets to counteract these failures and promote fairness.

Freilich and Price go on to distinguish their theory from existing frameworks like informational regulation and open government data initiatives. Unlike transparency-focused efforts, which often emphasize passive disclosure for public accountability, their vision is active and strategic. Data is not merely released; it is curated to achieve targeted policy goals—whether to foster legal technology, reform energy usage, or reduce algorithmic bias.

Importantly, the authors do not ignore the risks. They acknowledge the potential for privacy intrusions, abuse of data power, and democratic erosion when data practices lack transparency. But they argue persuasively that these concerns underscore the need for a more deliberate and accountable approach to governmental data policy—not a retreat from it.

Disclaimer: This blog post is a summary for informational purposes only. It is not guaranteed to be accurate and does not constitute legal advice.
