Buying Blind: Why Federal AI Procurement Needs Stronger Oversight
In Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement, Professor Jessica Tillipman argues that the federal government’s accelerating adoption of artificial intelligence is outpacing the legal and institutional safeguards needed to preserve procurement integrity. This post, with full credit to the author, summarizes Tillipman’s central thesis: the government is increasingly “buying blind,” acquiring AI tools without sufficient transparency, audit rights, testing protocols, or internal capacity to understand the systems it is deploying.
Tillipman’s argument is grounded in a simple but consequential proposition: acquisition decisions made now will shape future operational risks. In her account, recent federal AI policy has emphasized speed, commercial acquisition, and deregulation, while narrowing the government’s practical ability to negotiate protective terms. That shift matters because AI is not merely another software purchase. It is a layered, data-dependent technology whose risks may originate in infrastructure, model architecture, customization, application integration, or weak human oversight. When agencies procure AI without meaningful visibility into those layers, they inherit systems that may be opaque, difficult to audit, and resistant to independent verification.
A major contribution of the article is its distinction between traditional corruption risks and broader integrity risks. Tillipman explains that AI can magnify familiar concerns such as organizational conflicts of interest, fraud, falsification, and supply chain manipulation. But it also creates structural vulnerabilities that do not always look like conventional corruption at first glance. These include contractor lock-in, promotional pricing used as a strategic buy-in, automation bias, reduced workforce competence, limited auditability, and technical flaws such as hallucinations that can nonetheless distort public decision-making. The danger is that such weaknesses may become embedded in procurement systems and continue operating long after the individual actors responsible have left the scene.
The article is equally practical in its recommendations. Tillipman does not assume sweeping legislative reform is imminent. Instead, she proposes immediate safeguards that agencies can adopt within existing procurement frameworks: stronger contractual provisions for transparency and audit rights, independent verification and third-party assessments, red-teaming and testing protocols, documented human review for high-risk uses, development of AI procurement guides, and investment in an AI-literate acquisition workforce. She also urges the use of institutional oversight mechanisms, including designated AI Integrity Advocates and stronger cross-agency audit coordination.
The broader lesson of Buying Blind is that governance and innovation are not opposing values. Tillipman contends that governance is what makes innovation sustainable, because it preserves competition, strengthens trust, and reduces the likelihood that procurement systems become vulnerable to hidden manipulation. Her warning is clear: if agencies fail to embed safeguards now, federal AI procurement may drift toward dependency, opacity, and diminished accountability at precisely the moment when public trust is most needed.
Disclaimer:
This blog post is a summary and commentary based on Jessica Tillipman’s article, Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement, forthcoming in the Public Contract Law Journal (Winter 2026). It is provided for informational and educational purposes only and does not constitute legal advice.