Why Generative AI Isn’t Revolutionizing Government — Yet
In his May 21, 2025 piece, Tiago C. Peixoto examines why generative AI (GenAI) has not yet fundamentally transformed public-sector workflows, despite widespread hype and rising expectations. Drawing on conversations with digital service practitioners and recent analytical studies from the Alan Turing Institute and the Tony Blair Institute, Peixoto finds a consistent pattern: while estimates and benchmarks abound, compelling real-world applications remain scarce. The gap, he argues, is not merely technological—it is institutional, structural, and deeply rooted in the way government operates.
Peixoto notes that many so-called “AI deployments” in the public sector are mislabeled, conflating traditional automation methods like rules-based decision trees and robotic process automation (RPA) with GenAI. Most actual uses of GenAI are bounded and assistive, not transformative. Tools like France’s Albert and Brazil’s MARIA help draft and summarize, but decisions remain in human hands. Utah’s use of retrieval-augmented generation (RAG) models to support call centers shows GenAI’s informational power, yet these tools do not execute workflows or change core systems. The majority of current deployments simulate service support, not actual service delivery.
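To make the RAG pattern concrete: the model’s answer is grounded in passages retrieved from an agency’s own knowledge base, rather than generated from the model’s open-ended memory. The sketch below is purely illustrative—the documents, the keyword-overlap scorer, and the prompt shape are hypothetical stand-ins, not Utah’s (or any agency’s) actual system, and real deployments use vector embeddings rather than word overlap.

```python
# Toy knowledge base of policy snippets an agency might index.
DOCUMENTS = [
    "Renew a driver license online within six months of expiration.",
    "Food assistance applications are processed within 30 days.",
    "Property tax appeals must be filed by May 15.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    Production RAG systems score with embeddings, not word sets."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Constrain the model to retrieved passages, so replies cite
    agency content instead of the model's unverified recall."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only these sources:\n{joined}\n\nQuestion: {query}"

query = "How do I renew my driver license?"
context = retrieve(query, DOCUMENTS)
prompt = build_prompt(query, context)
print(prompt)
```

Note how this design keeps GenAI in the assistive role the article describes: it surfaces and phrases information, while eligibility decisions and workflow execution stay outside the loop.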
Structural barriers loom large. Governments tend to digitize existing processes rather than reimagine services around new technologies. Moreover, GenAI’s variable outputs clash with the deterministic, rules-based design of Weberian bureaucracy, raising accountability and interpretability concerns. This misalignment explains why government agencies often relegate GenAI to marginal tasks, limiting its systemic impact.
Peixoto proposes a shift in approach. Rather than assessing AI against perfect standards, governments should compare GenAI performance to “the best available human” in overloaded or underserved roles. In places with limited capacity, such as parts of sub-Saharan Africa, GenAI may offer greater consistency and accountability than human-led systems. He emphasizes the importance of context: in robust bureaucracies, GenAI must conform to procedural safeguards; in fragile systems, its introduction may be the safest option.
To close the implementation gap, Peixoto calls for disaggregating AI types, focusing on high-value tasks, and emphasizing augmentation over automation. Governments should prioritize construct-valid pilots embedded in real workflows, benchmark performance realistically, and reform procurement models to accommodate iterative deployments. Governance, he argues, must evolve beyond static oversight to embrace probabilistic evaluation and citizen-inclusive design, especially where traditional human oversight is unavailable or unreliable.
The essay concludes with grounded optimism. While full GenAI automation in government remains aspirational, strategic augmentation and tightly scoped delegated autonomy represent viable, impactful pathways forward. The real promise lies not in AI for its own sake but in a more equitable, responsive public sector—one that uses technology to extend access, improve quality, and earn trust.
This blog post summarizes the original article by Tiago C. Peixoto titled “Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It,” published on May 21, 2025. The findings, interpretations, and conclusions expressed here are those of the author and do not constitute legal advice or a guarantee of accuracy.