Making AI Work for the Public: Why the ALT Framework Matters for Federal Contractors

The New America/RethinkAI report, Making AI Work for the Public, argues that government’s current AI push is too narrowly fixated on efficiency, and it proposes a governance framework—Adapt, Listen, Trust (ALT)—that reframes how public institutions should deploy AI. For federal contractors, this shift is not academic; it foreshadows how solicitations will be scoped, how performance will be measured, and how risk and accountability will be allocated over the next cycle of procurements. The authors’ field scan shows a remarkable legislative surge—more than 1,600 state AI bills since 2019, with 735 in the first half of 2025 alone—and finds that roughly three quarters are “controlling” measures that set guardrails, audits, and bans rather than enabling transformation. That backdrop will raise compliance burdens across multi-state delivery while leaving agencies hungry for partners who can operationalize AI responsibly and measurably.

The report’s ground truth is that adoption is happening, but mostly in walled gardens. States are building sandbox environments, issuing enterprise guidance, and pairing access to tools with mandatory upskilling—some enrolling hundreds of thousands of public employees in training. Cities, by contrast, are piloting targeted use cases—translation, memo drafting, permitting triage, safety analytics—often led not by mayors’ innovation offices but by CIOs who are publishing playbooks and demanding pragmatic, production-ready solutions. For vendors, this means procurement decisions will increasingly sit with IT and enterprise architecture stewards, and proposals that stop at “faster, cheaper” will underperform against ones that integrate workforce enablement, operational process redesign, and post-deployment learning loops.

ALT’s first pillar—Adapt—speaks directly to how contractors should design implementations and price risk. The report stresses that well-placed AI removes friction and therefore amplifies demand; a self-service chatbot that simplifies requests will increase service volumes, not simply “save time.” Contractors who forecast demand surges, stress-test processes, and reallocate human work toward high-judgment tasks will be better positioned to meet service-level agreements and withstand scrutiny from inspectors general and legislators. Proposals that bundle forecasting models, triage redesign, and change-management plans with agentic automation will align with this adaptive posture.

The second pillar, Listen, reframes “user research” as durable context engineering. Rather than clever prompts, agencies will need institutional memory—data models, ontologies, and governance that let AI interpret policies, budgets, and services against resident needs. For federal contractors, that invites a different deliverable mix: low-code/no-code tools that help program staff translate rules into plain language; pipelines that fuse 311 logs, benefits data, and unstructured transcripts; and multilingual accessibility by default. It also implies new evidence artifacts—traceable reasoning, model cards tied to civic datasets, and participatory testing—that will become evaluation criteria in source selections and task order competitions.

Trust, the third pillar, is the most consequential for federal work. The report advocates two-way accountability—community-controlled data, civic data trusts, resident-facing impact dashboards—rather than one-way transparency. Expect solicitations to require measurable “trustworthiness” alongside efficiency: fairness, responsiveness, usefulness, and observable outcomes such as environmental stress reduction or safety improvements. Vendors who can operationalize data-sharing compacts, privacy-preserving analytics, and continuous public feedback mechanisms will differentiate themselves—especially where agencies face a “vision vacuum” but intense legislative pressure.

Finally, the ecosystem analysis is a contracting roadmap: philanthropy is funding field-building and prototypes, universities are pivoting to production-grade civic projects, and peer coalitions are harmonizing procurement expectations. For federal contractors, partnering upstream with these actors can de-risk proofs of concept, furnish credible third-party evaluation, and accelerate readiness for federal scale. In short, the report is significant because it highlights where policy is going, what buyers will ask for, and how contractors can evolve from tool deployers to governance partners who deliver results residents can see and trust.

Disclaimer: This summary is informational and reflects the cited report’s findings at the time of writing. It is not legal advice, does not create an attorney-client relationship, and readers should consult counsel and official guidance before acting on regulatory or contractual matters.
