The Surge: How Federal Agencies Are Adopting and Managing Generative AI
In its July 2025 report, Generative AI Use and Management at Federal Agencies (GAO-25-107653), the Government Accountability Office (GAO) delivers a sweeping and data-rich assessment of how U.S. federal agencies are integrating generative artificial intelligence into their operations. The report reveals a ninefold increase in generative AI use cases among selected agencies from 2023 to 2024, reflecting both the technology’s promise and the policy and infrastructure hurdles that agencies continue to navigate. Authored by Candice N. Wright and Kevin Walsh, the report is part of an ongoing body of work on the federal government’s evolving AI landscape.
GAO reviewed 12 agencies that had publicly reported generative AI activity between 2023 and 2024. These included high-profile agencies such as the Departments of Veterans Affairs, Homeland Security, Defense, and Health and Human Services, as well as NASA, NSF, and others. Across this group, AI use cases nearly doubled overall, from 571 in 2023 to 1,110 in 2024, while generative AI use cases surged from 32 to 282. Mission-support operations accounted for the largest share of these new use cases, including applications in report summarization, chatbot development, and document translation. In health and medicine, the Department of Veterans Affairs is using generative AI to automate medical imaging analysis, while HHS is leveraging it to identify and contain potential poliovirus outbreaks.
Despite this rapid adoption, agencies face a host of obstacles. Chief among them is the tension between generative AI’s power and existing federal privacy, cybersecurity, and procurement frameworks, which were not built with this technology in mind. Officials from 10 of the 12 agencies said that existing policy presented potential roadblocks, particularly where classified or sensitive data are involved. The fast pace of innovation has also made it difficult for agencies to maintain current and flexible policies on appropriate use. Four agencies specifically noted this challenge, with GSA and HHS highlighting the problem of policy obsolescence in the face of emerging capabilities.
Agencies are also constrained by insufficient technical infrastructure and tight budgets. High-performance computing resources, essential for training and running generative AI models, remain limited. DOD and NASA, for example, reported that a lack of computational capacity prevents them from fully exploiting the technology. Procurement delays tied to compliance requirements such as FedRAMP authorization further complicate timely access to commercial generative AI tools. Meanwhile, workforce development remains a critical gap: six agencies noted difficulty recruiting and training AI-literate personnel, often losing top talent to the private sector.
To address these challenges, federal agencies are beginning to institutionalize frameworks and policies tailored to generative AI. Most are using tools like the NIST AI Risk Management Framework (AI RMF) and GAO’s own AI Accountability Framework to guide risk mitigation and promote trustworthy systems. DOE, for example, developed a generative AI reference guide to manage risks including hallucinations and deepfakes. Agencies are also formalizing policies on appropriate use, with 11 of the 12 reporting that they now have internal guidelines in place. These include limitations on inputting sensitive data into AI models and requirements for staff training on generative AI risks.
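To make the "appropriate use" guardrails concrete, the sort of limitation described above, screening prompts for sensitive data before they reach a generative AI model, can be sketched in a few lines. This is a purely illustrative example: the pattern list, function names, and policy logic are hypothetical and not drawn from any agency's actual guidelines or the GAO report.

```python
import re

# Hypothetical patterns for obviously sensitive data. Real agency policies
# would cover far more categories (PII, classified markings, PHI, etc.).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # U.S. Social Security number format
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def is_allowed(prompt: str) -> bool:
    """Allow a prompt only if no sensitive category is detected."""
    return not screen_prompt(prompt)
```

In practice, a guardrail like this would sit in front of the model API and be paired with the staff-training requirements the report describes, since pattern matching alone cannot catch every form of sensitive content.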
Importantly, collaboration is emerging as a key strategy. GAO details a number of promising cross-agency efforts. The Federal CIO Council, working with GSA, NSF, DOE, and others, is coordinating the creation of a government-wide AI resource-sharing platform. DOI is partnering with the USDA Forest Service to develop AI applications for wildfire prediction. GSA, meanwhile, has hosted hackathons to advance AI-powered improvements to digital services. These joint efforts demonstrate a shift toward enterprise-level thinking about AI, moving beyond agency silos to shared innovation.
GAO concludes that while significant progress has been made, further integration of frameworks, interagency coordination, and risk governance is essential. Agencies are just beginning to implement requirements from the April 2025 OMB Memorandum M-25-21, which calls for agencies to publish AI strategies, assess AI impacts, and publicly report waivers and risk management plans related to high-impact AI systems. Compliance with these requirements will be a defining test of whether agencies can manage generative AI responsibly while maintaining public trust.
Disclaimer: This blog post is a summary of GAO Report GAO-25-107653. While care has been taken to accurately reflect the content, it does not constitute legal advice or official government policy. Please refer to the original GAO report for authoritative guidance.