AI’s Impact: A Call for a Sustainable Information Ecosystem
In Beyond AI and Copyright: Funding a Sustainable Information Ecosystem, author Paul Keller tackles a critical issue that has emerged in the generative AI era: how societies can sustain human information production in a world where AI models increasingly mediate access to knowledge. Keller, co-founder of the Open Future think tank, presents a thoughtful, deeply reasoned critique of the current trajectory of AI development, challenging readers to consider what it means for AI systems to become the "repository of all human knowledge and culture."
Keller’s central concern is not speculative risks or human rights issues but the structural and economic impact of large AI models—particularly those trained on public information—on the broader information ecosystem. He argues that the commodification of digital public goods by a small number of dominant AI developers poses a dual threat: the erosion of public knowledge institutions and the consolidation of control over how societies access and interpret information.
The paper opens with a reflection on how AI has become a new kind of cultural and social technology, comparable to writing or printing in its capacity to reshape human access to knowledge. By absorbing and abstracting massive amounts of publicly accessible content, generative AI systems repackage the shared output of human creativity into proprietary services, often bypassing the institutions—like libraries, media, and archives—that have historically curated and validated that content.
Keller suggests that without deliberate intervention, the incentives to sustain those institutions will diminish. Yet paradoxically, such institutions remain essential for providing context, verification, and provenance, functions that generative models cannot reliably replicate. This tension, Keller notes, must be addressed through public policy rather than left to market dynamics.
A core proposal in the paper is the creation of a “Public AI” infrastructure: AI systems and services that are publicly funded, transparently governed, and designed to serve the public interest. Just as public broadcasting has long counterbalanced the influence of commercial media, Public AI could offer an alternative to the private AI giants currently shaping how we understand the world.
But Keller doesn’t stop at infrastructure. He calls for a bold, redistributive funding framework to ensure the long-term viability of information production. Recognizing the limitations of copyright law in addressing AI training on public domain and openly licensed content, Keller proposes a market-deployment levy. This would impose a fee or percentage of revenue on commercial AI systems when they are brought to market, with funds redirected to support creators, public service institutions, and the very information commons from which these systems derive their value.
This system would not only compensate traditional rightholders but also sustain less visible contributors such as Wikipedia editors, cultural institutions, open-access publishers, and public agencies, all of whom currently receive no recompense for their foundational role in the AI ecosystem. Such a model, Keller argues, shifts the conversation from narrow copyright enforcement toward a more equitable, systemic redistribution of value.
The paper closes by emphasizing the need to avoid repeating the mistakes of the early internet era, when the public ceded control of key information platforms to commercial interests. Quoting Microsoft CTO Kevin Scott, Keller calls for a “new deal” for the digital age—one that protects and funds the ecosystems upon which democratic societies rely for knowledge, culture, and civic engagement.
Disclaimer: This summary is provided for informational purposes only. It is not legal advice, and accuracy is not guaranteed. Please refer to the original publication for authoritative content.