Sharing Trustworthy AI Models Through Privacy-Enhancing Technologies: OECD’s Roadmap for Collaborative and Confidential AI
In its June 2025 publication Sharing Trustworthy AI Models with Privacy-Enhancing Technologies, the OECD presents a forward-looking, policy-relevant analysis of how privacy-enhancing technologies (PETs) can enable secure, trustworthy, and collaborative artificial intelligence (AI) development. Authored by Christian Reimsbach-Kounatze and Shinya Ishikawa, the report draws on OECD expert workshops held in 2024 in collaboration with the UK, Estonia, and Singapore. It synthesizes best practices, use case archetypes, and policy strategies to promote PET adoption within the AI lifecycle.
The report identifies two foundational use case archetypes. The first involves enhancing AI performance through minimal and confidential use of input and test data, where no single organization has access to sufficient data diversity or volume. In this scenario, PETs like federated learning (FL), secure multi-party computation (MPC), and trusted execution environments (TEEs) allow for collaborative data use while protecting confidentiality. Synthetic data and differential privacy add further safeguards, enabling model testing and development without compromising privacy or intellectual property.
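To make the first archetype concrete, here is a minimal federated-averaging sketch in Python, assuming a toy linear-regression task split across three hypothetical organizations. Only weight vectors leave each party; the raw records never do. The datasets, learning rate, and round count are illustrative assumptions rather than anything specified in the report, and a production deployment would layer secure aggregation, TEEs, or differential privacy on top.

```python
# Minimal federated-averaging sketch (illustrative; not taken from the report).
# Each "party" trains locally and shares only its weight vector with a coordinator.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One party's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three parties hold private datasets drawn from the same underlying relationship.
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    parties.append((X, y))

# Federated rounds: the coordinator sees weight vectors, never raw data.
global_w = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(local_ws, axis=0)  # unweighted average of local models

print("recovered weights:", global_w)  # approaches [2.0, -1.0]
```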
The second archetype concerns the co-creation and sharing of AI models across multiple organizations. Unlike the first archetype, where a single entity owns the model, here the development process is distributed and collaborative. This creates new confidentiality challenges: not only must sensitive training data be protected, but so must the model itself, which could be reverse engineered to reveal personal or proprietary information. PETs in this context are deployed to protect model weights, prevent data leakage, and ensure secure inferencing, with homomorphic encryption (HE) and differentially private outputs providing added assurance.
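One way to see how co-created models can be combined without exposing any single contributor is a simplified secure-aggregation step built from pairwise additive masks, an MPC-style building block. In the sketch below, each masked update is meaningless on its own, yet the masks cancel when the coordinator sums them. The number of parties, the vector size, and the use of plain random masks are simplifying assumptions; real protocols derive masks from cryptographic key agreement and handle dropped-out participants.

```python
# Simplified secure-aggregation sketch using pairwise additive masks
# (illustrative; real protocols add key agreement, PRG-derived masks, and dropout handling).
import numpy as np

rng = np.random.default_rng(1)
n_parties, dim = 3, 4

# Each party holds a private model update it does not want to reveal.
updates = [rng.normal(size=dim) for _ in range(n_parties)]

# For every pair (i, j), party i adds a shared mask and party j subtracts it,
# so the masks cancel in the aggregate but hide each individual contribution.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_parties) for j in range(i + 1, n_parties)}

masked = []
for i in range(n_parties):
    m = updates[i].copy()
    for j in range(n_parties):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)  # only this masked vector is sent to the coordinator

aggregate = np.sum(masked, axis=0)  # pairwise masks cancel out
assert np.allclose(aggregate, np.sum(updates, axis=0))
print("aggregate update:", aggregate)
```

Homomorphic encryption serves a related purpose for secure inferencing, but a faithful example requires a dedicated cryptographic library and is beyond the scope of this summary.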
Real-world examples underscore the relevance of both archetypes. Financial institutions use TEEs to securely analyze client data in forecasting models. Apple's implementation of FL for Siri voice recognition illustrates local model training with differential privacy to protect user data. In cancer research, FL combined with TEEs enables hospitals to collaboratively train models on pathology images without transferring patient data. Meanwhile, marketing firms and banks have employed MPC and HE to co-develop models while preserving customer confidentiality.
Despite their promise, PETs face limitations. Homomorphic encryption remains computationally expensive. MPC can introduce communication latency and coordination overhead. Differential privacy requires careful calibration to balance privacy against data utility. Synthetic data, though often framed as privacy-preserving, may amplify biases or permit re-identification if poorly designed. The report cautions that PETs are not panaceas but part of a broader privacy-by-design strategy.
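The calibration trade-off for differential privacy is easiest to see with the classic Laplace mechanism, sketched below under assumed parameters: for a counting query the noise scale is 1/ε, so a smaller privacy budget ε gives stronger protection but a noisier, less useful answer. The dataset and ε values are purely illustrative.

```python
# Laplace-mechanism sketch for a counting query (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(2)

def dp_count(records, predicate, epsilon):
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# A sensitive dataset that is never released directly.
ages = [34, 45, 29, 61, 52, 38, 47, 55]

# Smaller epsilon -> stronger privacy guarantee, noisier (less useful) answer.
for eps in (0.1, 1.0, 10.0):
    noisy = dp_count(ages, lambda a: a > 40, eps)
    print(f"epsilon={eps}: noisy count of records over 40 = {noisy:.2f}")
```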
The OECD calls on governments to proactively support PET adoption through regulatory sandboxes, innovation contests, public procurement, and targeted R&D funding. Case studies from Singapore, Norway, and the UK show how regulators can provide safe testing environments while developing policy guidance. In parallel, joint prize challenges run by the United States and the United Kingdom incentivized federated learning solutions for pandemic forecasting and financial crime detection.
The report also advocates for standardized, archetype-based use cases to drive convergence in data governance and AI regulation. By focusing on PET functionalities rather than specific technologies, the OECD aims to make privacy-respecting AI development more adaptable across jurisdictions and sectors. This approach supports compliance with data protection laws, protects trade secrets, and fosters cross-border data collaboration in critical fields such as health, finance, and cybersecurity.
Ultimately, the OECD’s analysis positions PETs as foundational tools for reconciling innovation with privacy in the age of AI. Their proper implementation is not merely a technical issue—it is central to building trust, enhancing interoperability, and securing the ethical deployment of AI worldwide.
Disclaimer: This blog post is a summary of the OECD publication Sharing Trustworthy AI Models with Privacy-Enhancing Technologies (June 2025, No. 38). While every effort has been made to accurately reflect the report’s content, this post does not constitute legal advice or an official OECD position. Readers are encouraged to consult the full report for authoritative guidance.