5 May 2026

Beyond regulation: why Europe must lead AI stewardship in health

The Lancet Regional Health - Europe, Volume 65, June 2026, 101701


Authors: Luca Saba, Ignazio R. Marino

DOI: https://doi.org/10.1016/j.lanepe.2026.101701

License: Creative Commons BY-NC-ND 4.0


Artificial intelligence (AI) is already influencing how European health systems diagnose disease, allocate scarce resources, prioritise patients, and support clinical decisions.1,2 Europe has led globally on ethics and regulation, most notably through the EU AI Act, which establishes an essential framework for safety, transparency, and accountability.3 But regulation alone will not determine whether AI strengthens European health systems or leaves them dependent on infrastructures, models, and standards designed elsewhere. The next step is stewardship: building and governing AI capacities in the public interest, with health as a central priority.

In health, the limits of a regulation-only approach are especially clear. Rules can constrain harmful uses, but they do not create high-quality datasets, computing capacity, interoperable evaluation systems, or trustworthy reference models. Nor do they ensure that hospitals, universities, and public-health agencies can access the tools needed to innovate responsibly. If Europe remains mainly a regulator while others build, it risks becoming a rule-maker without real operational sovereignty in one of the most sensitive domains of public life.4–6 The main priorities for a European approach to AI stewardship in health are outlined in Table 1.

Table 1. Key priorities for European stewardship of artificial intelligence in health

| Priority for European AI stewardship in health | Why it matters | Practical action |
| --- | --- | --- |
| Federated health data spaces | Enable research while protecting privacy and trust | Interoperable, privacy-preserving cross-border data frameworks |
| Independent evaluation | Prevent unsafe or non-generalisable tools from entering care | European hubs for bias, robustness, and clinical safety testing |
| Open reference infrastructure | Reduce dependency and create trustworthy baselines | Auditable models, benchmarks, and tools for imaging, triage, and reporting |
| Equitable access to compute | Avoid widening gaps between institutions and regions | Shared compute and technical support for public-interest health projects |
| Inclusive governance | Align AI deployment with clinical reality and patient values | Governance involving clinicians, patients, ethicists, and public-health agencies |

The table summarises the main operational domains through which Europe could move from a predominantly regulatory role to an active stewardship role in health AI, including data infrastructure, independent evaluation, open reference tools, equitable access to computing resources, and inclusive governance.

Health should therefore become the clearest expression of European AI stewardship. Europe already possesses many of the ingredients of a public-interest AI ecosystem: strong health systems, rich clinical data, scientific talent, public institutions, and emerging digital infrastructures. What is still missing is integration around a clear mission. Federated electronic health record initiatives show that cross-border, privacy-preserving data use is possible.7 Europe should now consolidate these efforts into robust health data spaces that support both research and care while respecting fundamental rights. Such infrastructures should not be viewed simply as technical assets, but as strategic public goods. They can determine who generates evidence, who audits performance, and who retains the capacity to adapt AI to local clinical realities.

A second priority is evaluation. AI tools should not enter routine care merely because they perform well on internal company benchmarks or narrow retrospective datasets. Independent and publicly accountable evaluation must become a core health function. Models need to be tested for bias, robustness, transportability, and real-world clinical safety before large-scale deployment.1,7 Europe should support interconnected evaluation hubs capable of validating algorithms across populations, hospitals, and national settings. This would protect patients, assist clinicians, and strengthen public trust.

A third priority is open and auditable reference infrastructure. Debate around AI is often framed as a choice between market dynamism and public oversight, but health requires both.8 Europe should support transparent baseline models, shared benchmarks, and open tools for high-value tasks such as imaging, triage, reporting, and population-health surveillance. These resources would not replace private innovation; they would set standards, reduce duplication, and ensure that academic centres, public hospitals, and small and medium-sized enterprises are not excluded from progress because they lack proprietary infrastructure.4,8

Governance will determine legitimacy. AI in health cannot be governed only by regulators, corporations, or technical experts. Stewardship must include clinicians, patients, ethicists, public-health agencies, and civil society. It should also recognise that AI increasingly affects not only workflow efficiency but human judgement itself, influencing how decisions are framed and made.2

In this sense, health is where questions of accountability, trust, and human oversight become concrete. Strategic investment in shared health AI infrastructure is therefore not only a matter of competitiveness, but of resilience, autonomy, and democratic responsibility.9 Europe has already shown that it can lead on values, rights, and regulation. It should now lead on stewardship in health: building the datasets, evaluation systems, compute access, and reference tools that make AI safe, effective, and fair in practice. The real risk is not simply economic dependency, but the gradual outsourcing of clinical standards, decision architectures, and public priorities. A European approach to AI in health should ensure that innovation remains aligned with patients, professionals, and the public good.8

Contributors

Luca Saba conceived the commentary and wrote the first draft. Ignazio R. Marino contributed to the conceptual framing and critically revised the manuscript for important intellectual content. Both authors contributed to the interpretation of the arguments, approved the final version, and accept responsibility for the decision to submit the manuscript for publication.

Declaration of interests

Luca Saba declares no competing interests. Ignazio R. Marino declares no competing interests.

References

  1. A.K. Denniston, X. Liu. Responsible and evidence-based AI: 5 years on. Lancet Digit Health, 6 (2024), pp. e305-e307. DOI: 10.1016/S2589-7500(24)00071-2

  2. H. Campbell, S. Goldman, P.M. Markey. Artificial intelligence and human decision making: exploring similarities in cognitive bias. Comput Hum Behav, 4 (2025), Article 100138. DOI: 10.1016/j.chbah.2025.100138

  3. C. Cancela-Outeda. The EU's AI act: a framework for collaborative governance. Internet Things, 27 (2024), Article 101291. DOI: 10.1016/j.iot.2024.101291

  4. P. Taborsky, I. Colonnelli, K. Kurowski, et al. Towards a European HPC/AI ecosystem: a community-driven report. Procedia Comput Sci, 255 (2025), pp. 140-149. DOI: 10.1016/j.procs.2025.02.269

  5. European Commission, Joint Research Centre. JRC136359. https://publications.jrc.ec.europa.eu/repository/bitstream/JRC136359/JRC136359_01.pdf

  6. D. Mügge. EU AI sovereignty: for whom, to what end, and to whose benefit? J Eur Public Policy, 31 (2024), pp. 2200-2225. DOI: 10.1080/13501763.2024.2318475

  7. R. Raab, A. Küderle, A. Zakreuskaya, et al. Federated electronic health records for the European Health Data Space. Lancet Digit Health, 5 (2023), pp. e840-e847. DOI: 10.1016/S2589-7500(23)00156-5

  8. P. Verdegem. Dismantling AI capitalism: the commons as an alternative to the power concentration of big tech. AI Soc, 39 (2022), pp. 1-11. DOI: 10.1007/s00146-022-01437-8

  9. A. Ilcic, M. Fuentes, D. Lawler. Artificial intelligence, complexity, and systemic resilience in global governance. Front Artif Intell, 8 (2025), Article 1562095. DOI: 10.3389/frai.2025.1562095

© 2026 The Author(s). Published by Elsevier Ltd.