EU AI Act explained: what healthcare organisations need to know

A comprehensive guide to the EU AI Act for healthcare organisations. Understand risk classifications, compliance timelines, and obligations for deployers and providers.

The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework for regulating artificial intelligence in the European Union, adopted by the European Parliament in March 2024 and published in the Official Journal of the European Union in July 2024. For healthcare organisations, it adds a new layer of legally binding obligations on top of existing frameworks. Hospitals, GP practices, and health technology vendors operating in or supplying to EU markets need to understand what the Act requires, when those requirements apply, and who is responsible for meeting them.

What is the EU AI Act?

The EU AI Act establishes a unified legal framework for artificial intelligence systems used across all sectors in the European Union. Its stated purpose is to ensure that AI systems placed on or used in the EU market are safe, transparent, traceable, non-discriminatory, and subject to human oversight. The Act applies not only to organisations based in the EU but also to providers and deployers outside the EU whose AI systems affect people within it, giving it significant extraterritorial reach.

The regulation is structured around a risk-based classification system, with obligations scaled to the potential harm an AI system could cause. It also introduces new governance structures, including the European AI Office, which sits within the European Commission and oversees enforcement of rules relating to general-purpose AI (GPAI) models. A general-purpose AI model is a large AI model trained on broad datasets that can perform a wide range of tasks.

When does the EU AI Act come into force?

The Act entered into force on 1 August 2024, but obligations are being introduced through a phased implementation schedule rather than a single activation date. Healthcare decision-makers should be aware of the following key milestones:

  • February 2025: Prohibitions on unacceptable-risk AI systems apply. The AI literacy obligation for all organisations deploying AI also takes effect from this date, requiring staff who work with AI systems to have sufficient knowledge to use them appropriately.

  • August 2025: Rules governing general-purpose AI models, including foundation models underpinning many clinical AI tools, become applicable.

  • August 2026: The core obligations for high-risk AI systems, including conformity assessments, technical documentation, and human oversight requirements, apply in full. This is the critical compliance deadline for most healthcare AI.

  • August 2027: An extended transition period applies specifically to AI systems already regulated as medical devices under the Medical Device Regulation (MDR) or the In Vitro Diagnostic Regulation (IVDR) that require Notified Body assessment. These systems have an additional year to meet the requirements that apply to high-risk systems classified under Article 6(1).

The European Commission has proposed adjustments to the high-risk rules timeline through the Digital Omnibus initiative, which addresses implementation challenges including delays in harmonised standards development. Healthcare organisations should monitor this process, as it may affect when certain compliance documentation requirements become enforceable in practice.
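
Compliance teams sometimes encode these deadlines as structured data so internal tooling can flag which obligations already apply. A minimal Python sketch of that idea, using the Act's published application dates (the function and variable names are illustrative, not from any official tool):

    from datetime import date

    # Key EU AI Act application dates relevant to healthcare, as described above.
    MILESTONES = [
        (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI; AI literacy obligation"),
        (date(2025, 8, 2), "Rules for general-purpose AI (GPAI) models"),
        (date(2026, 8, 2), "Core high-risk obligations: conformity assessment, "
                           "technical documentation, human oversight"),
        (date(2027, 8, 2), "Extended deadline for MDR/IVDR devices requiring "
                           "Notified Body assessment"),
    ]

    def obligations_in_force(today: date) -> list[str]:
        """Return every milestone description whose application date has passed."""
        return [what for when, what in MILESTONES if when <= today]

    for item in obligations_in_force(date.today()):
        print("In force:", item)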

Why healthcare is a priority sector under the Act

Healthcare is explicitly identified as a high-risk domain in the AI Act because AI systems used in clinical settings can directly affect patient safety, access to care, and fundamental rights. An AI system that influences a diagnostic decision, recommends a treatment pathway, or triages patients based on predicted urgency carries material potential for harm if it is inaccurate, biased, or used without adequate human oversight.

The European Commission's public health guidance on AI in healthcare notes that high-risk AI systems intended for medical purposes must satisfy requirements around risk mitigation, data quality, transparency, and human oversight. This reflects a broader recognition that the stakes in healthcare differ fundamentally from those in sectors such as marketing or logistics.

A comparative analysis of AI governance frameworks across five jurisdictions found that risk classification schemes for healthcare AI are converging internationally, with the EU approach serving as an influential model, particularly in how it distinguishes between AI that supports decision-making and AI that could autonomously influence clinical outcomes.

How the EU AI Act classifies AI systems: the risk tiers

The Act organises AI systems into four risk categories, each carrying a different level of regulatory obligation.

Unacceptable risk

AI systems that pose a clear threat to fundamental rights or safety are prohibited outright. Examples include social scoring systems used by public authorities and AI that exploits psychological vulnerabilities. These prohibitions applied from February 2025.

High risk

AI systems that pose significant risks to health, safety, or fundamental rights, but where the benefits may justify their use under strict conditions. These systems must meet extensive compliance requirements before deployment. Healthcare AI falls predominantly into this category.

Limited risk

AI systems with specific transparency obligations, such as chatbots that must disclose they are AI. Many patient-facing digital tools fall here.

Minimal risk

AI systems with no specific regulatory requirements under the Act, such as spam filters or AI used in video games.

A separate category covers general-purpose AI models. These large foundation models underpin many clinical language tools and carry their own obligations regardless of the risk tier of the downstream application built on them.
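
For governance tooling, the four tiers map naturally onto a small lookup table; GPAI models would need a separate track, since their obligations apply regardless of tier. A minimal sketch, with obligation summaries paraphrased from the descriptions above (nothing here is official Commission code):

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # extensive pre-market and operational duties
        LIMITED = "limited"            # transparency obligations only
        MINIMAL = "minimal"            # no specific obligations under the Act

    # Paraphrased obligation summaries, keyed by tier.
    TIER_OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market or used.",
        RiskTier.HIGH: ("Conformity assessment, technical documentation, "
                        "human oversight, post-market monitoring."),
        RiskTier.LIMITED: "Must disclose to users that they are interacting with AI.",
        RiskTier.MINIMAL: "No specific requirements under the Act.",
    }

    def summarise(tier: RiskTier) -> str:
        return f"{tier.value}: {TIER_OBLIGATIONS[tier]}"

    print(summarise(RiskTier.HIGH))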

Which healthcare AI systems are classified as high risk?

The AI Act provides two routes to high-risk classification. Under Article 6(1), an AI system is high risk if it is a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex I, which includes the MDR and IVDR, and is subject to third-party conformity assessment. This route captures most clinical AI: systems used as safety components of medical devices, or standalone AI systems that are themselves medical devices. Annex III then lists additional standalone high-risk use cases, including AI used to evaluate eligibility for healthcare services. In practice, the Act covers AI systems used for:

  • Diagnosis and clinical decision support: including AI tools that assist in identifying diseases, interpreting medical imaging, or recommending diagnostic pathways

  • Treatment recommendations: AI systems that suggest or prioritise treatment options for individual patients

  • Patient triage: systems that allocate clinical priority or urgency

  • Patient monitoring: AI tools that continuously assess patient status and flag deterioration

A peer-reviewed analysis in npj Digital Medicine found that approximately 75 per cent of commercial AI-enabled medical devices are in radiology, and all but one are classified as Class IIa or above under the MDR, meaning the majority of deployed clinical AI will be treated as high-risk under the AI Act.

AI-powered virtual health assistants and clinical chatbots occupy a more nuanced position. Where they provide clinical information that could influence patient decisions, they may attract high-risk classification. Where they function primarily as communication interfaces, limited-risk transparency obligations may apply instead.

What the EU AI Act means for AI medical devices and MDR overlap

One of the most complex aspects of the AI Act for healthcare organisations is its relationship with existing medical device regulation. For further background on why AI documentation tools need to fall under the MDR framework, see this analysis. AI systems already regulated as medical devices under the MDR or IVDR are subject to both frameworks simultaneously, and the obligations do not simply merge.

Reed Smith's legal analysis describes this as a dual compliance framework. Medical device AI systems classified as MDR Class IIa, IIb, or III, or IVDR Class A through D, will generally qualify as high-risk under the AI Act. The AI Act then adds requirements around data quality, data governance, record-keeping, transparency, accountability, and human oversight that go beyond what the MDR requires.

Organisations can integrate AI Act requirements into existing quality management system (QMS) documentation, and a single conformity assessment is permitted where the Notified Body is accredited under both frameworks. Healthcare organisations should not assume that MDR compliance is sufficient, however. The Hunton Andrews Kurth briefing is explicit that the AI Act introduces additional obligations, including incident reporting to Market Surveillance Authorities within 15 days for serious incidents, that have no direct MDR equivalent.

White & Case's analysis further notes that the AI Act, MDR, and the revised Product Liability Directive together form what they describe as a "regulatory triangle" tightening liability for manufacturers of AI-powered medical devices, a consideration relevant to procurement decisions as well as internal compliance planning.

Key compliance obligations for high-risk AI in healthcare

Organisations deploying or supplying high-risk AI systems in healthcare must satisfy a substantial set of requirements. The core obligations under the Act include:

Conformity assessment

High-risk AI systems must undergo a conformity assessment before being placed on the market or put into service. For AI medical devices already subject to Notified Body review under the MDR, this process can be integrated into the existing assessment pathway.

Technical documentation

Providers must maintain comprehensive technical documentation covering the system's design, development methodology, training data characteristics, performance metrics, and known limitations. This documentation must be kept up to date throughout the system's lifecycle.

Human oversight mechanisms

High-risk AI systems must be designed to allow human oversight, including the ability for users to monitor, understand, and where necessary override or stop the system's outputs.

Transparency obligations

Deployers must ensure that users are informed they are interacting with or relying on an AI system. For clinical tools, this includes clear disclosure of the AI's role in generating outputs such as diagnostic suggestions or clinical documentation.

Data governance

Training, validation, and testing data must meet quality standards. Data must be relevant, representative, and free from errors likely to produce discriminatory or unsafe outputs.

Post-market monitoring

Providers must implement systems to collect and analyse performance data after deployment, and report serious incidents to the relevant national authority.

AI literacy

From February 2025, all organisations deploying AI systems must ensure that staff working with those systems have sufficient knowledge to understand their capabilities and limitations. HIMSS has noted that this obligation applies across all risk tiers, not only to high-risk systems.

A scoping review published in npj Digital Medicine synthesising evidence on AI governance frameworks in healthcare organisations identified human oversight, transparency, and post-deployment monitoring as the most consistently cited components of effective governance, aligning closely with what the Act now mandates.

Who is responsible: AI providers vs deployers

The AI Act draws a clear legal distinction between two categories of obligated party.

Providers

Providers are organisations that develop an AI system, place it on the market, or put it into service under their own name or trademark. This includes AI vendors, software developers, and health technology companies.

Deployers

Deployers are organisations that use a high-risk AI system under their own authority in a professional context. In healthcare, this means hospitals, GP practices, integrated care systems, and any other organisation that uses a third-party AI tool in clinical or administrative workflows.

This distinction matters because both parties carry distinct and non-transferable obligations. Providers are responsible for ensuring their systems meet the technical and documentation requirements before deployment. Deployers are responsible for ensuring systems are used appropriately, that human oversight is in place, and that staff are sufficiently trained.

The Diagnostic and Interventional Radiology journal analysis is explicit that healthcare organisations acting as deployers cannot simply rely on vendor compliance. They must actively verify that the AI systems they use meet the Act's requirements and that internal processes support compliant use.

One important nuance: healthcare organisations that develop AI tools in-house for their own use may qualify for a limited exemption, but only under specific conditions, and the Act still requires adherence to data quality and transparency standards.

What human oversight actually means in a clinical context

The Act's human oversight requirement is not simply a legal formality. It reflects a substantive principle: that consequential decisions affecting patients should not be delegated entirely to automated systems.

In practice, meaningful human oversight in clinical workflows means:

  • A clinician reviewing AI-generated clinical documentation before it is saved to the patient record, rather than accepting it without review

  • A radiologist examining an AI-flagged finding and making an independent clinical judgement before acting on it

  • A triage nurse assessing an AI-generated priority score in the context of direct patient observation, rather than treating it as a definitive instruction

  • Clinical staff having the technical ability, and the organisational permission, to override or disregard an AI output when clinical judgement warrants it

A BMJ Health Care Informatics commentary proposing a risk-stratified approach to large language model governance in clinical practice argues that meaningful oversight requires not just technical mechanisms but organisational culture. Clinicians must feel empowered to question AI outputs, and governance structures must support that behaviour.

The Act requires that high-risk AI systems be designed with oversight mechanisms built in by default, not added as an afterthought. Healthcare organisations evaluating AI tools should assess whether the product's design genuinely supports clinician review, or whether workflow pressures make rubber-stamping AI outputs the path of least resistance.
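
In software terms, one way to make review unavoidable rather than optional is to model AI output as a draft that cannot reach the patient record without an explicit clinician action. A minimal sketch of that pattern under hypothetical names; no real EHR API is assumed:

    from dataclasses import dataclass

    @dataclass
    class DraftNote:
        """An AI-generated clinical note held in a pending state."""
        patient_id: str
        text: str
        ai_generated: bool = True
        reviewed_by: str | None = None

        def approve(self, clinician_id: str, amended_text: str | None = None) -> None:
            """Record an explicit clinician sign-off, optionally with edits."""
            if amended_text is not None:
                self.text = amended_text
            self.reviewed_by = clinician_id

    def commit_to_record(note: DraftNote) -> None:
        # The gate: an unreviewed AI draft can never reach the patient record.
        if note.ai_generated and note.reviewed_by is None:
            raise PermissionError("AI-generated note requires clinician review")
        print(f"Saved note for {note.patient_id}, signed off by {note.reviewed_by}")

    note = DraftNote(patient_id="P-001", text="Draft consultation summary")
    note.approve(clinician_id="dr-jones", amended_text="Corrected consultation summary")
    commit_to_record(note)

The design point is organisational as much as technical: the default path forces review, rather than leaving it as an optional extra click that workflow pressure will erode.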

Data governance and GDPR: how the two frameworks interact

The AI Act introduces data governance obligations that sit alongside, but do not replace, existing requirements under the General Data Protection Regulation (GDPR). Healthcare organisations operating under both frameworks need to understand that GDPR compliance does not automatically satisfy the AI Act's data requirements.

Where GDPR focuses on the lawful processing of personal data, the AI Act's data governance provisions focus specifically on the quality and appropriateness of data used to train, validate, and test AI systems. The Act requires that training datasets be:

  • Relevant and sufficiently representative of the intended use case

  • Free from errors and completeness gaps that could cause unsafe outputs

  • Examined for potential biases that could lead to discriminatory outcomes in clinical contexts

The npj Digital Medicine peer-reviewed commentary highlights that AI systems trained on datasets that under-represent certain patient populations, by age, ethnicity, sex, or comorbidity profile, can produce systematically worse outputs for those groups, with direct patient safety implications. The Act's data governance requirements are designed to address this risk.
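
One concrete way a provider might operationalise the representativeness requirement is to compare subgroup proportions in the training data against a reference population and flag large deviations. A minimal sketch, with illustrative group labels and an assumed tolerance threshold (the Act prescribes neither):

    def representation_gaps(train_counts: dict[str, int],
                            reference_share: dict[str, float],
                            tolerance: float = 0.05) -> list[str]:
        """Flag subgroups whose share of the training data deviates from the
        reference population share by more than `tolerance` (absolute)."""
        total = sum(train_counts.values())
        flags = []
        for group, ref in reference_share.items():
            share = train_counts.get(group, 0) / total
            if abs(share - ref) > tolerance:
                flags.append(f"{group}: train {share:.1%} vs reference {ref:.1%}")
        return flags

    # Illustrative example: age bands in an imaging training set compared
    # with the population the tool is intended to serve.
    train = {"18-40": 5200, "41-65": 3600, "65+": 1200}
    reference = {"18-40": 0.35, "41-65": 0.35, "65+": 0.30}
    for warning in representation_gaps(train, reference):
        print("Representation gap:", warning)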

Data residency is a further consideration. Healthcare organisations procuring AI tools from non-EU vendors should confirm where patient data is processed and stored, both to satisfy GDPR requirements and to ensure compliance with the AI Act's transparency and accountability provisions. The European Health Data Space (EHDS), which entered into force in 2025, adds a further layer of data governance applicable to health data specifically.

What healthcare organisations should be doing now

Given the phased implementation timeline, healthcare organisations have a defined window to prepare. The following steps represent the core of a practical compliance programme:

  • Audit current AI tools: Identify every AI system in use across the organisation, including tools embedded in medical record systems, diagnostic software, administrative automation, and patient-facing applications

  • Assess risk classifications: For each identified system, determine whether it falls into the high-risk category under Article 6 and Annex III of the Act, or whether it carries limited-risk transparency obligations

  • Review vendor contracts: Examine agreements with AI suppliers to understand how compliance obligations are allocated. Contracts should specify which party is responsible for conformity assessments, technical documentation, and post-market monitoring

  • Appoint an AI governance lead: Designate internal ownership of AI Act compliance. In larger organisations, this may sit within a clinical informatics, legal, or information governance function; in smaller practices, it may require external support

  • Prepare documentation: Begin compiling or requesting the technical documentation required for high-risk systems, including evidence of conformity assessments, data quality assurances, and human oversight mechanisms

  • Implement AI literacy training: The AI literacy obligation has applied since February 2025. Organisations that have not yet addressed this should prioritise training for clinical and administrative staff who interact with AI tools

  • Establish incident reporting processes: Put in place internal procedures for identifying and escalating AI-related incidents, aligned with the Act's requirement to report serious incidents to national authorities within 15 days

A comparative governance study across five jurisdictions found that organisations which proactively developed internal AI governance structures before regulatory deadlines were better positioned to meet compliance requirements than those that waited for enforcement to begin.
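
Several of these steps lend themselves to lightweight internal tooling, for example a structured inventory of AI systems that also computes the 15-day serious-incident reporting deadline. A minimal sketch; the field names and tier labels are an assumed internal convention, not a mandated schema:

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class AISystemRecord:
        """One entry in an internal AI system inventory."""
        name: str
        vendor: str
        risk_tier: str            # e.g. "high", "limited", "minimal"
        is_medical_device: bool   # regulated under MDR/IVDR?
        conformity_assessed: bool
        oversight_documented: bool

        def open_compliance_items(self) -> list[str]:
            """List outstanding actions implied by the checklist above."""
            items = []
            if self.risk_tier == "high" and not self.conformity_assessed:
                items.append("conformity assessment outstanding")
            if self.risk_tier == "high" and not self.oversight_documented:
                items.append("human oversight procedure not documented")
            return items

    def incident_report_deadline(identified_on: date) -> date:
        """Serious incidents must be reported to the authority within 15 days."""
        return identified_on + timedelta(days=15)

    tool = AISystemRecord("TriageAssist", "ExampleVendor", "high", True, False, True)
    print(tool.open_compliance_items())                 # ['conformity assessment outstanding']
    print(incident_report_deadline(date(2026, 9, 1)))   # 2026-09-16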

How to evaluate AI vendors for EU AI Act compliance

Procurement and clinical informatics teams evaluating AI products should treat EU AI Act compliance as a standard due diligence requirement, alongside clinical evidence and data security. Key questions to put to vendors include:

  • Risk classification: How does the vendor classify this system under the AI Act? What evidence supports that classification?

  • Conformity assessment: Has the system undergone a conformity assessment? Where a Notified Body is required, which body conducted it?

  • Technical documentation: Can the vendor provide full technical documentation, including training data characteristics, performance benchmarks, and known limitations?

  • CE marking: For AI medical devices, is CE marking in place under the MDR or IVDR, and has it been updated to reflect AI Act requirements?

  • Human oversight by design: How does the product's design support clinician review and override? Can the vendor demonstrate this in a live workflow?

  • Data residency: Where is patient data processed and stored? Does this meet GDPR and EHDS requirements?

  • Post-market monitoring: What processes does the vendor have in place for ongoing performance monitoring and incident reporting?

  • GPAI model dependencies: If the product is built on a foundation model, what obligations apply to that model under the GPAI provisions, and how does the vendor manage them?

The AI Act Single Information Platform (SIP) compliance checker, maintained by the European Commission, provides a structured tool for assessing where a given AI system sits within the regulatory framework, a useful starting point for procurement teams.
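
Procurement teams sometimes turn a question set like this into structured data so vendor responses can be compared side by side. A minimal sketch, with question keys paraphrased from the list above and an illustrative completeness score:

    # Due-diligence questions from the list above, keyed for comparison.
    QUESTIONS = [
        "risk_classification", "conformity_assessment", "technical_documentation",
        "ce_marking", "human_oversight", "data_residency",
        "post_market_monitoring", "gpai_dependencies",
    ]

    def completeness(responses: dict[str, str]) -> float:
        """Share of questions with a substantive (non-empty) vendor response."""
        answered = sum(1 for q in QUESTIONS if responses.get(q, "").strip())
        return answered / len(QUESTIONS)

    vendor_a = {
        "risk_classification": "High risk via Article 6(1); MDR Class IIb",
        "conformity_assessment": "Completed with a Notified Body",
        "data_residency": "EU-only processing",
    }
    print(f"Vendor A answered {completeness(vendor_a):.0%} of the checklist")  # 38%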

Penalties for non-compliance

The AI Act's penalty structure is tiered to reflect the severity of the violation (a worked example of the arithmetic follows the list):

  • Up to €35 million or 7 per cent of global annual turnover (whichever is higher) for violations involving prohibited AI systems

  • Up to €15 million or 3 per cent of global annual turnover (whichever is higher) for violations of other obligations, including high-risk system requirements

  • Up to €7.5 million or 1.5 per cent of global annual turnover (whichever is higher) for providing incorrect or misleading information to authorities
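
These caps use a "whichever is higher" rule, so effective exposure scales with turnover for large organisations. A short Python sketch of the arithmetic, using the tier values above:

    def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
        """Fine ceiling: the higher of a fixed cap and a share of global turnover."""
        return max(cap_eur, pct * turnover_eur)

    # Example: a vendor with 2 billion euro global annual turnover breaching
    # high-risk obligations (tier: 15 million euros or 3 per cent).
    print(f"{max_fine(2_000_000_000, 15_000_000, 0.03):,.0f}")  # 60,000,000

For most hospital-scale deployers the fixed cap is the binding figure; for large multinational vendors, the percentage of turnover usually is.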

Enforcement will be handled at member state level through designated national supervisory authorities. The European AI Office will have oversight responsibility for GPAI model compliance. Penalties can apply to deployers, not only to AI developers and vendors, where the deployer has failed to meet its own obligations under the Act.

White & Case's liability analysis notes that the withdrawal of the standalone AI Liability Directive in February 2025 means that civil liability claims will be channelled through the revised Product Liability Directive rather than a dedicated AI liability instrument. This does not reduce the financial exposure for organisations found in breach of the AI Act itself.

Glossary: key EU AI Act terms for healthcare professionals

Provider

An organisation or individual that develops an AI system, places it on the EU market, or puts it into service under their own name. Includes AI vendors and health technology companies.

Deployer

An organisation or individual that uses a high-risk AI system in a professional context under their own authority. Hospitals, GP practices, and other healthcare organisations using third-party AI tools are deployers.

High-risk AI system

An AI system listed in Annex III of the Act, or used as a safety component in a product regulated under sector-specific legislation such as the MDR. Subject to the Act's most extensive compliance obligations.

Conformity assessment

A formal process by which a provider demonstrates that a high-risk AI system meets the Act's requirements. Depending on the system type, a provider may conduct this itself or an accredited Notified Body may conduct it.

Post-market monitoring

An ongoing process by which providers collect and analyse data on the real-world performance of a deployed AI system, with the aim of identifying risks or performance degradation not apparent during pre-market assessment.

Fundamental rights impact assessment

A structured assessment that deployers of certain high-risk AI systems must conduct before deployment, evaluating the potential impact on fundamental rights including non-discrimination, privacy, and access to healthcare.

General-purpose AI model

A large AI model trained on broad datasets that can perform a wide range of tasks. Foundation models underpinning many clinical language tools fall into this category and carry specific obligations under the Act regardless of downstream application.

AI literacy

The knowledge and skills required to use AI systems appropriately, critically assess their outputs, and understand their limitations. Organisations must ensure relevant staff possess sufficient AI literacy from February 2025 onwards.

Technical documentation

The formal record a provider must maintain covering an AI system's design, development, data characteristics, testing methodology, and performance. Required for all high-risk AI systems and must be kept current throughout the system's operational life.

Frequently asked questions

▶ What is the EU AI Act and does it apply to healthcare organisations?

The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework for regulating artificial intelligence in the European Union. It applies to any organisation that places AI systems on the EU market or uses them to affect people within the EU, including hospitals, GP practices, and health technology vendors. It doesn't apply only to organisations based in the EU: its extraterritorial reach means non-EU suppliers are also covered if their systems affect EU patients.

▶ When do healthcare organisations need to comply with the EU AI Act?

Obligations are being introduced in phases. Prohibitions on unacceptable-risk AI systems and the AI literacy requirement applied from February 2025. Rules for general-purpose AI models apply from August 2025. The core obligations for high-risk AI systems, including conformity assessments and human oversight requirements, apply in full from August 2026. AI systems already regulated as medical devices under the Medical Device Regulation have an extended transition period until August 2027.

▶ Which healthcare AI systems are classified as high risk under the Act?

The Act classifies AI systems used for diagnosis, clinical decision support, treatment recommendations, patient triage, and patient monitoring as high risk. A peer-reviewed analysis in npj Digital Medicine found that approximately 75 per cent of commercial AI-enabled medical devices are in radiology and classified as Class IIa or above under the Medical Device Regulation, meaning the majority of deployed clinical AI will be treated as high risk under the Act.

▶ What's the difference between a provider and a deployer under the EU AI Act?

Providers are organisations that develop an AI system and place it on the market under their own name — this includes AI vendors and health technology companies. Deployers are organisations that use a high-risk AI system in a professional context, such as hospitals and GP practices using third-party AI tools. Both parties carry distinct obligations that can't be transferred to the other. Healthcare organisations acting as deployers can't simply rely on vendor compliance — they must actively verify that systems meet the Act's requirements and that internal processes support compliant use.

▶ Does MDR compliance satisfy the EU AI Act requirements for AI medical devices?

No. AI systems regulated as medical devices under the Medical Device Regulation are subject to both frameworks simultaneously. The AI Act adds requirements around data quality, data governance, record-keeping, transparency, accountability, and human oversight that go beyond what the Medical Device Regulation requires. Organisations can integrate AI Act requirements into existing quality management system documentation, but MDR compliance alone isn't sufficient.

▶ What does the human oversight requirement mean in practice for clinical staff?

Human oversight means that consequential decisions affecting patients shouldn't be delegated entirely to automated systems. In practice, it means a clinician reviews AI-generated clinical documentation before it's saved to the patient record, a radiologist makes an independent judgement on an AI-flagged finding, and a triage nurse assesses an AI-generated priority score against direct patient observation. Clinical staff must also have both the technical ability and the organisational permission to override an AI output when their clinical judgement warrants it.

▶ How does the EU AI Act interact with GDPR for healthcare organisations?

The AI Act introduces data governance obligations that sit alongside GDPR but don't replace it. GDPR compliance doesn't automatically satisfy the AI Act's data requirements. Where GDPR focuses on the lawful processing of personal data, the AI Act focuses specifically on the quality and appropriateness of data used to train, validate, and test AI systems. Training datasets must be relevant, representative, and examined for biases that could produce discriminatory or unsafe outputs for particular patient groups.

▶ What are the penalties for non-compliance with the EU AI Act?

The penalty structure is tiered. Violations involving prohibited AI systems can result in fines of up to €35 million or 7 per cent of global annual turnover, whichever is higher. Violations of high-risk system obligations can result in fines of up to €15 million or 3 per cent of global annual turnover. Providing incorrect information to authorities can result in fines of up to €7.5 million or 1.5 per cent of global annual turnover. Penalties can apply to deployers as well as providers where the deployer has failed to meet its own obligations.

▶ What should healthcare organisations be doing now to prepare for the EU AI Act?

Organisations should start by auditing every AI system currently in use, including tools embedded in medical record systems, diagnostic software, and patient-facing applications. Each system should be assessed against the Act's risk classifications. Vendor contracts should be reviewed to confirm how compliance obligations are allocated. The AI literacy obligation has applied since February 2025, so training for staff who interact with AI tools should be a priority if it hasn't already been addressed. Internal incident reporting processes aligned with the Act's 15-day serious incident reporting requirement should also be in place.

▶ What questions should procurement teams ask AI vendors about EU AI Act compliance?

Procurement teams should ask vendors how they classify their system under the Act and what evidence supports that classification. They should request confirmation of whether a conformity assessment has been completed and, where a Notified Body is required, which body conducted it. Full technical documentation, including training data characteristics and known limitations, should be available on request. Vendors should also be able to demonstrate how the product's design supports clinician review and override, confirm where patient data is processed and stored, and explain their post-market monitoring and incident reporting processes.

Start with Tandem today

Join thousands of healthcare professionals who enjoy stress-free documentation.
