AI tool contracts: what GP managers must verify first

Essential checks for GP practice managers before signing AI tool contracts: regulatory status, liability, data security, integration, and clinical evidence

Signing a contract for an AI tool can feel like the finish line of a procurement process, but for GP practice managers it's closer to the starting gun. Once a contract is signed and a tool is deployed, the clinical, legal, and regulatory obligations that attach to that deployment sit firmly with the practice, not with the vendor. AI deployment is outpacing robust real-world evaluation and regulation in ways that create genuine risk for practices that haven't done thorough due diligence. This guide sets out the specific questions practice managers should be asking, and the documents they should be reviewing, before any contract is signed.

Is the tool classified as a medical device, and what does that mean for you?

Not every AI tool used in a clinical setting is a medical device, but many are, and the distinction carries significant regulatory weight. In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) regulates software as a medical device. If an AI tool supports clinical diagnosis, triage decisions, or treatment recommendations, it's likely to fall within the definition of Software as a Medical Device (SaMD) and require UKCA marking under the UK Medical Devices Regulations 2002 (CE marking remains accepted in Great Britain under transitional arrangements).

Before signing, practice managers should:

  • Search the MHRA Public Access Registration Database (PARD) to confirm whether the tool is registered as a medical device

  • Ask the vendor directly for their MHRA registration status and device class

  • Check whether the tool has a NICE evaluation, a Medtech Innovation Briefing (MIB), Diagnostics Guidance (DG), or Early Value Assessment (EVA)

  • Confirm whether the tool carries a valid UKCA or CE mark if it's classified as a medical device

The CQC's GP Mythbuster 109 on AI, published in July 2025, makes clear that inspectors will check whether practices have verified MHRA registration where applicable and whether procurement was conducted against DCB0160 and Digital Technology Assessment Criteria (DTAC) standards. Tools that aren't classified as medical devices aren't exempt from scrutiny. They still carry data protection, clinical safety, and governance obligations.

Who holds clinical and legal liability if something goes wrong?

This is one of the most consequential questions in any AI tool contract, and vendor agreements frequently obscure it. The general position in NHS primary care is that NHS organisations may still be liable for AI-related claims even when using third-party tools, and that liability for using a non-compliant AI solution sits with the deploying organisation or individual clinician.

Vendors typically limit their liability to the software performing as described in the contract. They don't accept liability for clinical decisions made on the basis of AI outputs. This means that if an AI-generated clinical note contains an error that influences patient care, the practice, not the vendor, is likely to bear the clinical and legal consequences.

When reviewing contract terms, practice managers should look for:

  • Clear statements of where clinical liability sits, and whether the vendor accepts any responsibility for AI-generated outputs that enter the patient record

  • Indemnity clauses and whether the vendor's indemnity covers clinical incidents or is limited to software performance failures

  • Whether the vendor's terms require the practice to maintain human oversight of all AI outputs, and what documentation of that oversight is expected

  • Whether the practice's medical defence organisation or indemnity provider has been consulted about the tool's use

Surrey & Sussex LMCs advise practices to engage their Integrated Care Board (ICB) for assurance and to ensure all AI-generated outputs are checked by a clinician before entering the patient record. This human-in-the-loop requirement isn't merely good practice. It's a condition of safe deployment under current NHS guidance.

Where is patient data processed and stored?

Data residency is a foundational question in any clinical AI procurement, and vendor answers aren't always straightforward. Under the UK General Data Protection Regulation (UK GDPR) and the common law duty of confidentiality, patient data is special category data and its processing must be lawful, transparent, and proportionate. If patient data, including audio recordings of consultations, is transferred outside the UK or EU for processing or storage, additional safeguards are required.

NHS England's guidance on ambient scribing products confirms that a Data Protection Impact Assessment (DPIA) is highly likely to be a legal requirement before deployment, given the large-scale processing of special category health data involved. The NHS Transformation Directorate's information governance guidance reinforces this, noting that DPIAs must address the types of data being processed (including audio and transcripts), vendor data reuse for AI model training, and patient transparency obligations.

Key questions to ask vendors before signing:

  • Where is patient data processed? Is it in the UK, the EU, or a third country?

  • Who are the vendor's subprocessors, and where are they located?

  • Does the vendor use patient consultation data to train or improve their AI models, and if so, on what legal basis?

  • Is a Data Processing Agreement (DPA) included in the contract, and does it comply with UK GDPR Article 28?

  • What is the distinction between anonymised and pseudonymised data in the vendor's processing, and what is the residual re-identification risk?

Legal commentary from Spencer West LLP notes that the distinction between anonymised and pseudonymised data is particularly important here, as pseudonymised data retains a residual re-identification risk and can't be treated as outside the scope of data protection law.

What security certifications should the vendor hold?

Security certifications aren't a guarantee of security, but their absence is a meaningful signal about a vendor's maturity and investment in data protection. For clinical AI tools used in NHS primary care, there's a baseline of certifications and assessments that a credible vendor should be able to evidence.

Practices should expect vendors to hold or be actively working towards:

  • ISO 27001 — the international standard for information security management systems

  • NHS Data Security and Protection (DSP) Toolkit compliance — the NHS's own framework for assessing data security, which is a contractual requirement for organisations accessing NHS patient data

  • Cyber Essentials or Cyber Essentials Plus — the UK government's baseline cybersecurity certification

  • DTAC compliance — NHS England's framework covering data protection, technical security, interoperability, and clinical safety

The iatroX procurement guide for NHS buyers recommends that practices confirm a vendor has a completed, passing DTAC assessment before deployment. NHS England's ambient scribing guidance also flags cybersecurity risks specific to large language models (LLMs), including the potential for prompt injection attacks and data leakage through model outputs.

A vendor that can't produce evidence of these certifications, or that can't provide a current DSP Toolkit submission, should be treated with caution regardless of how compelling their product demonstration may be.

How does the tool integrate with your existing medical record system?

Most GP practices in England use either EMIS Web or SystmOne as their primary medical record system. The practical value of an AI tool, particularly an ambient scribe or documentation assistant, depends heavily on how well it integrates with the practice's existing system. Integration that requires manual copy-and-paste of AI-generated content into the medical record significantly increases the risk of transcription errors and removes much of the efficiency benefit.

Before signing, practice managers should establish:

  • Whether the tool has a native, certified integration with EMIS Web or SystmOne, or whether it relies on a workaround

  • Who is responsible for maintaining the integration when the medical record system provider releases updates: the AI vendor, the provider, or the practice

  • Whether the integration has been tested in a live GP environment, not just in a development or secondary care setting

  • What the fallback process is if the integration breaks during a clinical session

The iatroX guide to AI tools for UK GP practices highlights medical record system integration as one of the primary evaluation criteria for ambient scribes, noting that procurement decisions should account for total cost of ownership including the workflow adjustment required when integration is imperfect. Integration failures aren't just a technical inconvenience. In a clinical setting, they can create gaps in the patient record or introduce errors that carry patient safety implications.

What happens to your data if you end the contract?

Data portability and deletion at contract termination are areas where vendor contracts are frequently vague, and practices can find themselves in a difficult position if they haven't negotiated clear terms upfront. The questions to resolve before signing are:

  • How long does the vendor retain patient data after contract termination, and on what legal basis?

  • Can the practice export a complete copy of all patient data held by the vendor before termination?

  • What does "deletion on request" actually mean? Does it cover all copies, including backups and subprocessor data?

  • What is the timeline for confirmed deletion, and will the vendor provide written confirmation?

Under UK GDPR, data subjects have rights including the right to erasure, and the practice, as data controller, is responsible for ensuring those rights can be exercised even after a vendor relationship ends. Privacy and security throughout the health data lifecycle is a growing area of regulatory scrutiny, and practices that can't account for where patient data went after a vendor contract ended may face compliance exposure.

Contract terms that allow vendors to retain data for extended periods after termination for "product improvement" or "model training" purposes should be flagged for legal review before signing.

Is the AI tool trained on NHS or equivalent clinical data?

The training data used to develop an AI tool has a direct bearing on its clinical accuracy in a UK primary care context. A tool trained predominantly on US clinical data, for example, may perform poorly on UK-specific clinical coding (SNOMED CT as used in the NHS), drug names and dosing conventions, referral pathways, or the particular documentation style of GP consultations.

Clinician perspectives on AI in primary care consistently raise concerns about biases introduced through training data, and Laranjo et al. in the Lancet Primary Care have noted that rapid deployment ahead of robust evaluation raises concerns about unintended consequences on care quality. These aren't abstract risks. They translate directly into the accuracy of clinical notes, the appropriateness of suggested clinical codes, and the reliability of any decision support outputs.

When evaluating a vendor's training data claims, practice managers should ask:

  • Was the model trained on NHS or UK primary care data, and can the vendor provide documentation of this?

  • Has the model been validated on UK GP consultation data specifically?

  • Does the vendor publish a model card or equivalent technical documentation describing training data sources, known limitations, and performance benchmarks?

  • How does the model handle UK-specific clinical terminology, drug names, and referral conventions?

Vendors should be able to answer these questions with documented evidence, not marketing claims. Where a vendor can't provide this information, it's a meaningful gap in their clinical evidence base.

Who in the practice is responsible for overseeing the tool?

AI tool governance isn't a one-time procurement decision. It's an ongoing operational responsibility. The CQC's GP Mythbuster 109 sets out what inspectors will look for, including the appointment of a Clinical Safety Officer (CSO), the maintenance of a risk assessment and hazard log, and evidence that human oversight of AI outputs is embedded in practice workflows.

The requirement to appoint a CSO at practice level is one that Surrey & Sussex LMCs acknowledge as a practical challenge for smaller practices, but it remains a regulatory expectation, not an optional enhancement. The CSO doesn't need to be a full-time role, but the practice must be able to identify a named individual who holds this responsibility and can demonstrate they're fulfilling it.

Governance documentation that the practice should maintain includes:

  • A completed DCB0160 clinical safety case for the deployment of the tool

  • A risk assessment and hazard log, reviewed regularly and updated when the tool changes

  • Records of clinician training on the tool, including awareness of its limitations

  • A log of any incidents or near-misses involving AI-generated outputs

  • Evidence that all AI-generated content entering the patient record has been reviewed by a clinician

This documentation isn't only relevant for CQC inspection. It's the evidence base that protects the practice if a clinical incident occurs and questions are raised about how the tool was deployed and overseen.

What does the vendor's support and incident response look like?

A Service Level Agreement (SLA) is a standard component of any software contract, but for clinical AI tools the stakes of a support failure are higher than for most software. If an ambient scribe fails mid-consultation, or if an AI-generated note contains a systematic error that isn't caught immediately, the practice needs to know that the vendor will respond quickly and with clinical understanding.

Before signing, practice managers should review:

  • Response and resolution times in the SLA, and whether these differentiate between clinical safety incidents and general technical issues

  • Whether the vendor has a named contact for GP practices, or whether support is routed through a generic helpdesk

  • What the vendor's process is for notifying practices of a clinical safety incident, including how quickly they will communicate and what information they will provide

  • Whether the vendor has a documented clinical safety incident reporting process that meets NHS standards

NHS England's ambient scribing guidance requires that contracting arrangements clearly define roles, responsibilities, and liability, including incident response. A vendor that can't articulate a clear clinical safety incident process isn't ready for deployment in a primary care setting.

Has the tool been evaluated in a real primary care setting?

This is a question that many vendors struggle to answer with hard evidence, and the gap between a compelling demonstration and a peer-reviewed evaluation in a live GP environment is significant. Research on AI scribes in primary care has begun to document provider experiences and ethical concerns in real-world settings, but the evidence base remains limited and largely exploratory.

The Frontiers in Health Services review of NHS AI procurement frameworks recommends that vendor compliance documentation should include clinical evidence, not just technical certifications. Practice managers should ask vendors to provide:

  • Published peer-reviewed studies or documented pilot results from GP or primary care environments

  • Case studies from UK primary care settings with measurable outcomes (documentation time, clinical accuracy, clinician satisfaction)

  • Evidence of evaluation against NHS-specific workflows and medical record systems

  • Any known limitations or failure modes identified during real-world testing

Evidence from secondary care or international primary care settings doesn't straightforwardly transfer to UK general practice. Consultation structures, documentation conventions, and regulatory requirements differ materially. A tool that performs well in a hospital outpatient setting or a US primary care context may not perform equivalently in a UK GP surgery.

There's also a broader evidence gap to acknowledge. As Laranjo et al. in the Lancet note, AI deployment in primary care is currently outpacing the robust evaluation needed to understand its real-world impact. This doesn't mean practices should avoid AI tools, but vendor claims should be scrutinised carefully and post-deployment monitoring is essential.

A pre-signature checklist for GP practice managers

The following checklist consolidates the key verification points covered in this article. Use it as a practical reference before any AI tool contract is signed.

Regulatory and clinical safety

  • [ ] Confirmed MHRA registration status and device class (search PARD if applicable)

  • [ ] Vendor holds DCB0129 clinical safety case documentation

  • [ ] Practice has completed or begun DCB0160 clinical safety case for deployment

  • [ ] DTAC assessment completed and passed by vendor

  • [ ] NICE evaluation checked (MIB, DG, or EVA) if applicable

Data protection and information governance

  • [ ] DPIA completed before deployment

  • [ ] Data Processing Agreement included in contract, UK GDPR Article 28 compliant

  • [ ] Data residency confirmed — UK or EU processing and storage

  • [ ] Subprocessors identified and assessed

  • [ ] Vendor's policy on using patient data for model training reviewed and agreed

  • [ ] Data deletion and portability terms confirmed at contract termination

Security

  • [ ] ISO 27001 certification confirmed

  • [ ] NHS DSP Toolkit compliance confirmed

  • [ ] Cyber Essentials or Cyber Essentials Plus confirmed

  • [ ] DTAC cybersecurity section passed

Liability and contracting

  • [ ] Liability allocation clearly defined in contract

  • [ ] Indemnity clauses reviewed — scope confirmed

  • [ ] Medical defence organisation or indemnity provider consulted

  • [ ] Roles, responsibilities, and incident response defined in contract

Medical record system integration

  • [ ] Native integration with EMIS Web or SystmOne confirmed

  • [ ] Integration maintenance responsibility defined

  • [ ] Tested in live UK GP environment

Governance

  • [ ] Clinical Safety Officer appointed at practice level

  • [ ] Risk assessment and hazard log initiated

  • [ ] Clinician training plan in place

  • [ ] Human review of all AI outputs confirmed as workflow requirement

Evidence and support

  • [ ] Clinical evidence from UK primary care settings reviewed

  • [ ] SLA reviewed — clinical safety incident response times confirmed

  • [ ] Named vendor contact for GP practices confirmed

  • [ ] Training data origin documented by vendor

Signing with confidence, not just speed

AI procurement in general practice is a clinical governance decision as much as an operational one. The tools available in 2025 and 2026 offer genuine potential to reduce documentation burden and support clinicians, but that potential is only realised safely when the practice has verified regulatory compliance, data protection obligations, liability allocation, and clinical evidence before deployment begins.

The Digital Health reporting on NHS AI procurement captures the current regulatory reality clearly: existing standards weren't designed for learning AI systems, and the pace of vendor activity in primary care means practices need to be, in the words of NHS England's national Chief Clinical Information Officer, "robust in ensuring that what we are doing is safe, assured and is going to deliver benefit."

The questions and checks in this article aren't obstacles to adoption. They're the foundation on which safe, effective, and defensible adoption is built. A vendor that can't answer them clearly and with documented evidence isn't ready to be deployed in a clinical setting. A practice that hasn't asked them is carrying risk it may not yet be aware of.

Frequently asked questions

Does an AI tool used in a GP practice need to be registered as a medical device?

It depends on what the tool does. If it supports clinical diagnosis, triage decisions, or treatment recommendations, it's likely to be classified as Software as a Medical Device by the Medicines and Healthcare products Regulatory Agency and will require UKCA or CE marking under the applicable medical device regulations. Practice managers should search the MHRA Public Access Registration Database to confirm a tool's registration status before signing any contract. Tools that aren't classified as medical devices still carry data protection, clinical safety, and governance obligations.

Who is liable if an AI-generated clinical note contains an error that affects patient care?

In NHS primary care, liability for AI-related clinical incidents typically sits with the deploying organisation or individual clinician, not the vendor. Vendors generally limit their liability to the software performing as described in the contract and don't accept responsibility for clinical decisions made on the basis of AI outputs. Practice managers should review indemnity clauses carefully, confirm whether their medical defence organisation has been consulted, and ensure that all AI-generated content is reviewed by a clinician before it enters the patient record.

Where should patient data be processed and stored when using a clinical AI tool?

Patient data is special category data under the UK General Data Protection Regulation (UK GDPR) and must be processed lawfully, transparently, and proportionately. If data is transferred outside the UK or EU, additional safeguards are required. Before signing, practices should confirm where the vendor processes and stores data, who their subprocessors are, and whether a Data Processing Agreement compliant with UK GDPR Article 28 is included in the contract. NHS England confirms that a Data Protection Impact Assessment is highly likely to be a legal requirement before deployment.

What security certifications should a clinical AI vendor hold?

Credible vendors should hold or be actively working towards ISO 27001 (the international standard for information security management), NHS Data Security and Protection Toolkit compliance, Cyber Essentials or Cyber Essentials Plus, and Digital Technology Assessment Criteria compliance. The NHS DSP Toolkit is a contractual requirement for organisations accessing NHS patient data. A vendor that can't produce evidence of these certifications, or that can't provide a current DSP Toolkit submission, should be treated with caution.

What happens to patient data if a practice ends its contract with an AI vendor?

This is an area where vendor contracts are frequently vague. Practices should confirm before signing how long the vendor retains patient data after termination, whether a complete export of all patient data is possible, and what "deletion on request" covers in practice, including backups and subprocessor data. Under UK GDPR, the practice as data controller is responsible for ensuring patients' rights, including the right to erasure, can be exercised even after a vendor relationship ends. Contract terms allowing vendors to retain data for model training after termination should be flagged for legal review.

Does it matter whether an AI tool was trained on NHS or UK primary care data?

Yes. Training data has a direct bearing on clinical accuracy in a UK primary care context. A tool trained predominantly on US clinical data may perform poorly on NHS-specific clinical coding using SNOMED CT, UK drug names and dosing conventions, referral pathways, and GP consultation documentation styles. Practices should ask vendors whether the model was trained and validated on UK GP consultation data, and whether the vendor publishes a model card describing training data sources, known limitations, and performance benchmarks. Marketing claims aren't a substitute for documented evidence.

Who in the practice is responsible for overseeing an AI tool once it's deployed?

The Care Quality Commission's GP Mythbuster 109 on artificial intelligence sets out that practices must appoint a named Clinical Safety Officer, maintain a risk assessment and hazard log, and embed human oversight of AI outputs into clinical workflows. This is a regulatory expectation, not an optional step. Practices should also maintain a completed DCB0160 clinical safety case, records of clinician training, and a log of any incidents involving AI-generated outputs. This documentation protects the practice if a clinical incident occurs and questions are raised about how the tool was deployed.

How should a practice evaluate whether an AI tool integrates properly with its medical record system?

Most GP practices in England use either EMIS Web or SystmOne. Practices should confirm whether the tool has a native, certified integration with their system rather than a workaround, who is responsible for maintaining that integration when the medical record system provider releases updates, and whether the integration has been tested in a live GP environment. Integration that requires manual copy-and-paste of AI-generated content into the medical record increases the risk of transcription errors and removes much of the efficiency benefit. Integration failures can create gaps in the patient record with direct patient safety implications.

What clinical evidence should a vendor be able to provide before a practice signs a contract?

Vendors should provide published peer-reviewed studies or documented pilot results from GP or primary care environments, case studies from UK primary care settings with measurable outcomes, and evidence of evaluation against NHS-specific workflows and medical record systems. Evidence from secondary care or international primary care settings doesn't straightforwardly transfer to UK general practice. As researchers writing in the Lancet have noted, AI deployment in primary care is currently outpacing robust evaluation, which means vendor claims should be scrutinised carefully and post-deployment monitoring is essential.

What should a practice look for in a vendor's support and incident response arrangements?

The Service Level Agreement should differentiate response and resolution times between clinical safety incidents and general technical issues. Practices should confirm whether the vendor has a named contact for GP practices or routes support through a generic helpdesk, and whether the vendor has a documented clinical safety incident reporting process that meets NHS standards. NHS England's guidance on ambient scribing products requires that contracting arrangements clearly define roles, responsibilities, and liability, including incident response. A vendor that can't articulate a clear clinical safety incident process isn't ready for deployment in a primary care setting.

Get started with Tandem today

Join thousands of clinicians enjoying stress-free documentation.
