Why many GPs still haven't adopted AI documentation tools
What's stopping European GPs from embracing AI documentation tools - barriers, sentiment, and the road ahead

European GP practices are under significant administrative pressure. Waiting lists are growing, consultation times are shrinking, and clinicians spend a disproportionate share of their working day on documentation rather than direct patient care. Artificial intelligence documentation tools, including ambient scribes, real-time transcription assistants, and automated clinical note generators, have been positioned by vendors and health technology commentators as a direct solution to this problem. Yet uptake across European primary care remains strikingly uneven. The technology exists. The problem it claims to solve is real. So why aren't more GPs using it?
The answer is not simple resistance. When clinicians are asked directly, in surveys, focus groups, and qualitative interviews, what emerges is a layered set of concerns that are rational, specific, and largely unaddressed by the current generation of AI documentation products. Understanding those concerns is essential for anyone involved in procuring, deploying, or building AI tools for European healthcare settings.
The implementation paradox: "I don't have time to learn a new system right now"
The most immediate barrier GPs describe is also the most counterintuitive. The very problem that AI documentation tools are designed to solve, the crushing weight of administrative burden, is frequently cited as the reason clinicians cannot find the time to evaluate or adopt them.
A landmark 2025 survey of 1,005 UK GPs published in Digital Health found that 75 per cent were still not using generative AI tools in clinical practice. Of those who did use them, only 35 per cent were applying them to post-appointment documentation, the core use case most AI documentation vendors target. Critically, 85 per cent of GPs reported their employer had not encouraged them to use generative AI tools, and 95 per cent had received no professional training in how to use them.
This is not a story about clinicians who have tried AI tools and rejected them. It is largely a story about clinicians who have never had the institutional support to try them at all. Without protected time for evaluation, structured onboarding, or a designated person to lead implementation, the cognitive load of adopting a new system falls entirely on individual clinicians who are already running at capacity.
A qualitative study of Lithuanian family physicians found that even minor inefficiencies, including a 15 to 20 second processing delay in AI-generated output, were perceived as serious problems in high-pressure clinical environments. When every second of a consultation is accounted for, any tool that introduces friction, however small, is likely to be abandoned.
"How do I know it's actually accurate?"
Trust in the accuracy of AI-generated clinical notes is a persistent and legitimate concern. GPs are not simply being cautious for its own sake. They are responding to a genuine accountability question. If an AI assistant misrepresents what was said in a consultation, or generates an inaccurate clinical code, the clinician, not the vendor, bears the legal and professional responsibility.
The companion opinion paper to the 2025 UK GP survey found that GPs' open-ended comments clustered around themes of unfamiliarity, ambivalence, and anxiety about AI's role in clinical tasks. While 69 per cent of GPs believed AI would improve documentation (up from 59 per cent in 2024), suggesting attitudes are gradually shifting, adoption continues to lag well behind stated optimism.
The Royal College of General Practitioners' December 2025 research report captured this tension directly. GPs in focus groups cautioned against overestimating the time-saving potential of AI tools, noting that if you "spend the time to check things in a lot of detail, the time saving benefits might be diminished." This is a clinically responsible position. If a GP must read every AI-generated note word for word before signing it off, the efficiency gain may be marginal or non-existent.
A 2025 Italian study on the optimism-knowledge gap among clinicians found that healthcare professionals are broadly enthusiastic about AI but lack the specific knowledge needed to evaluate its reliability in practice, a gap that makes genuinely informed adoption difficult.
Concerns about accuracy extend specifically to structured data and clinical codes. Errors in free-text notes are one thing. Errors in coded diagnoses or medication records carry downstream consequences for patient safety, referral pathways, and population-level data quality.
"What happens to my patients' data?"
Data security and privacy concerns are particularly acute in European primary care, where General Data Protection Regulation compliance is a legal baseline rather than an optional consideration. When GPs ask "Is this even legal in my country?", they are not being obstructionist. They are asking a question that many AI documentation vendors have not answered clearly enough.
The companion UK GP opinion survey found that clinicians specifically voiced concerns about "third parties having access to patient data," a concern that is structurally reasonable given that most AI documentation tools process audio or text on cloud infrastructure that may sit outside the EU.
The RCGP report found that GPs raised questions about where patient data is stored, whether it is used for commercial purposes, and whether sharing patient data genuinely benefits the individuals whose data is being used. These are not hypothetical concerns. They reflect real ambiguity in how many AI tools handle data residency and secondary use.
The European General Practice Research Network keynote paper on AI in European primary care raised additional concerns specific to the European research context: data ownership, data poisoning, and the risk of data leakage, particularly relevant when patient conversations are processed by third-party AI infrastructure.
A major EU-commissioned study on AI deployment in European healthcare identified legal and regulatory complexity as one of four primary barrier categories, noting that both providers and patients worry about AI reliability and data protection. The report found that most EU Member States lack clear reimbursement pathways for AI tools, and that adoption is currently concentrated in larger academic hospitals rather than primary care settings, where data governance infrastructure is often less developed.
"Our medical record system is ancient — will it even work?"
Integration with existing medical record systems is a practical constraint that vendors frequently underestimate: the IT infrastructure of European primary care is heterogeneous, often ageing, and rarely designed with third-party AI integration in mind.
The Spanish proof-of-concept study from Catalonia, which tested an AI clinical note generation tool called "Relisten" in primary care settings, surfaced exactly these friction points: medical record system workflow integration, time measurement challenges, and the difficulty of benchmarking AI-generated documentation against existing documentation standards in real clinical environments.
The EU healthcare AI deployment report categorised technological and data issues as a distinct barrier category, separate from regulatory or organisational concerns. Legacy systems in European public healthcare, many of which were not designed to expose application programming interfaces or accept structured input from external tools, represent a genuine technical obstacle that cannot be resolved at the practice level.
For GPs working in public healthcare settings, the decision to integrate a new AI tool is rarely theirs alone to make. It typically requires IT department involvement, procurement approval, and in some cases national or regional health authority sign-off. The gap between a clinician downloading an app and an AI tool being formally integrated into a practice's medical record system workflow is substantial.
"Nobody at the practice has signed off on this"
Individual clinical interest in AI documentation tools does not automatically translate into institutional adoption. Many GPs describe a situation in which they are personally curious about AI assistants but face organisational or governance barriers that prevent them from moving forward.
The 2025 UK GP survey makes this structural problem explicit: 85 per cent of GPs said their employer had not encouraged them to use generative AI tools, and 95 per cent had received no professional training. This is not a picture of a workforce that has been offered AI tools and declined them. It is a picture of a workforce that has largely been left to navigate AI adoption without institutional support.
The German physician attitudes study from RWTH Aachen University found that despite enthusiasm among individual physicians, clinical integration remained limited due to concerns about usability, ethical implications, and physician acceptance. The study called explicitly for standardised implementation strategies rather than leaving adoption to individual initiative.
Governance concerns also include questions about clinical accountability. If an AI-generated note contains an error, who is responsible? If a tool has not been formally approved by a practice's clinical lead or by a national regulatory body, individual GPs may be reluctant to use it even if they believe it would help, precisely because the accountability framework is unclear.
"I've seen too many tools come and go"
Clinician scepticism rooted in past experience is a factor that does not always appear in surveys but surfaces consistently in qualitative research. GPs have lived through multiple cycles of health technology promises, from medical record system implementations that took years to stabilise, to clinical decision support tools that were mandated and then quietly abandoned, and this history shapes how they evaluate new tools.
The European General Practice Research Network keynote paper noted directly that the pace of AI integration is outstripping the available evidence supporting its efficacy and safety. For clinicians trained to evaluate interventions against evidence, this is a meaningful concern. A tool shown to reduce documentation time in a vendor-sponsored pilot study is not the same as a tool with a robust evidence base from independent real-world evaluation.
The Polish mixed-methods study found that AI adoption remains limited due to reluctance to change, misperceptions, and knowledge gaps. It also noted that concerns about job displacement have largely eased, with AI increasingly viewed as augmenting rather than replacing clinicians. This is progress, but it does not automatically translate into trust in specific tools.
The PubMed-indexed survey of primary care clinicians on clinical decision support for HIV pre-exposure prophylaxis found that even when clinicians rated a tool as appropriate and useful, uptake was hindered by workflow and usability barriers, underscoring that perceived value and actual adoption are not the same thing. Under-supported rollouts, poor change management, and tools that don't fit real workflows have left a residue of caution that new AI documentation products must contend with.
"I'm not sure it would actually help my workflow"
Even GPs who are open to AI documentation tools often express doubt about whether existing products are designed for how they actually work. European primary care encompasses a wide range of consultation formats, languages, and documentation requirements that do not always match the use cases AI tools were built for.
The RCGP report found that GPs identified administration as a key area where AI could help. Their focus groups also revealed scepticism about whether AI tools could deliver on that promise in practice, particularly around the time required to verify AI-generated content.
The European General Practice Research Network keynote paper highlighted that the practical value of AI tools depends heavily on clinicians' prompt engineering skills, a capability gap that most GPs have not had the training to address. An AI documentation tool that requires significant configuration or prompting to produce useful output is not well-suited to the time-pressured reality of a GP consultation.
Remote and virtual consultations add further complexity. Ambient voice technology designed for in-person consultations may not function reliably in telephone or video triage settings. Multilingual patient interactions, common in urban European practices, introduce additional challenges around transcription accuracy and note quality. The Lithuanian qualitative study found that physicians remained sceptical of AI's reliability and efficiency, with trust, data privacy, and physician autonomy all identified as persistent concerns, and those concerns are amplified when the tool is perceived as not quite fitting the clinical context.
The AI readiness study of young European family doctors published in the Annals of Family Medicine assessed readiness across four dimensions (cognition, ability, vision, and ethics) and found meaningful variation across countries, suggesting that the adoption gap is not uniform and is shaped by structural as well as individual factors.
"What does it cost, and who pays for it?"
Budget uncertainty is a significant and underreported barrier, particularly in European public healthcare systems where purchasing decisions are subject to procurement rules and central funding constraints.
The EU healthcare AI deployment report found that most EU Member States lack reimbursement pathways for AI tools, and that organisational and financial obstacles constitute one of the four primary barrier categories to AI adoption in European healthcare. Without a clear mechanism for funding AI documentation tools, whether through national health budgets, practice-level spend, or reimbursement from insurers, individual practices are left to absorb costs that may be difficult to justify in resource-constrained environments.
Pricing models for AI documentation tools vary considerably and are not always transparent. Subscription-based models, per-consultation fees, and enterprise licensing arrangements each create different financial dynamics for practices of different sizes. In mixed healthcare systems, where GPs may see both publicly funded and privately funded patients, the question of which consultations fall under which pricing tier adds further complexity.
The qualitative UK patient study on AI in primary care for patients with multiple long-term conditions found that implementation challenges and acceptance factors are closely linked, and that financial and organisational barriers interact with clinical and social ones in ways that make adoption a system-level challenge rather than an individual decision.
What these objections actually tell us about adoption
The concerns European GPs raise about AI documentation tools are not a catalogue of irrational resistance. They are a coherent set of questions about trust, fit, governance, and support, questions that the current generation of AI documentation products, and the health systems responsible for deploying them, have not yet answered convincingly enough to drive widespread adoption.
The barriers cluster into four broad categories:
Trust and accuracy: clinicians need confidence that AI-generated notes are reliable enough to sign off without extensive review, and that errors in structured data and clinical coding will not create downstream patient safety risks
Data governance: GDPR compliance, data residency, and clarity about secondary data use are non-negotiable for European clinicians operating under legal obligations that vary by country
Integration and fit: tools that do not connect reliably to existing medical record systems, or that were not designed for the specific consultation formats and linguistic diversity of European primary care, will not be adopted regardless of their technical capability
Institutional readiness: individual clinician interest is not sufficient; adoption requires employer encouragement, professional training, governance frameworks, and in many cases central funding or reimbursement pathways
The 2025 UK GP survey finding that 95 per cent of GPs had received no professional training in generative AI tools is perhaps the single most important data point for anyone seeking to understand why adoption remains low. It suggests that the primary gap is not in clinician attitudes, which are becoming more positive, but in the institutional infrastructure required to support responsible, informed adoption.
For health system leaders and procurement decision-makers, the implication is that building a business case for deploying AI documentation tools is not primarily a technology problem. It is an implementation problem, one that requires investment in training, governance, integration support, and clear communication about data handling, before clinicians can reasonably be expected to change how they work.
Frequently asked questions
▶ Why aren't more European GPs using AI documentation tools?
The main barrier isn't resistance to the technology. It's a lack of institutional support. A 2025 survey of 1,005 UK GPs found that 85 per cent had not been encouraged by their employer to use generative AI tools, and 95 per cent had received no professional training. Without protected time for evaluation, structured onboarding, or designated implementation leads, the burden of adoption falls on individual clinicians who are already working at capacity.
▶ How do GPs feel about the accuracy of AI-generated clinical notes?
Accuracy is a persistent and legitimate concern. If an AI assistant misrepresents what was said in a consultation, or generates an incorrect clinical code, the clinician bears the legal and professional responsibility, not the vendor. GPs in focus groups have noted that if you need to check every AI-generated note in detail before signing it off, the time-saving benefit may be marginal or non-existent. A 2025 Italian study found that clinicians are broadly enthusiastic about AI but lack the specific knowledge needed to evaluate its reliability in practice.
▶ What are GPs' main concerns about patient data and privacy?
Data security and privacy concerns are particularly acute in European primary care, where General Data Protection Regulation compliance is a legal baseline. GPs have raised questions about where patient data is stored, whether it's used for commercial purposes, and whether processing audio or text on cloud infrastructure outside the EU is permissible. A major EU-commissioned study identified legal and regulatory complexity as one of four primary barrier categories to AI adoption in European healthcare.
▶ Does AI documentation software integrate with existing medical record systems in European GP practices?
Integration is a practical constraint that vendors frequently underestimate. The IT infrastructure of European primary care is heterogeneous, often ageing, and rarely designed with third-party AI integration in mind. A Spanish proof-of-concept study surfaced friction points including medical record system workflow integration and the difficulty of benchmarking AI-generated documentation against existing standards. For GPs in public healthcare settings, formally integrating a new AI tool typically requires IT department involvement, procurement approval, and in some cases national or regional health authority sign-off.
▶ What role does institutional support play in AI adoption for GP practices?
It's central. Individual clinical interest doesn't automatically translate into adoption. A German physician attitudes study from RWTH Aachen University found that despite enthusiasm among individual physicians, clinical integration remained limited due to concerns about usability, ethical implications, and physician acceptance. The study called explicitly for standardised implementation strategies rather than leaving adoption to individual initiative. Governance questions also matter: if a tool hasn't been formally approved by a practice's clinical lead or a national regulatory body, GPs may be reluctant to use it even if they believe it would help.
▶ Why are some GPs sceptical about whether AI tools will fit their actual workflow?
European primary care covers a wide range of consultation formats, languages, and documentation requirements that don't always match the use cases AI tools were built for. Ambient voice technology designed for in-person consultations may not function reliably in telephone or video triage settings. Multilingual patient interactions, common in urban European practices, introduce additional challenges around transcription accuracy and note quality. A Lithuanian qualitative study found that even a 15 to 20 second processing delay was perceived as a serious problem in high-pressure clinical environments.
▶ How does the cost of AI documentation tools affect uptake in European primary care?
Budget uncertainty is a significant and underreported barrier. The EU healthcare AI deployment report found that most EU Member States lack reimbursement pathways for AI tools, and that organisational and financial obstacles constitute one of the four primary barrier categories to adoption. Pricing models vary considerably and aren't always transparent. In mixed healthcare systems, where GPs may see both publicly funded and privately funded patients, the question of which consultations fall under which pricing tier adds further complexity.
▶ Has clinician scepticism about AI tools been shaped by past experience with health technology?
Yes. GPs have lived through multiple cycles of health technology promises, from medical record system implementations that took years to stabilise to clinical decision support tools that were mandated and then quietly abandoned. The European General Practice Research Network keynote paper noted that the pace of AI integration is outstripping the available evidence supporting its efficacy and safety. A Polish mixed-methods study found that AI adoption remains limited due to reluctance to change, misperceptions, and knowledge gaps, though concerns about job displacement have largely eased.
▶ What are the four main barrier categories to AI documentation adoption in European GP practices?
The barriers cluster into four broad categories. First, trust and accuracy: clinicians need confidence that AI-generated notes are reliable enough to sign off without extensive review. Second, data governance: General Data Protection Regulation compliance, data residency, and clarity about secondary data use are non-negotiable for European clinicians. Third, integration and fit: tools that don't connect reliably to existing medical record systems, or that weren't designed for European primary care's consultation formats and linguistic diversity, won't be adopted. Fourth, institutional readiness: adoption requires employer encouragement, professional training, governance frameworks, and in many cases central funding or reimbursement pathways.