Part 3: GP Journey with Heidi AI - Data and DPIAs

May 2025 brought the most eye-opening exercise of my career in data protection. Working through Steve Durbin's comprehensive Data Protection Impact Assessment (DPIA) template revealed the complex web of processors, the Article 28 nightmare, and the harsh reality of liability in AI healthcare.

Dr. Chad Okay

NHS Resident Doctor & Physician-Technologist

Data, DPIAs, and the Accountability Trap

May 2025 brought the most eye-opening exercise of my career in data protection.

After receiving the DCB0160 templates in April, I spent the better part of May working through the Data Protection Impact Assessment. Steve Durbin, Data Protection Officer at the North Central London Integrated Care Board (NCL ICB), had already done the heavy lifting, creating a comprehensive DPIA template that had evolved through multiple versions since October 2024. But even with this substantial groundwork, adapting it to our practice and understanding its implications was complex and time-consuming.

Why We Needed a DPIA (And Why Yours Probably Does Too)

Steve Durbin's template made it clear that the decision to conduct a Data Protection Impact Assessment wasn't optional. Under UK GDPR Article 35, any "systematic and extensive evaluation of personal aspects" involving automated processing triggers mandatory DPIA requirements. His screening questions had already identified that AI scribes tick every high-risk box: they don't just transcribe; they analyse, interpret, and structure clinical narratives. That's automated processing of special category data at scale.

What I hadn't appreciated until reading through his detailed assessment was that the "simple transcription service" vendors promised actually involved complex algorithmic decision-making. Steve had documented this clearly: when the AI decided that a patient's description of chest pain warranted flagging as a potential cardiac symptom, that wasn't passive recording. It was clinical interpretation that could influence care decisions.

Data Flows: Following the Breadcrumbs

Steve Durbin had already done the detective work, documenting in his DPIA how Heidi's simple story "Your consultation audio goes to our secure UK servers, gets processed, and returns as text" involved a complex web of processors.

His DPIA identified the key players:

  • Heidi Health UK Ltd as the primary processor
  • Heidi Health (Australia) as another processor
  • Subprocessors including Google LLC (Ireland), AWS, Kinde (Ireland) for authentication, Stripe for payments, and Intercom (Ireland) for support

Steve had noted that while Heidi claimed the data was anonymised in their cloud infrastructure, the reality of Article 28 GDPR compliance meant we needed written contracts with every entity in this chain. Each processor represented different data processing agreements and compliance requirements. The liability question he'd raised kept me awake. If patient data was inadvertently exposed at any point in this chain, who exactly would be responsible?
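To keep this chain straight in my own head, I found it useful to sketch it as data. The snippet below is a minimal illustration in Python, not anything taken from Steve's DPIA or Heidi's documentation: the entities are those listed above, but the DPA status flags are invented placeholders purely to show the kind of Article 28 gap analysis involved.

```python
from dataclasses import dataclass

@dataclass
class Processor:
    name: str           # legal entity handling the data
    role: str           # "processor" or "sub-processor"
    location: str       # where the processing takes place
    dpa_in_place: bool  # is a written Article 28 agreement signed?

# Entities as identified in the DPIA; the True/False flags are
# hypothetical placeholders for illustration only.
chain = [
    Processor("Heidi Health UK Ltd", "processor", "UK", True),
    Processor("Heidi Health (Australia)", "processor", "Australia", True),
    Processor("Google LLC", "sub-processor", "Ireland", False),
    Processor("AWS", "sub-processor", "UK/EU", True),
    Processor("Kinde", "sub-processor", "Ireland", False),
    Processor("Stripe", "sub-processor", "US/EU", True),
    Processor("Intercom", "sub-processor", "Ireland", True),
]

# Any entity without a signed agreement is an Article 28 gap.
gaps = [p.name for p in chain if not p.dpa_in_place]
if gaps:
    print("Missing written Article 28 agreements:", ", ".join(gaps))
```

The exercise is trivial as code, but writing it out makes the point: every row in that list is a contract that has to exist, be current, and be traceable back to the practice as controller.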

The Article 28 Nightmare - Already Mapped

Steve's template had already confronted the Article 28 nightmare. Under GDPR Article 28, we needed written contracts with every processor handling our data. His meticulous documentation revealed that Heidi's sub-processor network was vast and fluid: cloud services automatically scaling across regions, third-party APIs for specialised medical terminology, and "improvement services" that nobody at Heidi could clearly define at first.

Even more concerning was his discovery that some of Heidi's "quality assurance" processes involved human reviewers listening to anonymised audio samples. These reviewers weren't NHS employees or even UK-based. They were contracted workers in various countries, each representing another layer of sub-processing that needed contractual protection. By version 2.1 of his DPIA in February 2025, Steve had flagged that signing up under the free contract could breach Article 28 if the signatory wasn't acting as an agent of the organisation, which was particularly problematic for locums. He'd worked with Heidi to develop a separate "Hub contract" to address this by March.

Locum Access: The Compliance Wild Card

In version 2.2 of the DPIA (March 2025), Steve had specifically flagged the "locum wild card." Our regular staff could be trained on the AI scribe system, understand its limitations, and follow our internal protocols. But locums? They arrived, needed immediate system access, and often lacked the contextual understanding of how our AI processes worked.

The GDPR's accountability principle meant we were responsible for ensuring anyone accessing patient data understood their obligations. Steve had documented the clinical liability risks clearly. A locum might over-rely on AI-generated summaries without understanding the system's known limitations or failure modes. His solution was pragmatic: Heidi needed to provide a separate contract that practices could have locums sign, ensuring proper Article 28 compliance regardless of who was using the system.

The Liability Reality Check - Documented in Black and White

Steve's DPIA had confronted the liability landscape head-on. Despite all the AI assistance, all the automated transcription, all the "intelligent" clinical formatting, his documentation made it crystal clear that legal responsibility remained 100% with the clinician and practice.

He'd quoted NHS England's guidance: "NHS organisations may still be liable for any claims arising out of the use of AI products particularly if it concerns a non-delegable duty of care between the practitioner and the patient." He'd even scored this risk as 4x4 = 16 (high risk) in his assessment matrix.
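For anyone unfamiliar with this style of DPIA scoring, the arithmetic is simple: likelihood multiplied by impact, each typically on a 1 to 5 scale. The sketch below reproduces that calculation; the banding thresholds are my own illustrative assumptions, not the exact bands from the NCL ICB template.

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """DPIA-style risk matrix: likelihood x impact, both scored 1-5.

    The banding thresholds here are illustrative assumptions,
    not the exact bands used in the NCL ICB template.
    """
    score = likelihood * impact
    if score >= 15:
        band = "high"
    elif score >= 8:
        band = "medium"
    else:
        band = "low"
    return score, band

# The liability risk as Steve scored it: likelihood 4, impact 4.
print(risk_score(4, 4))  # (16, 'high')
```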

This created what Steve had termed an accountability trap. We got all the efficiency promises but none of the liability protection. His mitigation was blunt but necessary: "Practitioners MUST review all outputs as they are confirming them as their medical opinion; they retain liability." He'd noted this was a legal requirement to avoid engaging Article 22 (automated decision-making).

The Verification Fatigue Paradox

This brought us to perhaps the most ironic aspect of our AI implementation: verification fatigue. The whole point of AI scribes was to reduce administrative burden and increase consultation efficiency. But proper clinical governance required constant verification of AI outputs.

Every AI-generated summary needed review. Every clinical code suggestion required validation. Every formatted letter needed checking. The automation bias research is clear: clinicians tend to over-rely on automated systems, especially when under time pressure. But the reality of liability demanded hyper-vigilance.

I found myself spending as much time reviewing AI outputs as I had previously spent on manual documentation. The cognitive load was different but not necessarily lighter. Instead of writing notes, I was fact-checking an algorithmic interpretation of my consultation, which was often harder than writing the notes myself.
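To make the point concrete, here is a sketch of what "the clinician must confirm every output" looks like if you enforce it in software rather than in policy. It is purely illustrative: the function name and record-keeping step are hypothetical and bear no relation to Heidi's actual integration or any EHR interface.

```python
from datetime import datetime, timezone

def commit_to_record(ai_summary: str, clinician_confirmed: bool,
                     clinician_id: str) -> dict:
    """Refuse to file an AI-generated note until a named clinician
    has reviewed it and taken ownership of the content.

    Hypothetical sketch only -- not Heidi's API or any EHR interface.
    """
    if not clinician_confirmed:
        raise ValueError("AI output must be reviewed and confirmed "
                         "by the responsible clinician before filing.")
    return {
        "note": ai_summary,
        "confirmed_by": clinician_id,
        "confirmed_at": datetime.now(timezone.utc).isoformat(),
        "source": "AI-assisted, clinician-verified",
    }
```

The point isn't the code; it's that the confirmation step is unavoidable by design, which is exactly what the DPIA mitigation demands of the human clinician, and exactly where the verification fatigue comes from.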

Consent Processes: The Template

Steve's DPIA had thoroughly analysed patient consent requirements. While we could rely on legitimate interest for basic healthcare processing, he'd documented the complexity of making AI systems transparent to patients. His risk assessment scored "Patients not correctly informed of use of data by practitioner" as 4x4 = 16 (high risk).

His template included practical guidance: "I will be recording this consultation to assist me in summarising the discussion and updating your medical record." He emphasised that patients must be informed "at time of recording, in a manner that is intelligible to the patients (e.g. correct languages and level of language)."

Steve had worked with Heidi to ensure the privacy notice was updated appropriately, though he'd strongly disagreed with Heidi's initial suggestion that a waiting room notice was sufficient. The final DPIA made it clear: explicit notification at the point of recording was essential, not just general signage.

How do you explain to a 75-year-old patient that their consultation will be processed by artificial intelligence across multiple servers? Steve's template tried to balance legal requirements with readability, though he acknowledged few patients would read the full three-page technical explanation. The practical solution was a layered approach: simple verbal consent at the point of recording, backed by detailed written information in the practice's terms and conditions for those who wanted it.

Lessons from the Trenches

After working through Steve Durbin's comprehensive DPIA in May 2025, several key insights emerged. His document, refined through multiple versions since October 2024, had already identified and addressed most of the critical issues we would have otherwise discovered the hard way.

First, the groundwork matters. Steve's meticulous documentation of data flows, processing locations, and sub-processors saved us weeks of investigation. He'd already questioned everything and documented the answers.

Second, the liability gap is real. Steve had clearly documented how AI vendors carefully limit their liability while practices absorb all clinical risk. His risk scoring made this impossible to ignore.

Third, patient communication is genuinely difficult. Despite Steve's best efforts to create an accessible template language, balancing transparency with comprehensibility in AI systems remains a challenge the industry hasn't yet solved.

Finally, the human element remains crucial. Steve's insistence that every AI output must be reviewed wasn't just bureaucratic caution. AI can assist with documentation, but it doesn't reduce the need for clinical judgement. It just changes where that judgement gets applied.

The accountability trap Steve had identified remained unresolved: practices bearing full liability for systems they don't fully control. But thanks to his comprehensive work, at least we understood the trap we were walking into, from the hours spent understanding his analysis, to adapting it to our specific practice context, to ensuring our staff understood the implications.


This is Part 3 of a 6-part series documenting the implementation of Heidi AI Scribe in NHS primary care. ← Part 2: Navigating DCB0160 | Part 4: The MHRA "Bomb" and the Integration Illusion →


Dr. Chad Okay

I am a London‑based NHS Resident Doctor with 8+ years' experience in primary care, emergency and intensive care medicine. I'm developing an AI‑native wearable to tackle metabolic disease. I combine bedside insight with end‑to‑end tech skills, from sensor integration to data visualisation, to deliver practical tools that extend healthy years.
