AI in healthcare: what every clinician needs to know right now

Artificial intelligence is no longer a future concept. It’s already embedded in the daily fabric of healthcare – documenting consultations, supporting diagnoses, generating patient letters, and automating admin tasks.

The narrative around AI is often polarised: either it will revolutionise medicine or replace doctors. But the truth is far more nuanced (and far more urgent). Whether you’re a consultant, GP, administrator, or healthcare leader, AI is already reshaping your environment. The question isn’t whether it will impact your work, but how – and whether that impact will be safe, ethical, and human-centred.

This article brings together expert insights shared at Doctify’s recent AI in Healthcare event in London, exploring where AI is already delivering value, where risks lie, and how healthcare professionals can meaningfully engage with the change.

How AI is reshaping clinical practice

We were joined by an expert panel of speakers who brought a wealth of insight and first-hand experience using AI in the healthcare space:

  • Dr Keith Grimes, AI & Digital Health Consultant and Founder of Curistica
  • Ian Robertson, Tandem Health
  • Ankita Negi, Healthtech & Biotech Lead at Microsoft UK
  • Ekanjali Dhillon, Digital Transformation Change Lead at HCA Healthcare UK
  • Dr Nikita Patel, Head of Propositions at AXA Health

One of the clearest insights from the event was this: AI is already delivering value in healthcare, but the wins look very different depending on where you’re standing. Whether it’s improving diagnosis or accelerating admin, AI is showing real-world impact across both clinical and operational fronts.

Ambient voice AI is helping clinicians reconnect with patients

AI is starting to integrate directly into the diagnostic and treatment process, enhancing how care is delivered on the front lines.

One of the most powerful examples came from Tandem, an ambient voice technology now being rolled out to over 200,000 NHS clinicians. It listens in during consultations, transcribes conversations in real time, and auto-generates structured clinical notes, freeing clinicians from their screens and allowing them to reconnect with patients.

Doctors and nurses have been tied to their computers, writing up notes. This is freeing them up, and when it frees them up, it allows them to actually engage with patients again, actually listen to their stories, spend less time writing and more time listening. This is what people come into healthcare to do.

Dr Keith Grimes, AI & Digital Health Consultant and Founder of Curistica

From transcription to full clinical workflow support

But Tandem isn’t just simplifying documentation. According to Ian Robertson from Tandem Health, the technology is evolving to support the full clinical workflow: “We’re building for the full clinical workflow – not just scribing the note, but handling everything before and after.”

That includes automating tasks like:

  • Generating blood test requests
  • Ordering radiology scans
  • Populating SNOMED or ICD-10 codes
  • Extracting structured data from free-text notes (see the sketch below)

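As an illustration only (not Tandem’s actual implementation), here is a minimal Python sketch of that last step: asking a general-purpose LLM to pull structured fields out of a free-text note. The client library, model name, and field list are all assumptions.

import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Illustrative free-text note (entirely fictional)
note = (
    "45-year-old presents with a three-week history of productive cough. "
    "BP 150/95. Started amlodipine 5mg once daily. Bloods requested: FBC, U&E."
)

# Ask the model to return a fixed set of fields as JSON
prompt = (
    "Extract the following fields from the clinical note and return them as JSON: "
    "presenting_complaint, blood_pressure, medications (name and dose), "
    "investigations_requested.\n\nNote:\n" + note
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for machine-readable output
)

structured = json.loads(response.choices[0].message.content)
print(structured)

In a real product, a step like this would sit behind clinical governance, with the clinician reviewing and approving the extracted fields before anything reaches the record.
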
Real-world clinical use cases: AI in diagnostics, emergencies, and mental health

Another standout example is Mia, a diagnostic AI tool developed by Kheiron Medical Technologies. Mia is already in use in NHS Grampian, where it acts as a first reader of mammograms in breast cancer screening programmes. This has helped reduce pressure on radiologists without compromising diagnostic accuracy. 

Ankita Negi, Healthtech & Biotech Lead at Microsoft UK, said: “With breast cancer, if the tumour is under 15mm, survival is 95%. Catching it early is everything. AI is helping us do that.”

Mia is part of a broader wave of clinically focused AI tools transforming care across specialties.

  • Ultromics is helping detect early signs of heart failure by analysing echocardiograms using AI, reducing diagnostic errors and supporting faster clinical decisions in cardiology.
  • Corti supports emergency medical dispatchers by analysing live emergency calls to detect conditions like cardiac arrest in real time, acting as an AI co-pilot during critical moments.

Together, these innovations highlight a shared goal: supporting earlier detection, faster intervention, and more personalised treatment across clinical pathways.

Generative AI tools are transforming everyday clinical tasks

While ambient voice and diagnostic tools are changing the consultation itself, generative AI tools are reshaping the tasks around it.

Clinicians are now using tools like Microsoft Copilot, ChatGPT, Claude, and Gemini to:

  • Draft patient letters
  • Summarise research
  • Generate presentation slides
  • Answer regulatory questions

These tools are already embedded in many healthcare workflows, offering speed, structure, and flexibility.

The risks: bias, hallucinations & accountability

Despite the enthusiasm, experts were clear: AI is not a magic solution. It’s a tool – and like any tool, it carries risk.

Automation bias

When AI tools become integrated into EMRs and workflows, there’s a risk of over-trust. Clinicians may begin to click ‘approve’ without fully reading outputs.

Ankita Negi, Healthtech & Biotech Lead at Microsoft UK

Several panellists emphasised the need for UI design that forces meaningful engagement, with prompts that ask clinicians to validate each section or identify potential errors.

Hallucinations

Large language models like ChatGPT or Claude generate text by predicting the most likely next word – not by verifying facts. That means they can invent information that looks accurate but is entirely false. “It does a really good job of fooling you. And in clinical practice, that’s terrifying”, said Ekanjali Dhillon, Digital Transformation Change Lead at HCA Healthcare UK.

This could mean inserting the wrong diagnosis into a letter, misattributing a treatment plan, or summarising a patient history inaccurately. If unchecked, these hallucinations could lead to clinical harm.

Accountability & legal risk

Perhaps the thorniest issue: if something goes wrong, who is responsible? As Ekanjali Dhillon put it, “Is it your fault? Gemini’s fault? The trust’s fault? Good luck suing Alphabet.”

Currently, responsibility still lies with the clinician, even if they are using an AI-generated summary. This creates serious governance implications, especially as more tools are embedded in standard workflows.

Education isn’t keeping up

If AI is now part of daily practice, training must reflect that. But most medical education still omits digital health – let alone prompting, model bias, or AI safety.

We don’t even teach digital health in med school. And now we need clinicians who can oversee AI safely.

Dr Nikita Patel, Head of Propositions at AXA Health

Future clinicians will need to learn:

  • Prompt engineering: how to communicate effectively with AI tools
  • Model reasoning: understanding how LLMs generate outputs and what affects accuracy
  • Error detection: spotting when an AI is malfunctioning or hallucinating
  • Clinical governance: knowing when and how to override AI recommendations

As Ankita Negi put it: “You don’t need to be a data scientist. But if AI’s making decisions with you, you need to know how it works – and when it’s wrong.”

What you can do now (no tech degree required)

Many clinicians want to get involved but don’t know where to start. Here are four practical steps from the experts:

1. Experiment with different tools

Don’t stop at ChatGPT. Try Claude, Gemini, Copilot, or NotebookLM. Each has different strengths – research, summarisation, reasoning, etc.

2. Test prompting like a professional

Try: “You are a senior GP. Write a letter to a patient explaining their hypertension treatment plan in plain language.”

The more specific the role and goal, the better the result.
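
For clinicians who want to take this a step further, the same role-and-goal prompt can also be sent programmatically. The sketch below is illustrative only: it assumes the OpenAI Python client and a placeholder model name, and Claude, Gemini, and Copilot offer similar chat interfaces through their own APIs.

from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # Set the role first, then give a specific goal and audience
        {"role": "system", "content": "You are a senior GP."},
        {
            "role": "user",
            "content": (
                "Write a letter to a patient explaining their hypertension "
                "treatment plan in plain language."
            ),
        },
    ],
)

print(response.choices[0].message.content)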

3. Start small in practice

  • Trial ambient voice transcription in one clinic
  • Use AI to summarise an academic paper
  • Turn existing notes into presentation slides
  • Draft a discharge summary with AI, then review before finalising
  • Translate a short explainer video for non-English-speaking patients
  • Create AI-generated FAQs for a common condition you treat

4. Ask hard questions of your AI vendor

  • What was the training dataset?
  • How is clinical risk handled?
  • Can I override outputs, and is that logged?
  • What’s the audit process?

If AI is the drug, clinicians are the prescribers. That means we need to ask the same questions we’d ask of any new treatment.

Ekanjali Dhillon, Digital Transformation Change Lead at HCA Healthcare UK

The future is human-led… with AI in support

Ekanjali Dhillon shared, “Progress in this space depends on collective learning. We can shape how AI is used – but only if we get involved.”

AI is already here: documenting, suggesting, predicting. But it’s not here to replace clinical judgment. At its best, AI supports the very things that brought many into medicine: listening to patients, making informed decisions, and delivering meaningful care.

Human oversight will always be essential. But with the right tools, training, and questions, clinicians can harness AI to work smarter, not harder: reducing burnout, improving access, and strengthening trust.

The future of healthcare isn’t machine-led. It’s human-led, tech-enabled, and shaped by those who choose to engage with it.

Want to be invited to future Doctify events? Sign up for free – exclusively for practising clinicians.
