HeadlinesBriefing.com

AI Medical Scribes Generate Dangerous Errors, Ontario Audit Reveals

Ars Technica

AI-powered medical scribes approved for use in Ontario healthcare are producing alarming rates of inaccurate and fabricated patient information, according to a recent government audit. The technology, designed to automatically transcribe and summarize doctor-patient conversations, appears to be creating more problems than it solves.

The Ontario auditor general tested 20 pre-qualified AI scribe vendors using simulated patient consultations and found that every single system exhibited accuracy or completeness failures. Nine vendors hallucinated patient information entirely, while 12 incorrectly recorded medical details and 17 missed critical mental health discussion points. Specific errors included fabricated therapy referrals and wrong prescription names.

These mistakes aren't just technical glitches; they pose direct risks to patient care. The audit warned that such errors could result in inadequate or harmful treatment plans that jeopardize patient health outcomes. With healthcare providers increasingly relying on automated documentation tools to reduce administrative burden, these findings suggest the technology isn't yet ready for widespread deployment in clinical settings.

The audit examined only vendors pre-qualified for provincial government purchase, meaning these systems had already been deemed suitable for healthcare use before testing revealed their significant flaws. This raises serious questions about current validation processes for AI tools in medicine.