Ontario's auditor general has identified significant reliability issues with AI systems deployed in the province's medical settings, finding that the systems generate hallucinated, or fabricated, information. The finding raises concerns about patient safety and the integrity of AI-assisted medical decision-making in the province.
Why it matters: Healthcare providers and AI vendors need to understand the real-world failure modes of medical AI systems before deployment; hallucinations in clinical contexts pose direct risks to patient outcomes and regulatory compliance.