Healthcare Professional Struck Off for AI Deception in Interview
A healthcare worker applying for a position at a hospital in Guildford was removed from the professional register after a hearing before the Health and Care Professions Tribunal Service. The tribunal concluded that the candidate had used generative AI to produce answers during the interview process, a finding that led to a determination of professional misconduct and the revocation of registration. The case has drawn renewed attention to the boundaries of acceptable AI use in hiring for clinical roles.
The Incident: Allegations and Detection
According to the tribunal record, interviewers noted answers that were unusually polished and inconsistent with the candidate's prior records. Investigators presented linguistic analysis and other evidence pointing to AI-generated responses. The panel reported repetitive phrasing, generic examples, and timing patterns that did not match natural speech. The HCPC considered this evidence against its professional standards and concluded that the conduct breached the duties of honesty and integrity required of registered healthcare professionals.
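As an illustration only (this is not the tribunal's method, and the function below is hypothetical), the "repetitive phrasing" signal mentioned above can be approximated by measuring how often short word sequences recur in a transcript. Real forensic analysis combines many such features with human judgement.

```python
from collections import Counter


def repeated_ngram_ratio(text: str, n: int = 3) -> float:
    """Fraction of word n-grams that occur more than once in the text.

    A crude proxy for repetitive phrasing: higher values mean more
    repeated word sequences. Illustrative only; screening decisions
    should never rest on a single automated metric.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)
```

A transcript that cycles through the same stock phrases scores near 1.0, while varied, spontaneous speech scores near 0.0; in practice a tool would flag outliers for human review rather than treat the score as proof.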
Upholding Integrity: Lessons for AI in Healthcare Recruitment
This episode signals a clear tension between rapid AI adoption and longstanding professional norms. For healthcare, where patient safety and trust are paramount, recruitment must verify not only credentials but also the authenticity of candidate competence. Misuse of AI in selection risks placing underprepared staff in clinical roles and undermines public confidence.
Practical steps organisations can take include updating recruitment policies to require disclosure of AI assistance, using structured competency tests and observed simulations, proctoring or recording interviews, and applying forensic and algorithmic checks where appropriate. Training interview panels to recognise AI-typical patterns, and raising baseline AI literacy across hiring teams, will also help. Regulators and professional bodies are likely to refine their guidance and sanctions to make expectations clear.
The case is a reminder that technology can support healthcare when it is used transparently, and that regulators are prepared to act when AI use crosses ethical or professional boundaries. Hiring practices must evolve to protect patients, maintain standards, and preserve trust in the professions that deliver care.




