Weekly Round-up: The Darker Side of AI in Medical Education
When technology becomes the problem rather than the solution: dependency, uncertainty around acceptable use, trust and ethics.
By now you’ve probably gathered that, while I’m genuinely excited about what generative AI can do, I’m equally cautious about what it might undo. Its flaws and biases run deep, and without careful testing and thoughtful guardrails, the risks could easily outweigh the rewards. The papers I’ve selected this week focus on the practical risks of deploying AI tools in medical education: dependency, dishonesty and ethical complexity.
The first paper focuses on Palestinian medical educators and discusses the concept of AI dependency1. This qualitative study uses the I-PACE model (Interaction of Person–Affect–Cognition–Execution) to explore why faculty increasingly depend on AI. Forty-six participants described workload, perfectionism, and low confidence as key drivers of over-reliance. Thematic analysis revealed six major consequences:
skill atrophy
pedagogical erosion
motivational decline
ethical risk
social fragmentation
creativity suppression
Many reported difficulty reverting to traditional practices, describing a “calculator effect” on cognition and autonomy. The authors call for clear institutional policies and AI-literacy programmes to prevent educators’ cognitive and professional deskilling—a timely reminder that faculty, as well as students, require support to maintain human-centred teaching.
I find this area of research endlessly fascinating; I can’t help worrying about what AI might be doing to our creativity. It feels a bit like how Google has dulled my memory for facts, or how the calculator app on my phone has wiped out any lingering need for mental arithmetic. There is a lot of work being done on AI dependency in the broader educational literature. I’ll write more about this soon.
The second study is a survey of 244 Ukrainian medical students, interns, and PhD candidates2. It found that 84% already use AI in their studies, mainly for information retrieval, while 14% admitted asking AI to write essays for them. Views on whether this constitutes cheating were divided: 36% considered it misconduct, 26% found it acceptable, and 38% were undecided. Participants valued AI for speed and accessibility but feared dependency, misinformation, and ethical drift. Half admitted to having previously cheated on tests. The authors highlight the urgent need for national policy defining acceptable AI use, noting that Ukraine currently lacks enforceable standards. I think this might be something we’re struggling with elsewhere too.
The findings of this study align with the most recent data from the Higher Education Policy Institute (HEPI), which showed that 88% of students in the UK are routinely using AI in their assessments3.
The third study I selected was published in Organizational Behavior and Human Decision Processes4. It’s not specific to medical education, but the findings are almost certainly transferable. Through a remarkable series of thirteen experiments spanning academia, business, and creative industries, this paper demonstrates a paradox at the heart of AI ethics: transparency can make people seem less trustworthy.
Across contexts, from professors grading with AI to analysts using it for reports, those who disclosed AI involvement were consistently trusted less than those who stayed silent. The authors attribute this “trust penalty” to reduced perceptions of legitimacy, arguing that disclosure violates deep social expectations about human agency, competence, and authenticity. Even positive framings (“AI used only for proofreading”) failed to mitigate the effect, while being exposed by others for undisclosed AI use damaged trust most severely.
The findings challenge the assumption that openness automatically builds confidence, suggesting that honesty about AI can unintentionally erode the very credibility it aims to preserve.
The final paper this week is a scoping review of 50 papers, which delineates seven ethical problem-domains emerging from AI integration5:
privacy and data security
algorithmic bias
accountability
fairness
reliability
dependency
patient autonomy
The analysis reveals tensions between transparency and confidentiality, global inequities in access, and uncertainty over legal responsibility for AI-generated errors. The authors advocate hybrid learning models that pair human oversight with structured AI-use guidelines, alongside curricular reform embedding digital ethics and data-literacy training. They argue that AI’s moral risks mirror those it is meant to teach against, posing a paradox for medical education itself.
The full references list is available to paying subscribers.