Of Minds and Machines: A Physician’s Reflection on AI, Cognitive Debt, and the Cost of Convenience
For the discerning doctor, an exploration of what we gain — and what we risk losing — in the age of large language models
Dr. Rehan Qureshi stood beneath the soft glow of the ward’s halogen lights, dictating his evening progress notes into his tablet. Behind him, the gentle beep of monitors hummed a familiar lullaby. He had just concluded a particularly intricate case of Guillain-Barré and was now summarizing the diagnostic sequence — EMG findings, CSF albuminocytologic dissociation, the reflex pattern — with the swiftness that only years of neurology could grant.
But this time, he wasn’t really thinking. ChatGPT was.
He had prompted the model with the clinical summary. The output was eloquent, even nuanced. It referenced the Hughes Disability Scale and included an apt note on IVIG vs plasmapheresis. Still, as he reviewed it, a peculiar hollowness surfaced — not in the content, but in himself.
He wasn’t tired. He was under-engaged.
A Study that Whispered What He Felt
Days later, a junior resident forwarded him a study with the subject line: “Sir, must-read. About LLMs + cognitive decline.”
The paper was by Kosmyna et al., titled Your Brain on ChatGPT, and it sent tremors through Rehan’s methodical mind. It wasn’t a tech op-ed or speculative philosophy — it was hard neuroscience. EEGs. NLP analytics. Real-time cognitive tracking.
The study recruited participants for an essay-writing task and divided them into three groups: one assisted by an LLM, one working with a search engine, and one writing wholly unaided ("brain-only," as the authors poetically termed it). What emerged was unsettling:
- Diminished neural connectivity in the LLM group, the weakest coupling of the three cohorts.
- Consistent underperformance on neural, linguistic, and behavioral measures over the four-month study period.
- The insidious accrual of what the authors termed "cognitive debt."
Rehan paused. This wasn’t about machines replacing us. This was about us replacing ourselves with machines.
The Physician’s Paradox: Efficiency vs Erosion
In morning rounds, Rehan observed something he once admired: his colleagues, residents, and interns seamlessly integrated LLMs into patient care. Inputs were swift — anonymized histories, labs, and radiological summaries fed into GPT-based platforms for differential diagnoses and evidence-based suggestions.
But the study’s neural findings echoed in his mind: the LLM cohort had shown weakened intra- and inter-hemispheric neural connectivity, notably in frontoparietal and temporolimbic circuits responsible for executive function and semantic reasoning.
In simpler terms: the more the brain outsourced its cognition, the less it retained its own architecture.
Clinical intuition — that felt sense cultivated over decades — risks rusting when AI fills in the blanks. The quiet nuance in an elderly patient’s gait, the asymmetry in a subtle tremor, the instinct to order a non-standard test — these live in neural grooves carved by years of friction and struggle, not by passive review of AI outputs.
And yet, many were defaulting to AI even for subtle interpretation tasks: a CT angiogram finding, a pharmacokinetic decision, a rare syndrome’s clinical arc.
Rehan wondered: Are we preserving time or trading pieces of our mind?
The Learner’s Dilemma: Instant Answers, Atrophied Reasoning
In the college library across campus, Dr. Ishita Narayanan, a third-year dental student, was composing a case report. She toggled between her patient notes and ChatGPT, using it to scaffold a prosthodontic treatment plan for a Kennedy Class I arch.
The language was perfect. The flow, academic. But somewhere in the layers of clinical rationale — the choice between a cast partial and implant-retained overdenture — she had relied on ChatGPT’s logic more than her own.
And she wasn’t alone.
The Kosmyna study described how students in the LLM cohort demonstrated not only linguistic homogeneity in their essays but also a flattening of critical reasoning. EEG tracking showed weaker engagement of the frontal networks that underpin analytical processing.
In essence, what the LLM offered in polish, it stole in plasticity.
Rehan, who had mentored many such students, now reconsidered his teaching. Perhaps LLMs should be used only after the student has struggled. After they have missed, corrected, and reflected. For it is only through mental resistance that a neural pathway is strengthened.
Preserving the Mind in the Age of Machines
The implications were sobering. Rehan scribbled some thoughts, a kind of cognitive manifesto for himself — and maybe, for the future of the profession:
1. Use LLMs as Mirrors, Not Maps.
Let the model reflect your own logic back to you. Verify its path with clinical guidelines and peers. Never let it chart the course alone.
2. Prioritize “Brain-only” Days.
Challenge your intellect with raw cases — no AI prompts. Discuss differentials on whiteboards. Read primary literature. Let cognition ache.
3. Reflect to Rewire.
Maintain a clinical journal. Articulate your thought process during decisions. Deliberate reflection preserves originality and counteracts the uniformity the study observed in AI-assisted writing.
4. Engage in Pedagogy.
Teaching students or peers reinforces neural connections more robustly than passive review. When you speak, your brain sharpens.
5. Move, Meditate, Sleep.
Cognitive reserve isn’t built in books alone. Exercise, mindfulness, and sleep preserve your brain’s ability to bounce back from cognitive stagnation.
Epilogue: The Sacred Act of Thinking
Later that week, Rehan discussed the study during Grand Rounds. There was a hum of concern — and also resignation. One junior joked, “Sir, but GPT is like Google now. You can’t go back.”
Rehan smiled and replied:
“You can’t go back. But you can step aside. You can choose when to let it speak, and when to let your own mind stretch into discomfort. Because in medicine, the thinking is not a task. It is the treatment.”
In the age of infinite information, it is the physician who chooses to think deeply who will remain irreplaceable. Not because AI can’t perform. But because healing is not just in precision — it is in perception, in presence, in a brain still willing to wrestle with doubt.
Reference
Kosmyna, N., et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv:2506.08872
Author: Dr. Syed Nabeel, BDS, D.Orth, MFD RCS (Ireland), MFDS RCPS (Glasgow), is a clinician-scholar whose professional trajectory spans over a quarter century at the intersection of orthodontics, neuromuscular dentistry, and digitally driven diagnostics. As the Clinical Director of Smile Maker Clinics Pvt Ltd, he has articulated a refined philosophy of care that integrates anatomical exactitude with contemporary digital modalities, particularly in the nuanced management of temporomandibular disorders, esthetic smile reconstruction, and algorithm-guided orthodontic therapy. Grounded in the principles of occlusal neurophysiology, his approach is further distinguished by an enduring commitment to AI-enhanced clinical workflows and predictive modeling in complex craniofacial therapeutics.

In 2004, Dr. Nabeel established DentistryUnited.com, a visionary digital platform designed to transcend clinical silos and foster transnational dialogue within the dental fraternity. This academic impetus culminated in the founding of Dental Follicle – The E-Journal of Dentistry (ISSN 2230-9489), a peer-reviewed initiative dedicated to the dissemination of original scholarship and interdisciplinary engagement.

A lifelong learner, educator, and mentor, he remains deeply invested in cultivating critical thought among emerging clinicians, with particular emphasis on orthodontic biomechanics and the integrative neurofunctional paradigms that underpin both form and function.