You've watched it happen. A senior consultant arrives at the bedside, scans the room, and within seconds has a plan. No visible reasoning. Just recognition, then action.

That speed is the product of thousands of cases: knowledge deeply organised, seemingly effortless to access, and integrated in ways that let an experienced clinician see what's happening before a junior doctor has finished taking the history.

But expertise doesn't just make you faster. It changes the way you reason. And some of those changes create vulnerabilities that experience alone cannot fix.


What experience changes

With experience, clinicians increasingly rely on faster, more intuitive processing. Knowledge becomes chunked and readily available. Decisions that once required conscious effort become automatic. Cognitive scientists describe this as a growing reliance on Type 1 thinking (fast, intuitive, pattern-based) over Type 2 (slow, analytical, effortful).1

This is what makes experienced clinicians good at complex problems, not just routine ones. Nobody disputes this. But the shift comes with a trade-off that is rarely acknowledged: when fast reasoning is wrong, it is wrong with confidence. The error doesn't feel like an error. It feels like expertise. And because the reasoning process that produced it is the same one that is usually right, there is no internal signal that anything has gone wrong.

Early in training, doctors reach for guidelines, check references, talk through decisions deliberately. Not because they've been taught a superior reasoning strategy, but because they have no alternative. As expertise develops, this deliberate checking naturally falls away. The more experienced you become, the less likely you are to second-guess yourself, to slowly reason through a problem, to interrogate your own approach.


The error that gets easier to make

In a comprehensive analysis of clinical errors, cognitive factors were identified in roughly three-quarters of diagnostic cases.2 The single most common cognitive failure was premature closure: accepting a diagnosis before it had been adequately verified. Not a knowledge gap. A reasoning process that stopped too early.

While junior doctors often err because of straightforward knowledge gaps, the errors of experienced clinicians are more tightly bound to the mechanics of their expertise itself: the automated reasoning that usually serves them so well.1 As knowledge gaps close with experience, these cognitive vulnerabilities don't disappear. They become a larger share of what remains, and they are harder to detect precisely because the reasoning feels so fluent.

This has been demonstrated repeatedly. When registrars were exposed to a clinical problem and then later presented with cases that superficially resembled it, the more experienced trainees were more susceptible to misdiagnosis through availability bias than their junior counterparts.3 In a separate study, both junior doctors and consultants were significantly more likely to anchor on an incorrect diagnosis when given a suggested diagnosis at referral, rather than the clinical findings alone.4 In neither study did seniority confer immunity.


Confidence without calibration

The relationship between confidence and accuracy is far weaker than most clinicians assume. In an ICU autopsy study, physicians who reported complete diagnostic certainty before death were wrong at the same rate as those who reported major uncertainty.5 In a study of specialist radiologists interpreting the same set of films, the worst performers were actually more confident than the best.6 A systematic review found that overconfidence, anchoring, and availability bias were the cognitive biases most frequently demonstrated in physicians, that none were reliably attenuated by seniority, and that these biases influence not just diagnostic accuracy but also treatment and management decisions.7

These findings are uncomfortable. But they describe a well-documented property of human expertise, not a failing specific to medicine. In any domain where feedback is limited and decisions are complex, confidence and accuracy can drift apart.
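
To make "calibration" concrete, here is a minimal sketch in Python. Every number is hypothetical and the scenario is invented; the point is only that calibration asks whether stated confidence tracks observed accuracy across many cases.

    # Minimal illustration of calibration, with hypothetical numbers.
    # Each case pairs a clinician's stated diagnostic confidence with
    # whether the diagnosis was later confirmed (1) or refuted (0).
    cases = [
        (0.95, 1), (0.95, 0), (0.90, 1), (0.90, 0), (0.85, 1),
        (0.80, 0), (0.75, 1), (0.70, 0), (0.60, 1), (0.50, 0),
    ]

    mean_confidence = sum(conf for conf, _ in cases) / len(cases)
    accuracy = sum(correct for _, correct in cases) / len(cases)

    print(f"Mean stated confidence: {mean_confidence:.0%}")              # 79%
    print(f"Observed accuracy:      {accuracy:.0%}")                     # 50%
    print(f"Overconfidence gap:     {mean_confidence - accuracy:+.0%}")  # +29%

A well-calibrated clinician's gap sits near zero. The studies above suggest that for many it doesn't, and that seniority doesn't close it.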


The feedback desert

In most other domains where expert calibration matters, there are feedback loops. A chess player learns whether the move was good. An engineer whose bridge deflects beyond tolerance sees the data. A trader who backs a hunch sees the market move. The feedback is direct, timely, and tied to outcomes. Over time, it keeps confidence aligned with accuracy.

Medicine does have feedback. Pathology comes back. Imaging confirms or refutes. M&M (morbidity and mortality) meetings review cases. But for the vast majority of clinical reasoning, the feedback is fragmented, delayed, and not linked back to the specific decision that produced it. A registrar gets told by a consultant what they would have done differently. That's valuable. But it calibrates you against another person's reasoning, not against the patient's outcome.

Outcome-based feedback on everyday clinical reasoning is remarkably scarce. Overnight, you assess a patient with pleuritic chest pain, judge the pre-test probability of pulmonary embolism (PE) to be low, and don't request a CT pulmonary angiogram (CTPA). The patient doesn't deteriorate on your shift. You move on. But some proportion of those patients do have a clinically significant PE, and unless they deteriorate dramatically while you're still there, you will never know whether your reasoning was right or wrong. The decision disappears into the system.
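
The arithmetic of that silence is worth spelling out. Here is a toy sketch, again in Python, where every figure is an assumption chosen purely for illustration, not a measured rate:

    # Toy sketch of why silence isn't feedback. All numbers are assumed.
    patients = 200           # similar low-risk presentations over a year
    pretest_prob = 0.03      # assumed "low" pre-test probability of PE
    visible_fraction = 0.10  # assumed share of missed PEs that declare
                             # themselves before the clinician hands over

    missed_pes = patients * pretest_prob            # about 6 expected
    visible_errors = missed_pes * visible_fraction  # about 0.6 expected

    print(f"Expected PEs sent home without imaging: {missed_pes:.0f}")
    print(f"Misses ever visible to the clinician:   {visible_errors:.1f}")

Under these assumptions, roughly six of those two hundred patients leave with an undetected PE, and the clinician witnesses, in expectation, a fraction of one. The lesson the system actually delivers is that their judgement was sound two hundred times.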

A patient with abdominal pain gets labelled as constipation and discharged. They re-present two days later with a small bowel obstruction, but to a different team at a different hospital. The original clinician never learns. An elderly patient with a fall is treated for a mechanical injury. The syncope that caused it is never investigated. The reasoning error is never visible because the outcome is never linked back to the decision.

In intensive care, the problem runs deeper still. You decide that further treatment is futile and recommend palliation. The patient dies. Was the prognosis truly hopeless, or did the decision to withdraw create the outcome it predicted? Some of the highest-stakes clinical decisions are structurally immune to outcome feedback.8

Error in clinical reasoning is not systematically measured. Because it's not measured, it's invisible. And because it's invisible, the vast majority of reasoning errors simply vanish, uncounted and uncorrected.


A system designed for the wrong assumption

This isn't about questioning anyone's competence. The expertise that develops over a career is real. It saves lives.

But we've built a system around the assumption that expertise progressively eliminates error. That twenty years of practice produces not just faster but more reliable judgement, to the point where cognitive aids and decision support become unnecessary. Tools for trainees, not for consultants.

The evidence says otherwise. Experience changes the nature of our errors, even as it reduces their frequency.2 It trades foundational knowledge gaps for complex reasoning traps. And the system, by stripping out outcome feedback and treating seniority as a proxy for accuracy, makes those reasoning traps harder to detect and harder to correct.

When people think about designing systems around human performance in healthcare, they tend to reach for fatigue management, checklists, and communication frameworks like ISBAR. Those things matter. But the calibration of expert judgement is a design problem too, and it's one that gets harder, not easier, the more experienced the clinician becomes.


References

  1. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Academic Medicine 2017; 92(1): 23–30.
  2. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Archives of Internal Medicine 2005; 165(13): 1493–1499.
  3. Mamede S, van Gog T, van den Berge K, et al. Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA 2010; 304(11): 1198–1203.
  4. Blades R, Sherwood J, Parsons J, Mudalige N. Cognitive bias in the clinical decision making of doctors. Future Healthcare Journal 2019; 6(Suppl 1): 108.
  5. Podbregar M, Voga G, Krivec B, et al. Should we confirm our clinical diagnostic certainty by autopsies? Intensive Care Medicine 2001; 27(11): 1750–1755.
  6. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. American Journal of Medicine 2008; 121(5 Suppl): S2–S23.
  7. Saposnik G, Redelmeier D, Ruff CC, Tobler PN. Cognitive biases associated with medical decisions: a systematic review. BMC Medical Informatics and Decision Making 2016; 16: 138.
  8. National Academies of Sciences, Engineering, and Medicine. Improving Diagnosis in Health Care. Washington, DC: National Academies Press; 2015.

Scott Santinon is an Intensive Care Fellow and Certified Practitioner in Human Factors in Healthcare, and the founder of Critical Condition.