“The Undoing Project,” published in 2016 by Michael Lewis, is the most interesting book I’ve read in ages.
Lewis chronicles the careers of Israeli psychologists Daniel Kahneman and Amos Tversky. In plain English and with minimal math, the author summarizes work done by these men and their colleagues on human decision-making.
I found a minor focus of the book – research into the fallibility of medical opinions – fascinating.
A couple of Lewis’ examples of medical fallibility reminded me of events from my own career:
As a medical student, I was assigned to “Julie’s” psychiatrist. Julie was psychotic – she had lost touch with reality. She was hospitalized and medicated, and after a few weeks of inpatient treatment, she seemed considerably improved.
Thinking she was ready to spend some time in the real world, Julie’s doctor gave her occasional passes to leave the psychiatric ward. Freed from constant supervision, Julie sneaked into the hospital’s parking garage and flung herself off the top floor.
I’ll never forget the morning after Julie died. Psychiatric nurses went about their tasks in silent gloom. Patients huddled in corners of their rooms. The doctor whose acumen I had so admired was shell-shocked. She had expected more of herself, and others expected more of her. Earlier research had hinted at how unrealistic such expectations were, but I knew nothing of it.
In the late 1960s, experienced psychiatrists and psychologists were given real patient scenarios and asked whether it would be safe to allow those patients to leave hospitals. Opinions were “all over the map.” Experts could predict suicide attempts no more accurately than graduate students.
During my pathology residency, pathologists assigned subjective “grades” to breast cancers based on their resemblance to normal tissue. Cells were well differentiated if they closely resembled normal breast cells, poorly differentiated if they looked very different from them, and moderately differentiated if their appearance occupied some subjective middle ground. Oncologists chose among treatment options based on these grades.
I doubted the accuracy and reproducibility of the grading system I’d been taught, and I cooked up an experiment to prove my point.
I pulled microscopic slides and staff pathologists’ reports on previously diagnosed cases of breast cancer and asked my mentors to re-examine the tumors. They disagreed with some of their original reports and commonly disagreed with each other.
Unknown to me, a similar study of medical decision-making had been done in the 1960s. Researchers asked radiologists how they determined from an X-ray whether a stomach ulcer was cancerous or benign. The doctors said they considered seven features, including the ulcer’s size, shape, contour and so on.
The researchers collected a series of stomach X-rays previously diagnosed by the radiologists and asked them to look at those films again. The researchers went me one better in that they also designed a simple computer program that gave each of the seven features equal weight. They pitted the computer against the radiologists.
Like the psychiatrists and my pathology mentors, the radiologists disagreed with each other. Individual radiologists disagreed with diagnoses they had made earlier. The computer beat not only the radiologists as a group but the best diagnostician of the lot.
This year, a computer program outperformed a group of 58 dermatologists when asked to distinguish between images of malignant melanomas and images of benign moles.
As far as I know, the opinions of forensic pathologists haven’t been studied. I suspect we’d do no better than our clinical colleagues. Reasonable medical certainty may be more elusive than I thought.
Dr. Carol J. Huser, a forensic pathologist, served as La Plata County coroner from 2003-12. She now lives in Florida and Maryland. Email her at email@example.com.