Skills, Knowledge, and Prediction

In the March 15, 2012, issue of The American Journal of Cardiology, Diamond and Kaul provided an insightful analysis of the complex relation between risk stratification schemes and therapeutic decision making. The investigators clearly identified some of the reasons why predicting response to treatment at the individual level is difficult. However, they concluded their report with a caution against “wholesale abandonment of evidence-based guidelines in favor of idiosyncratic clinical judgment,” which, in their opinion, runs the risk of “intellectual gerrymandering” and “wasteful utilization of high-cost technology.”


Proponents of quantitative methods of clinical assessment frequently portray critics as Luddites ready to “jettison” objective evaluation in favor of personal opinion rooted solely in clinical experience. This characterization is unfair. No one seriously suggests that knowledge obtained from the analysis of large cohort studies is of no value or should not guide practice. The argument is directed, rather, at the emphasis placed on “best practice” guidelines that give prominence to outcomes research and increasingly serve as the basis for rewarding or penalizing clinical performance.


For one thing, such guidelines presume a uniform, undifferentiated skill on the part of the physician, because the quantification of risk and outcome in large clinical studies must necessarily amalgamate decisions made by numerous individual doctors. The observed treatment effect is therefore a rendition of the work of a synthetic, “average” clinician.


Integral to the skill of a physician is the ability to grasp a patient’s psychological and socioeconomic makeup and to understand how these poorly quantifiable characteristics shape the patient’s preferences and desires. These features are essentially inaccessible to large cohort studies, yet they undoubtedly influence treatment response. Furthermore, in some cases, deviating from guidelines on the basis of such local knowledge is precisely what averts the “wasteful utilization” that would follow from the blind application of treatment algorithms.


To turn the tables on the expounders of guideline medicine, one could point out that no study has ever prospectively compared outcomes obtained by a given clinician with those achieved by following guideline-derived decision rules. Such a “Kasparov versus Deep Blue” type of experiment is obviously impossible to carry out, but conceptualizing the match helps us recognize what is at stake in the debate.


Quantitative analyses of large clinical studies yield a highly instructive body of knowledge from which doctors should continually draw. However, as Diamond and Kaul noted, these complex methods can mislead even experts. Clinical epidemiologists, guideline authors, and policy makers should not discount the fact that knowledge acquired at the bedside by a skillful clinician can mitigate the difficulties of statistical prediction.
