| From | To |
| --- | --- |
| **Doctor–patient relations** | |
| Doctors’ paternalism | Respect for patient autonomy |
| No patient access to their medical records | Patients encouraged to see their medical records |
| Disease orientation | Patient orientation |
| Goal of treatment: prolongation of life expectancy | Goal of treatment: prolongation of quality-adjusted life years (QALY) |
| Patients as passive recipients of care | Patients as partners in self-care |
| **Clinical reasoning** | |
| Denial of clinical uncertainty | Acceptance of clinical uncertainty |
| Intuitive decision-making | Analytic decision-making |
| Decisions based on unsystematic experience and pathophysiologic rationale | Evidence-based reasoning |
| Biomedical model of clinical practice | Biopsychosocial model of clinical practice |
| **Doctor–society relations** | |
| Accountability to peers | Accountability to lay institutions |
| Unrestricted use of resources; loyalty to the patient at hand | Parsimonious use of resources; loyalty to all patients |
| Solo practice | Managed care |
| **Medical education** | |
| Knowledge of subject matter | Ability to retrieve information in real time |
| Unsystematic acquisition of skills, including patient interviewing, by imitating role models | Systematic acquisition of skills by supervised practice and simulations; introduction of teaching programs in patient interviewing |
| Orientation to biomedicine | Inclusion of the social and behavioral sciences in the teaching program |
| In-hospital clinical training | Clinical training in community medical settings |
Doctor–Patient Relations
The main change in doctor–patient relations was the shift from doctors’ paternalism to respect for patient autonomy. In the past, doctors rarely shared health-related information with their patients, and almost never involved them in clinical decisions. Patients were not permitted to see their own hospital charts, and a patient’s attempt to read her/his medical record was viewed as an impertinent intrusion into a doctor’s private notes. A 1961 survey of US doctors cited 90% as not telling the truth to patients with a malignant disease [1]. At that time, medical students frequently observed doctors telling patients with a malignancy that they had an inflammation and, more rarely, they also witnessed clinical experiments that were carried out without a patient’s consent. I am aware, of course, that while deception may have been excused as a misguided attempt to protect patients, there is no justification for experimenting on humans without their permission. Yet both examples reflect the belief that doctors knew better and that clinical decisions were to be guided by the doctor’s assessment of the patient’s needs.
With the patients’ tacit consent, this belief had dominated medical practice for more than 2000 years. Hippocrates advocated “concealing most things from the patient … [and] revealing nothing of the patient’s future or present condition” [2]. In his 1871 medical school graduation address, Oliver Wendell Holmes was cited as stating: “Your patient has no more right to all the truth you know than he has to all the medicines in your handbag….” [3]. But since the 1970s, respect for patient autonomy has been part of the ethical doctrine of clinical practice [4]. Less than 20 years after the 1961 survey that I referred to earlier, a similar survey cited 97% of the responding US physicians as telling their cancer patients the truth [5]. Today, the information that doctors provide has evolved from what they thought their patients needed to know to a shared model involving both doctors and patients in clinical decision-making [6]. Patients are not only permitted to see their medical records; they are encouraged to monitor the documentation of their care on the Web and even to add their own input to the records [7].
I am uncertain what led to the change. Possible causes may be the introduction of treatment modalities that require patient cooperation; the results of surveys indicating that most patients wish to be informed about their illness [8]; the finding that respect for patient autonomy is associated with patient trust and satisfaction [9]; or the general feeling that, however well-intentioned, doctors’ paternalism creates dependence, which is inconsistent with the current orientation towards respect for individual dignity. Whatever the cause, the change led doctors to consider not only what patients need, but also what they want.
A consideration of “what patients want” requires that doctors gain insight into the patient’s preferences. In the 1950s, doctors elicited disease-related data by asking such questions as “Was your pain intermittent or continuous?” but not “What causes you the most worry?” Today, on the other hand, teaching programs in patient interviewing emphasize not only the acquisition of the disease-related history, but also the sharing of health-related information, allaying the patient’s anxiety, and apprehending her/his preferences and needs for information. From the disease orientation of the 1950s, we have come to recognize the importance of a patient orientation [10].
The main consequence of this newly acquired patient orientation was the realization that patients and doctors often disagree when choosing among alternative treatment options. In the 1950s, the objective of treatment was to prolong a patient’s life; today, doctors consider a patient’s preference for quality of life, even when it may reduce the chances of survival. The measure of the utility of treatment evolved from life expectancy to quality-adjusted life years (QALY). In the 1950s, doctors believed that patients were too anxious and poorly informed to be trusted even with measuring their own weight or blood pressure. Today, on the other hand, patients with diabetes are taught how to adjust their insulin treatment to their self-tested blood sugar level, and patients with bronchial asthma are taught to adjust their corticosteroid medication to their self-tested pulmonary function. From a passive recipient of care, the patient has evolved into a partner in self-treatment.
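The QALY arithmetic mentioned above is simple: each period of survival is weighted by a utility score between 0 (death) and 1 (full health). The sketch below is a minimal illustration with hypothetical survival times and utility weights, not a clinical recommendation.

```python
# Minimal illustration of the QALY arithmetic; the survival times and
# utility weights below are hypothetical, chosen only to show the idea.

def qalys(years: float, utility: float) -> float:
    """Quality-adjusted life years: survival weighted by quality of life (0-1)."""
    return years * utility

# Option A: aggressive treatment -- longer survival at lower quality of life.
option_a = qalys(years=10, utility=0.6)   # 6.0 QALYs

# Option B: conservative treatment -- shorter survival at higher quality of life.
option_b = qalys(years=7, utility=0.9)    # 6.3 QALYs

print(f"Option A: {option_a:.1f} QALYs; Option B: {option_b:.1f} QALYs")
```

In this hypothetical comparison, the option with the shorter life expectancy yields more quality-adjusted life years, which is precisely the kind of trade-off that the shift from life expectancy to QALYs is meant to capture.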
Clinical Reasoning
Throughout history, clinical reasoning has been guided by theoretical models for understanding diseases, deducing treatment, and acquiring further knowledge. As long as these models satisfy clinical needs and are consistent with experience, they are used to guide practice. A change in the prevailing model occurs only when it can no longer accommodate new data [11]. In other words, theory informs practice and the feedback from practice modifies theoretical constructs. Observed incongruities between theory and experience should not be viewed as a source of confusion, but rather as opportunities for inquiry and learning.
Hippocrates’ humoral model dominated medical reasoning for about two millennia. It assumed the existence of four humors (blood, phlegm, yellow bile, and black bile), and viewed disease as a disequilibrium between them, which was due to heredity, nutrition, lifestyle, or the weather. Treatment was an attempt to restore equilibrium by bloodletting, purgatives, and emetics; these interventions remained the standard of treatment until the nineteenth century. However, the humoral model could not accommodate the discovery of infectious agents and nutritional deficiencies. This led to the advent of the biomedical model, which has guided medical education, clinical practice, and research since the turn of the twentieth century.
Similar to the humoral model, the biomedical model is based on causal reasoning from derangements of the organism that are conceptualized as causes, to manifestations of disease that are conceptualized as effects. Therefore, both models are consistent with the human tendency to seek causal links [12]. However, the models differ in the way that they define the causes of disease: The biomedical model describes diseases as the consequence of observable structural or biochemical disorders, rather than of disequilibria among hypothetical humors.
The biomedical model must certainly be credited for advances in patient care. But in recent decades, its premises appear to be inconsistent with some clinical observations. This has led to several changes in the approach to clinical reasoning: specifically, to a shift from denial to acceptance of uncertainty, from intuitive to analytic decision-making, and from the biomedical to the biopsychosocial model of clinical practice.
From Denial to Acceptance of Uncertainty
The biomedical model views causes (etiologic agents) as leading inevitably to their consequences (disease). Within this model, chance and uncertainty have a very small role. In the 1950s, doctors downplayed notions of probability and statistical inference from epidemiological data. The conventional wisdom was that epidemiology dealt with populations, and as such, was incompatible with clinical medicine, which dealt with individuals. The 1965 edition of DeGowin’s Introduction to Clinical Medicine stated that “…statistical methods can only be applied to a population of thousands … [T]he relative incidence of two diseases is completely irrelevant to … diagnosis. A patient either has or has not a disease” [13].
My clinical tutors similarly rejected the application of statistical inference to clinical practice by using expressions such as: “Nobody is 70% pregnant,” and “Every patient is unique; I learn from a single case more than from epidemiological studies of a thousand patients.” The deterministic reasoning of the biomedical model also downplayed notions of uncertainty. I was taught that “Nothing is left to chance if the patient is properly worked up.” As late as the 1980s, several authors claimed that students were encouraged to ignore uncertainty rather than accept and deal with it [14–16], and a 1992 review of the sociological literature concluded that “denial of uncertainty was one of the most consistent observations made by sociologists studying medical training” [17].
One of the first indications that this deterministic approach might be inadequate was the observation that conditions such as diabetes and smoking increased the risk of ischemic heart disease, although they were not etiologic agents of atheromatosis. This observation suggested that disease was not the result of a single cause, but rather of a convergence of risk factors. Today, we speak of risk indicators rather than of etiologic causes; a patient is more or less likely to have a disease, rather than either having it or not. In 1993, DeGowin’s Diagnostic Examination replaced the rejection of the application of statistics to individual patients with the ambiguous statement “…Using the probability theory to evaluate the sensitivity and specificity of clinical findings … may serve to strengthen the credibility of these parts of the diagnostic examination as future clinical trials demonstrate their validity” [18]. In the 2009 edition, phrases such as “relative disease probabilities,” “more or less common diseases,” and “incidence and prevalence of diseases in the patient’s age group” appear repeatedly in the chapter on diagnosis [19].
A change also occurred in doctors’ attitudes to diagnostic tests. In the 1950s, teaching emphasized the importance of a thorough accrual of a patient’s history and of physical and ancillary data. Doctors believed that the more data obtained, the better the chances of making a correct diagnosis. Early detection of disease was thought always to improve the likelihood of cure, and therefore, students were required to perform a thorough review of systems and a head-to-toe physical examination. Today, doctors are aware that false-positive test results may confound diagnostic reasoning, and consequently, they are selective in their choice of diagnostic tests and in screening asymptomatic persons for early detection of disease. Paradoxically, the increase in biomedical knowledge seems to have enhanced the awareness of its limitations and led to a transition from right/wrong determinism to acceptance of chance and uncertainty in clinical practice.
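Why false-positive results confound screening can be made concrete with Bayes’ theorem. The sketch below uses hypothetical values of sensitivity, specificity, and prevalence, chosen only to show how low prevalence erodes the predictive value of a positive test.

```python
# Positive predictive value (PPV) by Bayes' theorem; the sensitivity,
# specificity, and prevalence below are hypothetical illustration values.

def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """Probability that a person with a positive test result actually has the disease."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A seemingly accurate test (95% sensitive, 95% specific) applied to an
# asymptomatic population in which only 1% actually have the disease:
ppv = positive_predictive_value(sensitivity=0.95, specificity=0.95, prevalence=0.01)
print(f"PPV = {ppv:.0%}")  # about 16%: most positive results are false positives
```

Even with these favorable test characteristics, roughly five of every six positive results in this hypothetical screening setting would be false positives, which is why indiscriminate testing of asymptomatic persons can mislead rather than help.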
From Intuitive to Analytic Decision-Making
Webster’s dictionary defines “intuition” as a perception that is independent of reasoning. A judgment is said to be intuitive when it is made rapidly and without apparent effort. In medicine, the term “intuition” has acquired several meanings. The first equates intuition with an ability to recognize patterns or configurations of disease manifestations already encountered. The second refers to judgments arising from simplifying heuristics (mental shortcuts), elaborated in Chap. 14. Finally, the term intuition is used to denote mastery of the “art of medicine,” which is a mystical ability to make clinical decisions that are not amenable to analysis or explication.
To prevent confusion, I shall use the terms “pattern recognition” to refer to an ability to recognize previously encountered disease manifestations; “heuristics” to refer to cognitive shortcuts; and “intuitive reasoning” or “art of medicine” to refer to the ability to make rapid decisions that are not amenable to explication. In the 1950s, this latter ability was characterized by confidence in its accuracy, precluding any attempt to improve clinical reasoning. Statements by experienced doctors beginning with “in my judgment” implied the closure of a clinical debate.
Such authoritarian attitudes are hard to comprehend in today’s climate of evidence-based practice. To understand doctors’ attitudes to the art of medicine, one must consider its strengths. First, the belief that in the indeterminate realm of clinical practice there is an absolute truth is extremely appealing. Conformity with authority has been identified as a means by which medical students and residents control anxieties generated by the complexity of clinical practice [20, 21]. Second, intuitive reasoning satisfied clinical needs. In the 1950s, doctors were not aware that there was anything wrong in clinical decision-making, or that it was in need of improvement.
However, since the 1950s, clinical decision-making has become more complicated. The number of diagnostic and therapeutic options has expanded, and the choice among them is no longer easy, even for experienced clinicians. Doctors have to consider the trade-off between the benefit of medical interventions and their risk: the risk of false-positive or false-negative diagnostic test results, and the risk of undesirable side effects of treatment. Health-care expenditures have increased, and this necessitates consideration of trade-offs between cost and effectiveness. In the present reality of unlimited demand and finite resources, economic appraisal is increasingly being used to inform health-care decision-making. Finally, doctors have to balance conflicting values. The medical code of ethics has expanded to include not only the principles of non-maleficence (“do no harm”) and beneficence (“do good”), but also respect for a patient’s autonomy and justice (“be fair in distributing health-care resources to those who need them”). Consequently, doctors today are much more likely to confront ethical dilemmas, i.e., to find themselves in situations in which they cannot honor one ethical principle without violating another.
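One common form of the economic appraisal mentioned above is the incremental cost-effectiveness ratio, the extra cost of a new intervention per extra unit of health benefit gained. The sketch below is a minimal illustration with hypothetical costs and effects.

```python
# Incremental cost-effectiveness ratio (ICER); all numbers are hypothetical.

def icer(cost_new: float, cost_old: float,
         effect_new: float, effect_old: float) -> float:
    """Extra cost per extra unit of benefit (e.g., per QALY gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# A new treatment costs more than standard care but yields more QALYs:
ratio = icer(cost_new=30_000, cost_old=10_000,   # treatment costs (currency units)
             effect_new=6.5, effect_old=6.0)     # expected QALYs per patient
print(f"ICER = {ratio:,.0f} per QALY gained")    # 40,000 per QALY gained
```

Whether 40,000 per QALY gained is acceptable is not answered by the arithmetic; it depends on the threshold a health-care system is willing to pay, which is exactly the trade-off between cost and effectiveness referred to above.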
Obviously, these trade-offs can no longer be resolved by intuitive reasoning. First, claims that the “art of medicine” eludes explication are no longer acceptable when patients, students, colleagues, and courts of law ask us to justify our decisions. Second, judgments based on simplifying heuristics may be confounded by cognitive biases [22, 23]. Therefore, since the 1970s, there have been sustained efforts to gain insight into the reasoning of expert clinicians, and to base clinical decision-making on a critical appraisal of risks and benefits.
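As an illustration of what an analytic, rather than intuitive, appraisal of risks and benefits can look like, the sketch below compares two hypothetical management options by their expected utilities; the probabilities and utility values are invented for the example.

```python
# Expected-utility comparison of two management options; the probabilities
# and utilities are hypothetical values invented for this illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Option 1: operate -- high chance of cure, small chance of operative harm.
operate = expected_utility([(0.90, 0.90), (0.10, 0.40)])

# Option 2: medical management -- moderate chance of remission, no operative risk.
medical = expected_utility([(0.60, 0.90), (0.40, 0.55)])

print(f"operate: {operate:.2f}, medical management: {medical:.2f}")
# operate: 0.85, medical management: 0.76 in this hypothetical comparison
```

The point of such a calculation is not the particular numbers, but that the assumptions behind a decision are made explicit and can therefore be examined, criticized, and revised.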
From Pathophysiologic Rationale to Evidence-Based Reasoning
Another assumption of the biomedical model was that patient care should be deduced from disease etiology and pathophysiology. In the 1950s, the justification for treating patients with left ventricular failure with digitalis was its positive inotropic effect on the heart muscle. The justification for treating patients with peptic ulcer with aluminum salts was their antacid activity. However, in recent decades, doctors have become aware that pathophysiologic rationale does not always produce the expected outcomes. Consequently, there has been a growing tendency to base clinical practice on empirical evidence, and in 1992, Guyatt et al. coined the phrase “evidence-based medicine” (EBM) [24].
Unlike the traditional paradigm, EBM posits that intuition, unsystematic experience, and pathophysiologic rationale are insufficient for clinical decision-making. EBM places a lower value on authority and stresses the value of evidence from clinical research. From deductive reasoning we have moved to evidence-based reasoning: Digitalis treatment of heart failure and antacid treatment of peptic ulcer are justified not by their expected physiological effect, but rather by evidence provided by clinical trials.
From the Biomedical to the Biopsychosocial Model of Clinical Practice
The main flaw of the biomedical model was the assumption that all diseases are structural or biochemical dysfunctions of the body, and amenable to treatment only by surgical or pharmacological means. This assumption excluded the patient’s attributes as a person. It is true that, as early as the 1930s, there were attempts to correlate personality types with diseases, such as peptic ulcer or bronchial asthma. However, these attempts were mostly based on anecdotal observations. In the 1950s, the prevailing attitude towards psychosomatic medicine was one of mistrust.
A change in this attitude was brought about by evidence of the association between life events and morbidity [25], and between measures of socioeconomic status (income, education, and housing) and mortality [26–28]. More recent studies have detected an association between income inequality (rather than income per se) and population health [29]. These findings led to the recognition that some risk indicators of disease can be described as psychosocial, and to the adoption of Engel’s biopsychosocial model of clinical reasoning and practice [30]. The latter model encourages physicians to look into the biomedical and psychosocial components of a patient’s predicament and provide support and treatment for both. The model’s premise is that a patient’s complaints cannot be considered in isolation from the psychosocial context, and that disease cannot be divorced from a patient’s surroundings.
The nature of the human response to psychosocial stressors remains a subject of controversy. On the one hand, the theory of psychosomatic specificity assumes a one-to-one relationship between a specific psychosocial configuration and a disease, and views psychosomatic research as a quest for links between stressors and specific diseases. According to this theory, a specific set of circumstances (such as bereavement or loss of a job) in association with specific personality traits (such as being submissive or manipulative) would result in a specific disease (such as bronchial asthma or peptic ulcer) [31]. For example, hostility has been linked with hypertension, and fear of separation with bronchial asthma. On the other hand, the association between life events and morbidity, and between socioeconomic status and mortality, supports the theory of psychosomatic nonspecificity. According to this theory, any life event may increase the risk of any disease, irrespective of personality traits. The association between disease and psychosocial determinants is still in need of study.