Integrative Model of Circulation: A Synthesis

Professor of Anesthesiology, Albany Medical College, Albany, NY, USA



Keywords: Berlin school of physiology · Physiologic reductionism · Causality · Organism versus mechanism · Epistemology · Goethean archetype · Vitalism · Unifying principle · Emergent properties · Hierarchical ordering · Quantum mechanics · Self-organizing principle · Molecular biology · Systems biology · Neo-Darwinism · Upward causation · Downward causation · Boundary conditions · Formative forces · Somatic integrative unity · Level of life · Soul level · I-organization

To advance our discussion to a level that would render the hitherto described circulatory phenomena—as paradoxical as they may appear—more intelligible, it behooves us to look at the source of ideas and the nature of scientific inquiry that contributed to the edifice of the classic, pressure-propulsion circulation model. The question can legitimately be asked: how is it possible that, after well over a century of rigorous experimental work in all branches of cardiovascular physiology, the central idea of what circulation is can deviate from the observed phenomena at such a fundamental level? “The method is everything,” proclaimed Carl Ludwig [1]. And so it is! Thus, when a method is chosen by which a complex biological phenomenon is reduced to a mechanical model based on physical-chemical causality, the accomplishments eventually give way to the law of diminishing returns. There is yet another, less obvious but equally important consequence of this approach, namely that it runs the risk of “obliterating in content, as well as in method of what is specifically human in the human being and what humans experience in the dimensions of life, soul and spirit” [2]. In a similar vein Horton notes that “…increasingly desiccated forms (of science) are widening the gap between human reality and public understanding. This disconnection between science and humankind has stripped the moral urgency from scientific inquiry” and emphatically states that “anyone motivated by a wish to use science for public good should be obsessed by method” [3] (for an in-depth analysis of the scientific method, see Refs. [4, 5]). If the cardiovascular system is the fulcrum of evolutionary development, as suggested by the comparative approach followed in our explorations (cf. Fig. 11.2), it should lead to the discovery of circulatory phenomena that are specifically human.

In this final chapter, we first look into the intellectual milieu that served as a backdrop to the physical and biological sciences in the mid-nineteenth century and charted a course of biophysical and physiological research that continues to dominate the field to this day. The assertion by the “Berlin School” of physiology that the cause of life must be sought in the ordinary laws that govern inorganic nature opened a vast field of material processes underlying biological phenomena to experimental investigation. The downside of this approach, however, was that it defined physiology as nothing more than the “physics and chemistry of life” [6]. As an alternative to this view rose the “organicism” promoted by W. Cannon, C. Waddington, L. von Bertalanffy,1 J. Needham, and other members of the Oxford “Theoretical Biology Club” who, in an attempt to consider the organism as a whole, introduced dynamic concepts such as homeostasis, integration, and organization in order to describe biological phenomena at the intermediate and systemic levels [6]. The migration of physical scientists—armed with quantitative techniques and exact analytical methods of investigation—to departments of biology and physiology resulted in the burgeoning new fields of biophysics, molecular biology, and genetics. This led to a further splintering of the biological sciences and medicine into “-omics”-level disciplines (genomics, pharmacogenomics, proteomics, etc.), with the aim of integrating biological components by means of computer modeling of “big data” at the systems level in an effort to deliver “personalized medicine” [8]. To what extent this goal has been achieved remains an open question.

In addition to the above, a stream that predated the “Berlin School” arose in opposition to the vitalistic theories and the reductionism in biological sciences that had been palpable since the middle of the eighteenth century. It was led by J. W. von Goethe (1749–1832) and a group of nature philosophers such as J. F. Blumenbach (1752–1840), A. von Humboldt (1769–1859), C. G. Carus (1789–1869), and others who took part in the ongoing debate on non-vitalistic alternatives and the formulation of developmental laws of organic form and structure [9, 10]. In place of a vitalistic or mechanistic conception of living nature, Goethe proposed that inherent in an organism is the idea, the archetype, or lawfulness, which corresponds to natural laws in inorganic nature. Goethe carefully documented his experimental findings and described a dynamic cognitive method by which the “type” can become objective empirical knowledge [11]. By applying this method, R. Steiner, philosopher of science, editor of Goethe’s scientific writings, and author of seminal works on the theory of knowledge [12], discovered the key circulatory phenomena which gave the impetus for the present investigation (cf. Sect. 16.2).

To conclude, it will be argued that the cardiovascular system can be considered as a self-organizing emergent structure at different levels of organization which are compatible with major phylogenetic transitions in the development of species and in the ontogeny of an individual organism as outlined in Chap. 11.

25.1 The “Berlin School” of Physiology

We have already alluded to the importance of the Leipzig Physiological Institute for the advancement of physiology (cf. Sect. 16.1). Carl Ludwig (1816–1895), its founder and foremost representative, was the first to introduce the systematic study of isolated organs (heart, muscle, kidney, liver, and lung) and made important contributions in virtually every field of physiology: the “all-or-none” law of the heart, the absolute refractory period, post-extrasystolic potentiation, the staircase phenomenon, and the dependence of the heart’s function on oxidative metabolism and of its strength of contraction on the degree of its filling (later known as the Frank–Starling law) were only some of the better-known discoveries in the field of cardiovascular physiology ascribed to the Institute [13]. Ludwig, a man of practical genius, adapted the mercury manometer for measuring blood pressure, invented a direct method for recording blood flow, the Stromuhr (lit., from German, the stream-clock), and was the first to use continuous graphic recording in experimental settings, with the kymograph (Fig. 23.3).

In addition to its voluminous research output, the Institute was a training ground for some of the better-known physiologists of the generation, such as E. Cyon, I. Pavlov, H. P. Bowditch, F. P. Mall, J. J. Abel, A. Fick (who discovered the law of diffusion and the Fick method of measuring cardiac output), O. Frank, and numerous others who, working under Ludwig’s personal tutelage, benefited from his insights and scrupulous methods. According to Cranefield, American physiology considered itself a descendant of Ludwig’s [6].
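The Fick method mentioned in passing above rests on a simple mass balance: cardiac output equals oxygen consumption divided by the arteriovenous oxygen content difference. A minimal sketch, using typical textbook resting values for illustration (the specific numbers are assumptions, not taken from the text):

```python
def fick_cardiac_output(vo2_ml_min, cao2_ml_l, cvo2_ml_l):
    """Cardiac output (L blood/min) by the Fick principle:
    O2 consumption (mL O2/min) divided by the difference between
    arterial and mixed-venous O2 content (mL O2/L blood)."""
    return vo2_ml_min / (cao2_ml_l - cvo2_ml_l)

# Illustrative resting values: VO2 = 250 mL/min,
# arterial O2 content = 200 mL/L, mixed-venous O2 content = 150 mL/L
co = fick_cardiac_output(250, 200, 150)
print(co)  # → 5.0 (L/min)
```

The units cancel to liters of blood per minute because every liter of blood passing the lungs picks up exactly the arteriovenous content difference in oxygen.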

Lesser known, however, is the fact that Ludwig and his associates Emil Du Bois-Reymond, Hermann von Helmholtz, and Ernst von Brücke, all eminent physiologists, were at the core of a nascent movement that had given itself the task of investigating physiological phenomena in accordance with rigorous physical methods and thus rooting out the last remnants of nineteenth-century vitalism2 from medicine and biology. The four convened in Berlin in 1847 (hence, the “Berlin School”) and, in Ludwig’s words, vowed to “…constitute physiology on a chemical-physical foundation, and give it equal scientific rank with physics” [6]. The intent of this movement was expressed by Du Bois-Reymond as early as 1842 in a letter describing his and Brücke’s resolve to explain vegetative processes in the organism by physical means:

Brücke and I pledged a solemn oath to put in power this truth: No other forces than the common physical-chemical ones are active within the organism; in those cases, which cannot at the time be explained by these forces, one has either to find the specific way or form of their action by means of the physical-mathematical method, or to assume new forces equal in dignity to the chemical-physical forces inherent in matter, reducible to the force of attraction or repulsion. (Du Bois-Reymond, quoted in Ref. [2])

However, the primary argument for advocating mechanistic causality and adopting “physiological reductionism” was epistemological (episteme: Greek for knowledge; i.e., the science of knowing) rather than methodological. It grew out of the contemporary intellectual climate of neo-Kantianism3 that engulfed Europe at the beginning of the nineteenth century with the promise to usher in an undogmatic, direct path of scientific progress [17]. Prior to Kant, the dualism between the external and the inner worlds took the form of a philosophic debate between rationalists and empiricists. The former believed that knowledge is the result of rational thought, while the latter maintained that one could ‘know’ the world only through experience. Kant, the founder of the modern theory of knowledge, brought about a synthesis of these views and developed a theory of knowledge that profoundly affected the scientific method, in the sense that the certainty of knowing depends on the capacities of the human mind rather than on dogmatic truths forced upon it from without.

For Kant, the laws discovered by natural science and mathematics cannot be found in the external world but only in the human mind. The process of cognition of reality involves a synthesis of sense percepts (colors, tones, shapes, and degrees of heat) and the so-called “things-in-themselves” of which nothing can be known, except for the fact that they exist. The way these “things-appear-to-us” is influenced by the sense organs and the mind and can, therefore, only be subjective and hence unreal. Thus, the perceived object, as well as the perceiver, lack objective reality and are mere epiphenomena of the mind [18]. For the sake of establishing the certainty of mathematics and scientific truths, Kant transferred all of nature with its laws into laws of the mind and thus “erected insurmountable barriers to the faculty of knowledge” [19].

Such an interpretation of reality found an echo in the descriptions of the tasks of science expressed by Helmholtz in his 1853 essay “On Goethe’s Scientific Research”:

For a natural phenomenon is not considered in physical science to be fully explained until you have traced it back to the ultimate forces which are concerned in its production and its maintenance. Now, as we can never become cognizant of forces as forces, but only of their effects, we are compelled in every explanation of natural phenomena to leave the sphere of the sense, and to pass to things which are not objects of sense and are defined only by abstract conceptions. […In the final instance] this is a world of invisible atoms and movements, of attractive and repulsive forces. (Quoted in Ref. [20])

A logical outcome of this line of reasoning was the development of a colossal hypothesis system in science (present to this day) which does not investigate how nature is, but rather how it can be reconstructed from postulates and assumptions. Kant’s influence on the scientific method thus led to a paradoxical situation where science, by virtue of its method (its supposed strength), deals with fragments which it attempts to piece together. The physicist and philosopher of science Henri Bortoft expressed it as follows:

Science believes itself to be objective but is in essence subjective because it is compelled to answer questions which the scientist himself has formulated. Scientists never notice the circularity of this because they believe they hear the voice of “nature” speaking, not realizing that it is the transported echo of their own voice. [21] (emphasis by H.B.)

Du Bois-Reymond, credited with the discovery of the nerve action potential, authored several books, including the influential “Investigations in Animal Electricity” (1848), which he continued to publish in parts over three decades. In one passage, he refers to the absolute scope of the physical analytical method, which would not only determine biological processes but possibly predict human actions (quoted in [5]):

…those who are of one mind with me will not permit themselves to be shaken in the conviction, that nevertheless, if only our methods sufficed, an analytical method on the general life process would be possible. This conviction rests on the insight, possessed even by Aristotle, that all changes in the material world within our conception reduce to motions. But again, all motions may ultimately be divided into such as result in one direction or other along the straight line connecting two hypothetical particles. Therefore, to such simple motions must even the process within the organic state be ultimately reducible. This reduction would indeed initiate an analytical mechanics of these processes. One sees, therefore, that if the difficulty of the analysis did not exceed our ability, analytical mechanics fundamentally would extend even to the problem of personal freedom…4

The consequence of the reductionist approach was that sense perceptions were interpreted as an illusion, and the hypothetical world of atoms and molecules as the reality from which everything else can be derived on the basis of mechanical (atomic) and mathematical models. The introduction of a quantitative, mathematical method into medicine and biology was based on the distinction between “primary” qualities, such as size, shape, and motion, and “secondary” qualities, e.g., color, tone, temperature, scent, and touch,5 of which only the former can be directly quantified and expressed mathematically, whereas the latter were deemed merely subjective experiences without “objective” value. This led to the dualistic (Cartesian) split in science by which a direct encounter with the sense world is deemed unreal (an illusion) and reality needs to be reconstructed through intellectual reasoning [21].

To set the proposed method on a sound epistemological footing, Du Bois-Reymond called upon Aristotle’s causal principles, but acknowledged only two of the four.6 While linear causality, which provides the basis for calculability, is appropriate when applied to physical phenomena, when transposed to an organism it reduces the latter to a mechanism and attempts to explain it by means of its parts. And if, by extension, “no other forces than the common physical-chemical ones are active within the organism,” as posited by Du Bois-Reymond, then sense perceptions and thoughts, too, are reduced to electro-chemical emanations of the nervous system.

25.2 Organism Versus Mechanism

The notion that the reductionist method used in physical sciences can equally be applied to an organism was countered by several philosophers of science and contemporaries of the Berlin group (cf. Footnote 2), in particular by Rudolf Steiner (1861–1925) who argued that a method which views an organism as a complex biological machinery is unable to penetrate to its essence or provide its causal explanation. Like Goethe before him [11], Steiner maintained that underlying an organism is an active idea (the type or an archetype) which organizes its parts in the process of self-organization. In a treatise on scientific methodology, Steiner explains the difference between the physical and biological systems:

This is precisely the contrast between an organism and a machine. In a machine, everything is the interaction of the parts. Nothing real exists in a machine itself other than this interaction. The unifying principle, which governs the working together of the parts, is lacking in the object itself, and it lies outside of it in the head of its builder as a plan. Only the most-extreme short-sightedness can deny that the difference between an organism and a mechanism lies precisely in the fact that the principle causing the interrelationship of the parts is, with respect to a mechanism, present only externally (abstractly), whereas with respect to an organism, this principle gains real existence within the thing itself. Thus, the sense perceptible components of an organism also do not appear out of one another as a mere sequence, but rather as though governed by that inner principle…. In this respect it is no more sense-perceptible than the plan in the builder’s head that is also there only for the mind; this principle is, in fact, essentially that plan, only that plan had now been drawn into the inner being of the entity and no longer carries out its activities through the mediation of a third party—the builder—but rather does this directly itself. [22]

What is the nature of the unifying principle, the archetype, referred to by Goethe in light of contemporary evolutionary biology? According to Riegner, Goethe’s dynamic typological thinking is a “forerunner to modern evolutionary developmental biology” that can be viewed as the ideal, organizing evolutionary principle, the lawful integration of genotype and phenotype which plays a role in morphogenesis and development [23]. This lawfulness is the general principle which governs all specific cases.

In Steiner’s words:

The type is thus the idea of the organism: the animality in the animal, the general plant in the specific plants (…) it is something entirely ‘fluid’ out of which may be derived all separate species and families, which we may consider sub-types, specialized types. The types do not exclude the (Darwinian) theory of descent (…) it is only a rational protest against the idea that organic evolution proceeds merely in the successively appearing objective (sense-perceptible) forms. It is the type that establishes the interconnection amid all infinite multiplicity. It is the inner aspect of that which we experience as the outer forms of living creatures. The Darwinian theory presupposes the type. [24] (see Refs. [14, 25] for detailed discussion about the type)

Steiner further argued that “secondary” qualities are perceived by the senses on an object in the same way as the “primary” qualities and that from an epistemological perspective are thus equally objective. Hence, there is no reason to give preference to one over the other:

Magnitude, shape, location, motion, force, etc., are perceptions in exactly the same sense as light, colors, sounds, odors, sensations of taste, warmth, cold, etc. Someone who isolates the magnitude of a thing from its other characteristics and looks at it by itself no longer has to do with a real thing, but only with an abstraction of the intellect. It is the most nonsensical thing imaginable to ascribe a different degree of reality to an abstraction drawn from a sense perception than to a thing of sense perception itself. Spatial and numerical relationships have no advantage over other sense perceptions save for their greater simplicity and surveyability. It is upon this simplicity and surveyability that the certainty of the mathematical sciences rests. [26] (emphasis by R.S.)

When a view of nature insists on expressing the corporeal world in a mathematical or mechanical way, maintained Steiner, it does so because of ease and comfort for our thinking. This explains, at least in part, the above-mentioned reasoning for choosing the mathematical method by Du Bois-Reymond and Helmholtz. More importantly, by reducing macroscopic phenomena to the “world of invisible atoms and movements and attractive and repulsive forces,” the possibility of retaining anything that is objectively present entirely disappears, and it is difficult to imagine how even the primary qualities, reduced to hypothetical atoms and vibrations, have any objective value.

In light of this explanation, let us briefly revisit the experiment of “active fluids” (Sect. 23.​3.​2). A rational explanation of such self-driven flows calls for a twofold system of causes: (a) procurement of individual parts: cylindrical channels, kinesin clusters, ATP, microtubules, etc., and (b) arrangement of these components according to the experimental protocol (active idea or a plan) in a way that shows directional movement of kinesin clusters. It is evident that the unifying principle which demonstrates a self-driven flow originates in the mind of the experimenter.

In an organism, on the other hand, the material components (cells and organs), the effective forces, as well as the unifying principle (the underlying lawfulness or plan of the particular organism) form a unity. That is to say, in a chemical (or mechanical) system the structure, i.e., the parts, always precedes and is separate from its function, whereas in a living system form and function manifest simultaneously. Thus, in addition to consisting of molecules, organelles, cells, and organs, “the organism contains a second active system which permeates the first at a higher level to it because it creates the order in the first” [14]. In the sense of Aristotelian epistemology, this second system is the formal, but also the final, cause enacted in time (cf. Footnote 6).

The task of the biologist, then, is to grasp the active, organizing principle of an organism (the type) with the same exactness with which the physicist formulates the natural (physical) law. For Bortoft, the organizing principle is no longer an idea which is external to the phenomenon and imposed on it by the experimenter; rather, it appears as an active idea experienced in the mind of the researcher (in the sense described by Steiner in the above quote, Ref. [19]). The experience is that of entering the dimension of the phenomenon itself:

The organizing principle of the phenomenon itself, which is its intrinsic necessity, comes into expression in the activity of thinking when this consists in trying to think the phenomenon concretely. [27]

Rather than relying on vague intuition, Goethe’s cognitive approach is based on the enhancement of observational faculties through a rigorous and persistent method by which the investigator’s mind functions as an “organ of perception” rather than merely as a medium of ordinary thought [28–30]. In Goethe’s own words:

To grasp the phenomenon, to fix them to experiments, to arrange the experiences and to know the possible modes of representation of them—the first attentively, the second as accurately, the third as exhaustively as possible and the last with sufficient many-sidedness—demands molding of man’s poor ego so great that I never should have believed it possible. (Goethe, letter to Jacobi, cited in [31])

Thus, the participatory way of science developed by Goethe and Steiner differs substantially from the analytical approach, whereby the content of sensory perception is transformed into quantitative data and significance is derived through mathematical analysis. In the latter process, in which the investigator maintains an objective, “onlooker” status, an organism is dissected into the smallest possible parts with the intent of reconstituting it at higher levels of organization. It should be noted, however, that the aim of this discussion is to endorse neither the reductionist nor the holistic approach. As noted by Primas, nature is exceedingly diverse and stratified and

Each hierarchical level entails an autonomous nonreducible language which must not be eliminated in favor of an empty “universal language”. Mutually exclusive complementary descriptions of nature are not only admissible but are equally entitled and necessary. That is, science is necessarily pluralistic. [32]

It is of interest that the dynamic typological thinking espoused by Goethe and Steiner has, according to Riegner, been (falsely) accused by the neo-Darwinists of advocating a view which has no place in modern evolutionary biology, on the basis that ideas have no inherent reality. Should this be the case, rejoins Riegner, “…it follows that, because their criticism is itself an idea, it too would need to be discounted” [23]. It appears that ideation is part and parcel of the world, as commented on by R. Richards:

It hardly seems easier to believe that the world is really a ball of mathematical strings that reveals itself to our consciousness as natural objects of ordinary experience than to believe it is an organic structure of ideas that reveals itself in comparable fashion. Idealism cannot be defeated, only forgotten. (quoted in [23])

25.3 Emergence and the Hierarchical Ordering in Nature

Bridging the gap between physical and biological phenomena, at least on theoretical grounds, historically paved the way for viewing the organism no longer as a mechanism but as a system organized at multiple levels. Erwin Schrödinger (1887–1961), one of the principal founders of quantum physics and a Nobel laureate (1933), wrote the influential book “What is Life?”, in which he elaborated on the role of thermodynamics in living systems. Schrödinger was intrigued by the idea that, beyond the known laws of physics, “negative entropy” and other physical laws may be active in living systems. Ludwig von Bertalanffy, one of his many followers, known for his opposition to the reductionist as well as the vitalistic approach, originated the “general systems theory” with the intent to reconcile the existing knowledge of physics with biology. As mentioned, Bertalanffy and fellow members of the “Theoretical Biology Club” promoted an “organismic” conception of biology that emphasizes the organism at various levels of organization and yet conceives of it as a unity (cf. Sect. 23.3.1). In his “Theoretical Biology,” a classic in the field, he attempted to apply the concept of “unity of life” to the problem of development [33]. Bertalanffy demonstrated that true thermodynamic equilibrium can develop only in an isolated (closed) system, whereas biological organisms are open systems in continuous interaction and exchange with the environment. Through inner dynamics, organisms develop higher degrees of “equilibrium” (steady state) while maintaining their structural integrity [34]. Paul Weiss (1898–1989) similarly made pioneering contributions to the organismic approach in biology. He defined a system, say, a cell, an organ, or an organism, as follows:

Pragmatically defined, a system is a rather circumscribed complex of relatively bounded phenomena, which, within those bounds, retains a relatively stationary pattern of structure in space or of sequential configuration in time in spite of a high degree of variability in the details of distribution and interrelations among its constituent units of lower order. (quoted in Ref. [35])

The oscillating, pattern-forming chemical reactions discovered by B. Belousov and A. Zhabotinsky in the 1950s showed that self-sustaining, far-from-equilibrium chemical reactions occur also in the domain of the inorganic. When a continuous flow of energy is supplied to a pool of disordered molecules, a spontaneous order arises, exhibiting cooperative behavior, memory, and sensitivity to small perturbations. Such chemical oscillators were used by Prigogine as models for understanding self-organizing biological systems, which use nutrients as an external source of energy to establish a stable state far from thermodynamic equilibrium.
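The behavior of such oscillators can be illustrated with the Brusselator, the minimal two-variable model of a chemical oscillator developed in Prigogine’s Brussels school (the actual Belousov–Zhabotinsky kinetics are considerably more elaborate; parameter values here are the standard textbook choice, used purely for illustration):

```python
def brusselator(a=1.0, b=3.0, x0=1.0, y0=1.0, dt=0.001, steps=50000):
    """Euler integration of the Brusselator rate equations:
        dx/dt = a + x^2*y - (b + 1)*x
        dy/dt = b*x - x^2*y
    For b > 1 + a^2 the steady state (x = a, y = b/a) is unstable and
    the concentrations settle into sustained (limit-cycle) oscillations,
    the hallmark of a far-from-equilibrium chemical system."""
    x, y = x0, y0
    xs = []
    for _ in range(steps):
        dx = a + x * x * y - (b + 1.0) * x
        dy = b * x - x * x * y
        x += dx * dt
        y += dy * dt
        xs.append(x)
    return xs

xs = brusselator()
# After transients die out, x keeps swinging around its fixed point x = a
# instead of converging to it: sustained oscillation from fixed kinetics.
print(min(xs[25000:]), max(xs[25000:]))
```

The point of the sketch is that nothing in the rate laws "contains" the oscillation; it emerges only when the system is held away from equilibrium by the continuous supply of reactants (the constant a and b terms).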

However, the self-organizing properties of chemical oscillators, which in certain respects imitate the behavior of an organism, can easily be interpreted as a “bottom-up” phenomenon resulting from the interaction of specific atoms and molecules in the presence of sufficient energy. Such a view overlooks the fact that the resulting system shows an entirely new set of properties. This brings us to the important concept of emergence (emergere, Latin: to surface), defined as the appearance of a new set of properties which is not yet present at the lower levels of organization and cannot be derived or explained from the lower level [36, 37]. For example, the characteristics of cooking salt (NaCl), consisting of sodium and chlorine, cannot be predicted from the properties of the soft, silvery metal sodium and the foul-smelling, toxic gas chlorine. As noted by Heusser, the reductionist explanation of such a chemical reaction is by continuity of the atoms. However, he maintains that this is no explanation of the phenomenon, because the emergent qualities cannot be derived from “those ascribed to the imagined or inferred atoms alone (…) and secondly, the substances on both sides must be treated epistemologically the same way”, namely, the substances on both sides of the equation correspond to their particular lawfulness. In effect, we are dealing with three different lawfulnesses: that of Na (solid), that of Cl (gas), and that of NaCl, the lawfulness of the whole [20]. Even if the same atoms continue to be present in the derived substance, the quality of “saltiness” and other characteristics of this new compound are unique and cannot be predicted from its constituent parts and, therefore, have to be explored empirically, as is the case with the laws of the constituent parts.

The following observation on emergence by B. Kiefer, in a publication of the Swiss National Science Foundation, captures the essence of this universal phenomenon (quoted in Ref. [2]):

Among the most puzzling and yet most fundamental phenomena of the universe we find Emergence: the appearance of new properties at successively higher levels of complexity, which could not have been predicted from the preceding stage. For instance, the characteristics of life could not be derived (deduced) from those of inert lifeless matter. No matter how far research proceeds in physics and chemistry, we shall not be able to predict in this way the specific behavior of living organisms. It seems to be a universally valid principle that a complex whole cannot be reversed and reduced to its simpler parts. This is true no matter how far we go in complexity. At the level of the atoms: if I look at the atoms of hydrogen and oxygen in isolation, nothing points to the properties of the water molecule. At the other end of the spectrum: the properties of consciousness cannot be extrapolated from the behavior (…) Emergence leads to the important final conclusion: Reductionism is a false theory. [38]

Thus, not only in biology (see below) but also in the realm of physics and chemistry, the “whole” cannot logically be predicted from its parts, because of the emergent properties of the new molecular structure, which is characterized by its own lawfulness. This non-derivability is the most fundamental quality of higher emergent structures and essentially disqualifies the naïve reductionist model of substance [2, 20]. In a remarkable article, “Rethinking the Natural Science,” the late Hans Primas, a notable chemist, prolific inventor (e.g., of high-resolution nuclear magnetic resonance instruments), and professor at the Swiss Federal Institute of Technology, spells out the growing recognition of the inadequacy of the Cartesian (reductionist) model as the sole basis for the understanding of nature and how modern science, in particular quantum mechanics, has rendered the reductionist model obsolete (quoted in Ref. [2]).

Atomism claims that matter is made up of the smallest, not further divisible building blocks and that all natural events must be explained by the properties and movements of these atoms (…) Molecules, atoms, electrons, quarks or strings are not, however, building blocks of matter (…) nothing remains of the original concept of matter in contemporary physics (…) If we consider quantum mechanics a valid theory, then the statement “matter is made from the most elementary building blocks” is scientifically untenable. The decisive point is not the fact that the chemist’s atoms can be broken down further—this would be a trivial matter of nomenclature—but rather that material reality constitutes a whole, that is not made up of parts in the first place (…) the dialectic of whole and part is fundamentally different in the quantum world from that customary in classic natural scientific descriptions. For scientific-empirical reasons, physics was forced to acknowledge, against the fierce opposition of most philosophers, the integral (holistic) nature of the material world (…) According to the conception of quantum physics, material reality is a whole, namely a whole that does not consist of parts. Quantum physics is the first logically consistent and mathematically formulated theory. Modern Physics’ concept of wholeness is much more comprehensive than the holistic rudiments of the other natural sciences. [32]

In view of the above, it can be stated that the self-organization of substances into molecules and higher-order structures, such as crystals, is not merely a self-aggregation of parts but an expression of a higher-level lawfulness that suppresses or sublates7 the activity of the parts into a new, emergent whole.8 In this sense, the higher-level lawfulness is the active or causative agency, and the lower level becomes the receptive, passive, or subordinate principle.9

25.4 The Rise of Molecular Biology

We have seen that contemporary physics, based on quantum mechanics, has indeed been able to break away from the self-imposed constraints of the reductionist model; not so, for the most part, biology and medicine [4, 9, 41]. In molecular biology, such a “bottom-up” approach is exemplified by a causal chain that leads from the transcription of the genetic sequence, to proteins, to cells, organs, and organ systems, finally arriving at the organism (cf. Fig. 25.1, blue arrows). The metaphor was introduced by Monod and Jacob in the 1960s by drawing an analogy between biological systems and computers, in which DNA codes the program, and genes, gene switches, and gene networks direct transcriptional processes [42]. Stated in Jacob’s own words, “The genetic program is a model borrowed from electronic computers. It equates the genetic material with the magnetic tape of a computer” (quoted in Ref. [43]). According to this model, a set of genes is the blueprint for specific phenotypes, and their mutations the cause of disease processes. Rapid technical progress in the field of molecular biology and the discovery of the cystic fibrosis gene in 1989 fueled the promise of gene substitution therapy for monogenetic disorders [44] and of a causal explanation of common diseases such as diabetes, hypertension, and atherosclerosis [45]. However, the prediction by Francis Collins, the head of the Human Genome Project, in 1999 that “the healthy form of gene itself may even be used in so-called gene therapy” [44] is yet to materialize. Similarly, the “War on Cancer,” declared by US researchers in the early 1970s on the premise that specific cancer phenotypes can be traced to discrete molecular dysfunction and targeted by chemotherapeutic agents, has since been abandoned [46].


