1 Principles of Physics and Technology in Diagnostic Ultrasound


Bernhard J. Arnolds, Bernhard Gaßmann, Peter-Michael Klews


1.1 Introduction


Human beings have natural receptors for light and sound. The eyes can process electromagnetic waves over just a limited range of frequencies. The ears have similar limitations when it comes to sound. To perceive frequencies outside these naturally visible or audible ranges, special technology is needed. In a sense, then, the pictures generated by this technology are “artificial” images.


The images produced by X-rays or ultrasound depend on the methods that are used for data acquisition and image processing. An image is considered “good” if it has high spatial resolution and, in the case of gray scale, has a subjectively pleasing distribution of gray levels. Another requirement is high contrast resolution, or the ability to perceive slight differences in adjacent shades of gray.


Blood flow imaging has been a topic of growing interest in diagnostic ultrasound. In 1842, C. Doppler described his eponymous effect, which states that the wavelength of light (or sound) measured by an observer depends on the relative motion between the source and receiver. This effect has been utilized in medicine since the late 1950s. Bidirectional Doppler was introduced in 1959, followed by pulsed Doppler in 1967.


The technique of color encoding of blood flow in the B-mode (gray scale) image was introduced in 1982. This technology is referred to as color duplex sonography (CDS) or color flow imaging (CFI).


The use of ultrasound contrast agents has become an established part of routine ultrasound examinations. The injection of microbubbles increases acoustic backscatter from the blood, and special signal-processing methods are used to suppress the tissue signals, resulting in images with exquisite vascular detail. One important application of this technology is in the diagnosis of intra-abdominal tumors.


Elastography is also being used in vascular ultrasound to investigate the elasticity of artery walls.


1.2 Overview of Ultrasound Techniques


Ultrasonography is used both for determining (organ) morphology and evaluating function. An ultrasound system always consists of a transducer with an application-specific shape and frequency combined with a control unit, which is the ultrasound machine itself.


The Doppler effect is useful for determining the velocity of moving objects. In medicine, the Doppler effect is most commonly used for the investigation of blood flow. Tissue Doppler is a technology that analyzes the motion of tissue structures such as the myocardial walls. Flow characteristics are displayed as either a Doppler spectrum or velocity spectrum plotted over time, or points in the B-mode image are color-encoded according to the motion measurable at those sites (Doppler shift).
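For a pulse-echo system the Doppler shift follows the well-known relation f_d = 2 f₀ v cos θ / c. The minimal Python sketch below evaluates it; the function name, the assumed tissue sound velocity of 1540 m/s, and the example values are illustrative, not taken from the text.

```python
import math

C_TISSUE = 1540.0  # assumed mean sound velocity in soft tissue (m/s)

def doppler_shift(f0_hz, velocity_m_s, angle_deg):
    """Doppler shift of a pulse-echo system: f_d = 2 * f0 * v * cos(theta) / c."""
    return 2.0 * f0_hz * velocity_m_s * math.cos(math.radians(angle_deg)) / C_TISSUE

# 5 MHz transmit frequency, blood moving at 0.5 m/s, insonation angle of 60 degrees
print(doppler_shift(5e6, 0.5, 60.0))   # ≈ 1623 Hz, i.e., in the audible range
```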


All ultrasound imaging techniques described in this chapter, with the exception of continuous-wave (CW) Doppler, are based on the analysis of multiple pulse-echo cycles. The individual pulses are successively emitted from the transducer along selected ultrasound scan lines, while the echoes are continuously received and analyzed for their amplitude, phase, and frequency. Each of the continuously acquired and analyzed echoes represents a sample.


Except for CW Doppler, the ultrasound techniques described here are transit-time techniques, meaning that the depth from which echoes are received is calculated from the total pulse-echo travel time, based on the assumption of a constant sound velocity. To avoid ambiguity, the next pulse is not emitted until the transducer has received an echo from the greatest possible (or preassigned) depth. The only exception to this rule is high pulse-repetition-frequency (HPRF) Doppler, in which additional pulses are transmitted before the echo from the first transmitted pulse has been received.
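A minimal sketch of the transit-time principle and of the resulting limit on the pulse repetition frequency, assuming the usual constant sound velocity of 1540 m/s; the function names and example numbers are ours, not values from the text.

```python
C_TISSUE = 1540.0  # assumed constant sound velocity (m/s)

def depth_from_echo_time(t_seconds):
    """Reflector depth from the round-trip pulse-echo time: d = c * t / 2."""
    return C_TISSUE * t_seconds / 2.0

def max_prf(max_depth_m):
    """Highest pulse repetition frequency that avoids range ambiguity
    for a given maximum (or preassigned) depth."""
    return C_TISSUE / (2.0 * max_depth_m)

print(depth_from_echo_time(130e-6))   # ≈ 0.10 m: an echo arriving after 130 µs comes from ~10 cm
print(max_prf(0.10))                  # ≈ 7700 Hz for a 10 cm image depth
```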


All ultrasound techniques besides M-mode are sectional imaging techniques. The analysis of many consecutive scan lines, including a technique-dependent interpolation of lines between the received scan lines, results in the creation of a two-dimensional sectional image. Generally speaking, a scan line is defined as a discrete line in the ultrasound image along which the ultrasound pulse travels. It may be oriented in a perpendicular or radial direction relative to the transducer. The scan lines are idealized lines. Their thickness depends on the ultrasound wavelength and they do not take into account the true dimensions of the ultrasound beam. In some cases, as in CDS, multiple pulse-echo cycles are successively transmitted along the same scan line in order to collect the necessary echo information. Many individual scan lines are composed into a side-by-side array to produce a two-dimensional ultrasound image.


The use of multiple pulse-echo cycles per scan line does not increase the number of image increments. Only a write-zoom feature (magnified view) will increase the number (density) of increments for a given area of interest. Thus, an ultrasound image is formed within a time period that is defined by the image depth, the number of pulse-echo cycles per line, and the number of lines per image. This is different from an ordinary photograph, in which all image points are formed at the same time.
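The interplay of depth, pulses per line, and line count can be illustrated with a short Python sketch; the function name, the 1540 m/s sound velocity, and the example numbers are illustrative assumptions (the roughly 0.2 s frame time for color imaging is discussed in the next paragraph).

```python
C_TISSUE = 1540.0  # assumed constant sound velocity in tissue (m/s)

def frame_time(depth_m, lines_per_image, pulses_per_line=1):
    """Minimum time needed to build one image: each pulse must travel to the
    maximum depth and back before the next pulse on that line is emitted."""
    return lines_per_image * pulses_per_line * (2.0 * depth_m / C_TISSUE)

# Plain B-mode: 10 cm image depth, 256 scan lines, one pulse-echo cycle per line
print(1.0 / frame_time(0.10, 256))                 # ≈ 30 frames per second

# Color duplex: the same geometry but, say, 8 pulse-echo cycles per color line
print(frame_time(0.10, 256, pulses_per_line=8))    # ≈ 0.27 s per frame
```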


The far edges of an ultrasound image may be separated from each other by a time lag of 0.2 s or more. This may become significant, especially in the color-encoded imaging of blood flow. For example, a systolic pulse may be displayed on the left side of the image while the right side is still in diastole. This “windshield-wiper effect” depends strongly on the time required for signal acquisition and processing. The visible parameter for evaluating these temporal characteristics is the image repetition frequency called the frame rate.


The ultrasound scan lines should not be confused with the image lines on the monitor display. The number and density of image lines depend on the video standard and the area of the (ultrasound) monitor image. The number of image increments is considerably smaller than the number of image points, or video pixels; otherwise, image generation would take too long and the frame rate would be much too slow.


1.2.1 A-Mode


A-mode ultrasound (for “amplitude mode”) is rarely used nowadays but forms the basis of the B-mode technique. An A-mode image is a graphic trace of the echo amplitudes of individual scan lines (y-axis) plotted over time (x-axis). The measured transit time is converted to distance from the transducer. The deflection parallel to the y-axis on the monitor screen is proportional to the amplitude of the received echo.


1.2.2 B-Mode


B-mode ultrasound (for “brightness mode”) is the mainstay of ultrasonography and is by far the most widely used ultrasound imaging technique. The B-mode image is gray scale, meaning that it is composed entirely of different gray levels. Many successive scan lines are assembled and displayed side-by-side on the monitor to form a two-dimensional picture. The gray levels in the image are proportional to the amplitudes of the returning echoes. The greater the amplitude, the greater the brightness of the corresponding point in the image (see Fig. 2.2).


1.2.3 M-Mode


Another gray scale technique is the M-mode (for “motion mode”) or TM (“time-motion”) mode (Fig. 1.1). In this technique, different points are insonated along a single scan line. Successive acquisitions of the same scan line are displayed side by side on the monitor, although they originate from the same location in the body. The purpose of M-mode imaging is to track and display dynamic processes inside the body. It is used mainly in cardiology for evaluating the motion of the cardiac valves. M-mode is the basis for color Doppler M-mode techniques. All M-mode techniques supply functional information.




Fig. 1.1 M-mode image tracks the motion of the mitral valve over time. The temporal resolution of M-mode imaging is unmatched by any other technique. M-mode is indispensable for the visualization of moving structures.


1.2.4 Color Duplex Sonography (CDS)


CDS techniques (except for tissue Doppler) work by color encoding of sites in the image where blood flow is detected. Areas devoid of blood flow are shown in gray scale. Thus, CDS or color flow mapping (CFM) superimposes areas of color-encoded motion over the B-mode image (Fig. 1.2). The reference point for defining the direction of blood flow is the transducer (or more precisely, the direction of the scan line). Only components moving toward or away from the transducer are measured. The standard practice in conventional flow-velocity-based CDS images is to encode the different flow directions in shades of red and blue. The operator can choose which flow direction is encoded in blue and which in red. The blood flow velocity indicated in all conventional CDS techniques is the intensity-weighted mean blood flow velocity or Doppler shift (phase shift of the Doppler signals). Lighter shades of color indicate higher flow velocities. The color green may be added to the red and blue shades to indicate variance, especially in scanners designed for echocardiographic use. Variance is often used in medicine as a measure of turbulence. On a physical level, variance represents the scatter of Doppler frequency shifts. Turbulence increases the scatter of flow velocities, with an associated increase in the scatter of Doppler frequencies.
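As a hedged illustration of how a mean Doppler shift and its variance can be estimated from the few pulses available per scan line, the sketch below applies the lag-one autocorrelation (Kasai-type) estimator to simulated complex baseband data. This is one widely used approach, not necessarily the algorithm of any particular scanner; all names and numbers are illustrative.

```python
import numpy as np

def mean_shift_and_variance(iq, prf):
    """Lag-one autocorrelation (Kasai-type) estimate of the mean Doppler shift (Hz)
    and of the variance of the Doppler shifts (Hz^2) from one pulse ensemble."""
    r0 = np.mean(np.abs(iq) ** 2)                    # lag-0 autocorrelation (signal power)
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))          # lag-1 autocorrelation
    mean_shift = np.angle(r1) * prf / (2 * np.pi)    # intensity-weighted mean Doppler shift
    variance = 2.0 * (prf / (2 * np.pi)) ** 2 * (1.0 - np.abs(r1) / r0)
    return mean_shift, variance

# Simulated ensemble: 12 pulses at a PRF of 4 kHz, true Doppler shift 800 Hz plus noise
prf, n = 4000.0, 12
t = np.arange(n) / prf
rng = np.random.default_rng(0)
iq = np.exp(2j * np.pi * 800.0 * t) + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(mean_shift_and_variance(iq, prf))   # mean shift close to 800 Hz, small variance
```

A larger variance estimate corresponds to a broader scatter of Doppler shifts, which is why this quantity is commonly taken as a turbulence indicator.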




Fig. 1.2 Arterial bifurcation visualized by color duplex sonography. Blood flow toward the transducer is encoded in red. A color reversal (to blue) is noted just proximal to the bifurcation. It is caused by aliasing because the flow velocity in that area exceeds the measurable range (±19 cm/s) and therefore appears at the opposite end of the color scale. A distinctive feature of aliasing is the direct juxtaposition of contrasting colors, whereas a true flow reversal would always show a black zone interposed between the colors (Doppler angle = 90 degrees).


1.2.5 Power Doppler


Rather than color-encoding the sign and amplitude of the Doppler signals as in CDS, a power Doppler image is produced by color-encoding the intensity of the local Doppler signals. Sites with stronger local Doppler signal intensities appear brighter in the image. Red and orange color shades are commonly used (Fig. 1.3). It is also possible to indicate flow direction in the image, but this sacrifices some key advantages of power Doppler, namely, its high sensitivity to very low flow velocities and its ability to depict high flow velocities in the same image without aliasing.




Fig. 1.3 Inflow from a tributary into the jugular vein through a venous valve. (a) Color duplex sonography. Note the reflection of the flow against the proximal wall, with blood streaming in the opposite direction on the right and left sides of the actual jet. (b) “Wideband” Doppler with a lower pulse repetition frequency (PRF) obscures core flow details but increases sensitivity to low velocities. (c) Unidirectional imaging with power Doppler is very sensitive and highly susceptible to artifacts. Long integration times (a large number of pulses per scan line) often lead to washout of anatomic boundaries, especially in the distal direction.


1.2.6 Tissue Doppler


In tissue Doppler imaging, the motion of the myocardium or other tissue of interest is color-encoded relative to an arbitrary reference point. The signals arising from blood are not displayed.


1.2.7 B-Flow


B-flow imaging generates a gray scale image of blood flow. As the red cells move between the transmission of two successive pulses, the backscatter from the blood changes. The effect is greatest when the blood flow is directed perpendicular to the ultrasound scan lines. This effect is virtually negligible in blood flowing along the scan lines because successive pulses will “hit” the same red cells. Consequently, this mode is best for imaging blood flow in vessels that run parallel to the skin surface (Fig. 1.4).




Fig. 1.4 B-flow provides a real-time image of splenic blood flow. The vascular architecture is clearly defined, and even higher order branches are visualized. (With kind permission of Dr. H.P. Weskott.)


1.2.8 Color M-Mode


Color M-mode imaging uses pulsed Doppler interrogation along a single scan line, similar to conventional M-mode echocardiography. While M-mode echocardiography displays the location and intensity of the reflected signals, color M-mode records the Doppler velocity shift of moving reflectors, which is then color-encoded and superimposed on the M-mode image. The result is high-temporal-resolution information on the direction and timing of flow events. Because this is a pulsed Doppler technique, velocity resolution is limited, just as it is in color Doppler imaging.


Motion is plotted along the time axis, which forms the abscissa, while the ordinate is the scale for image depth. Tissue motion or blood flow is interrogated along a single scan line. Although color M-mode is mainly used in cardiology (Fig. 1.5), it is also used in vascular imaging to determine blood flow volume.35,5




Fig. 1.5 Color M-mode image. This technique can define local flow patterns with high temporal resolution. This is an image of mitral valve prolapse (MVP, bulging of the mitral valve into the left atrium during systole). The course of the Nyquist limit (aliasing, red/blue color reversal) can be used to calculate flow volumes.


1.2.9 Doppler Spectral Analysis


To quantify Doppler-shifted signals from moving reflectors, Doppler spectral analysis using the fast Fourier transform (FFT) is the accepted standard. The echoes are analyzed for their frequency distribution within a given time interval (e.g., one analysis every 20 ms or faster). For each time interval a power spectrum is calculated, with frequency along the x-axis and amplitude along the y-axis. These spectra are then placed side by side and displayed with time along the x-axis and frequency along the y-axis. The amplitude of each FFT point is represented by a color code, e.g., blue for weak echoes and white for echoes with a high amplitude; the amplitudes themselves are not assigned to a separate axis (Fig. 1.6). The resulting plot of frequency intensities over time is commonly referred to as the “Doppler spectrum.”
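A minimal sketch of this short-time FFT processing, applied to a simulated complex Doppler signal; the window length, hop size, and the crude “pulsatile” test signal are illustrative assumptions.

```python
import numpy as np

def doppler_spectrogram(signal, prf, window=128, hop=32):
    """Short-time FFT of a complex baseband Doppler signal.
    Returns (power spectra as columns over time, Doppler frequency axis in Hz)."""
    win = np.hanning(window)
    cols = []
    for start in range(0, len(signal) - window + 1, hop):
        seg = signal[start:start + window] * win
        cols.append(np.abs(np.fft.fftshift(np.fft.fft(seg))) ** 2)  # power spectrum of this interval
    freqs = np.fft.fftshift(np.fft.fftfreq(window, d=1.0 / prf))
    return np.array(cols).T, freqs

# Test signal: a Doppler shift swinging between about 200 Hz and 1.2 kHz
prf = 5000.0
t = np.arange(0, 2.0, 1.0 / prf)
inst_freq = 700.0 + 500.0 * np.sin(2 * np.pi * 1.2 * t)          # ~72 "beats" per minute
phase = 2 * np.pi * np.cumsum(inst_freq) / prf
spectrogram, freqs = doppler_spectrogram(np.exp(1j * phase), prf)
print(spectrogram.shape, freqs[0], freqs[-1])
```

With a 5-kHz PRF, the 128-sample window corresponds to an analysis interval of roughly 26 ms, i.e., on the order of the 20 ms quoted above.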




Fig. 1.6 Pulsed-wave (PW) Doppler spectrum from a healthy common carotid artery (CCA) in a standard gray scale display. Data derivable from the spectrum are indicated.


Pulsed-wave (PW) Doppler is distinct from CW Doppler. In PW Doppler, a Doppler sample volume is positioned in the B-mode image by the operator. Only echoes recorded from this user-defined region are analyzed for their spectral (frequency and amplitude) composition. The distance of the sample volume from the transducer defines the maximum pulse repetition frequency (PRF). In CW Doppler, ultrasound pulses are continuously emitted from the transducer while all echoes are continuously received and spectrally analyzed for their Doppler shift relative to the mean frequency of the transducer in use. Depth discrimination is not available as in PW Doppler; hence, the exact site of origin of the velocity information cannot be determined.
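A hedged numerical example of how the sample volume depth limits the PRF and hence the highest velocity measurable without aliasing (the Nyquist limit); the relation v_max = c · PRF / (4 f₀ cos θ) and the example values are standard assumptions, not taken from the text.

```python
import math

C_TISSUE = 1540.0

def nyquist_velocity(f0_hz, prf_hz, angle_deg=0.0):
    """Highest velocity measurable without aliasing in PW Doppler:
    the Doppler shift must stay below PRF/2, so v_max = c * PRF / (4 * f0 * cos(theta))."""
    return C_TISSUE * prf_hz / (4.0 * f0_hz * math.cos(math.radians(angle_deg)))

# A sample volume at 8 cm depth limits the PRF to c / (2 * d)
prf = C_TISSUE / (2.0 * 0.08)
print(prf)                                  # ≈ 9.6 kHz
print(nyquist_velocity(4e6, prf, 60.0))     # ≈ 1.85 m/s at 4 MHz and a 60-degree Doppler angle
```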


HPRF Doppler is a special type of PW Doppler that employs multiple, equidistant Doppler sample volumes of the same size. The number of sample volumes depends on the selected PRF. The higher the PRF, the more sample volumes there are on the selected scan line at a given image depth. The information is ambiguous because the signal to be analyzed may originate from any of the sample volumes. The use of HPRF Doppler is an effective way to avoid aliasing. This technique is most commonly used for the detection of high flow velocities across sites of stenosis.
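The depth ambiguity of HPRF Doppler can be sketched in a few lines: all depths separated by c / (2 · PRF) map onto the same receive gate. The function and the example numbers below are illustrative assumptions.

```python
C_TISSUE = 1540.0

def hprf_sample_depths(gate_depth_m, prf_hz, image_depth_m):
    """Depths that share one receive gate when several pulses are in flight at once.
    Neighboring ambiguous sample volumes are separated by c / (2 * PRF)."""
    spacing = C_TISSUE / (2.0 * prf_hz)
    d = gate_depth_m % spacing                      # fold the gate depth into one interval
    depths = []
    while d <= image_depth_m + 1e-9:                # small epsilon against rounding errors
        depths.append(round(d, 4))
        d += spacing
    return depths

# Gate placed at 10 cm with a PRF of 19.25 kHz and a 14 cm image depth
print(hprf_sample_depths(0.10, 19250.0, 0.14))      # four equidistant sample volumes: 2, 6, 10, 14 cm
```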


1.2.10 Three-Dimensional Ultrasound Techniques


Three-dimensional techniques for displaying B-mode and color duplex images will be mentioned only in passing. Conventional systems process a number of sectional images that have been stored in the scanner memory. The sectional images are acquired either as parallel planes (freehand) or as planes arranged in a pyramidal array (motor control). Using computer postprocessing, the volume data set is displayed as a combination of three sectional planes or as a three-dimensional rendering. In the future this display will be generated in the ultrasound machine. This process does not represent a fundamentally new analysis of the original echo signals, but just a different mode of display. While this process creates displays that are pleasing to the eye, it is time-consuming and ultimately does not supply any new information. The most popular use of this technology is in displaying fetal images. Basically, this rendering process is the same as that used for three-dimensional reconstructions in computed tomography (CT) and magnetic resonance imaging (MRI).


A new approach was presented by O. T. von Ramm at the American Institute of Ultrasound in Medicine (AIUM) conference in 1997. An unfocused planar sound wave is transmitted into the body. The receivers consist of numerous, parallel-processing electronic units called “receive beamformers.” At that level, the scanned volume is divided into individual segments, and the signals are analyzed along the scan lines for two-dimensional visualization. The received signals can be analyzed within the individual segments as a function of time to produce a real-time, three-dimensional volumetric image of the scanned volume.


Besides the high costs of this technology, image display requirements place high demands on research and development. How can a volumetric image be displayed on a two-dimensional monitor? Which structures are essential and which should be suppressed? What is it that we wish to display? Like other three-dimensional techniques, real-time volumetric imaging does not achieve higher spatial resolution than two-dimensional imaging.


Real-time three-dimensional imaging (4D) is integrated into midrange and high-end ultrasound systems. It requires new techniques and considerable computing power.


The development of matrix transducers combined with high-speed computer technology expanded three-dimensional imaging to 4D imaging. The high frame rate permits the three-dimensional visualization of dynamic structures in real time. This 4D technique has already been widely utilized in the form of 4D-transesophageal echocardiography (4D-TEE) probes. So-called “plane wave imaging” and software beamforming can achieve the high frame rates necessary for 4D imaging.


1.2.11 (Tissue) Harmonic Imaging


In harmonic imaging, a certain fundamental frequency is transmitted into the body, and analysis of the returning echoes is limited to a frequency range that is approximately twice the fundamental transmitted frequency. For a 2.0-MHz transducer, for example, the “second harmonic” waves at a frequency of approximately 4.0 MHz would be analyzed. The signal intensity of the tissue echoes in this range is very faint and therefore requires some type of amplification. The harmonic signal intensities from blood were initially found to be below the detectability threshold, but the total echo signal intensity, including the second harmonics, can be raised into the detectable range by means of microbubble contrast enhancement. When multiple consecutive pulses are transmitted with phase and amplitude modulation, the received echoes can be combined in such a way that the tissue signal is strongly suppressed and the backscatter from the contrast agent depicts both the vascular distribution and the time course of the flow (Fig. 1.7). The details of harmonic imaging are discussed more fully in the section on Innovations (p. 27).
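The pulse-inversion idea behind such contrast-specific schemes can be sketched with a toy model: a reflector with a small quadratic (nonlinear) response is insonated with a pulse and with its phase-inverted copy, and the two echoes are summed. The quadratic model, the function names, and all numbers are illustrative assumptions, not a real tissue or microbubble response.

```python
import numpy as np

def echo(tx, nonlinearity=0.15):
    """Toy reflector response: a linear part plus a quadratic part that
    generates the second harmonic (illustrative model only)."""
    return tx + nonlinearity * tx ** 2

fc, fs = 2.0e6, 50e6                       # 2 MHz transmit frequency, 50 MHz sampling
t = np.arange(0, 3e-6, 1 / fs)
pulse = np.sin(2 * np.pi * fc * t) * np.hanning(t.size)

summed = echo(pulse) + echo(-pulse)        # pulse inversion: linear echoes cancel, even harmonics add

freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = freqs > 1e6                         # ignore the slowly varying envelope term near DC

def peak_mhz(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    return freqs[band][np.argmax(spectrum[band])] / 1e6

print(peak_mhz(echo(pulse)))   # ≈ 2.0 MHz: a single echo is dominated by the fundamental
print(peak_mhz(summed))        # ≈ 4.0 MHz: the sum retains only the second harmonic
```

In the sum, the fundamental (linear) echoes cancel, so the remaining signal is dominated by the second harmonic at twice the transmit frequency.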




Fig. 1.7 The smallest vessels in an enlarged lymph node (lymphoma) can be visualized with a summation technique using microbubble contrast enhancement. (With kind permission of Dr. J. Vogelpohl.)


1.3 General Physical Properties


Light and sound have much in common. Both are based on the propagation of waves and both are subject to the same processes of reflection, refraction, interference, diffraction, attenuation, and absorption.


Electromagnetic radiation (e.g., X-rays and light) and acoustic radiation both involve the propagation of waves. But while electromagnetic radiation can travel even in a vacuum, sound propagation requires a medium such as air or water. As a sound wave passes through the particles that make up the object, it sets them into mechanical vibration about their resting position. The vibrations propagate in a regular, periodic pattern as kinetic energy is transferred from one particle to the next. Each particle transmits momentum to its neighbor along the propagation pathway.


In a reversible process, the particles vibrate but do not change their location during the energy transfer; only a state of motion is transferred from one particle to the next. This locally spreading, periodic change of state is called wave motion. Thus, a wave transports both energy and momentum. The sound wave is a pressure wave (or density wave) and is based on alternating compression and decompression of the medium. The pressure at any given site changes in a time-varying manner.


The stronger the bond between the particles, the faster the state of motion propagates, i.e., the higher the sound velocity in the given medium. Sound velocity depends on the compressibility and density of the medium, and therefore the temperature of the medium is also a factor. For our purposes, the temperature and external pressure may be considered constant in all cases, so sound velocity is viewed as a material constant. The sound velocities in various tissues and fluids are shown graphically in Fig. 1.8.9,10
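The dependence of sound velocity on compressibility and density can be made concrete with a minimal sketch based on the standard relation c = √(K/ρ), where K is the bulk modulus (the reciprocal of the compressibility); the numerical values for water are approximate.

```python
import math

def sound_velocity(bulk_modulus_pa, density_kg_m3):
    """Longitudinal sound velocity in a fluid-like medium: c = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# Water (approximate values: K ≈ 2.2 GPa, rho ≈ 1000 kg/m³)
print(sound_velocity(2.2e9, 1000.0))   # ≈ 1483 m/s, close to the soft-tissue average of 1540 m/s
```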




Fig. 1.8 Velocities of sound propagation in various human tissues and fluids.


A distinction is made between transverse waves and longitudinal waves (Fig. 1.9). In a transverse wave, mobile structures oscillate about their resting point in a direction that is perpendicular to the direction of wave propagation and energy transfer. In a longitudinal wave, the vibration is parallel to the direction of wave propagation and energy transfer. Remember that the particles that oscillate in longitudinal waves do not travel with the wave. Assuming that permanent deformation does not occur (i.e., the amplitudes are within the Hooke range), the pressure wave will cause only a transient disturbance in the medium. Sound propagation can occur in both ways. But because liquids and gases do not transmit shear forces, only longitudinal sound waves can propagate through them. Human beings are composed mostly of water (at least their nonskeletal portions), and so diagnostic ultrasound is based on the propagation of longitudinal waves.




Fig. 1.9 Sound propagation requires a medium. Both transverse and longitudinal waves are generated in solids, while only longitudinal waves occur in liquids and gases. Transverse waves are negligible in human tissues. (a) Transverse waves. (b) Longitudinal waves.


In the range of ultrasonic frequencies, particles vibrate 20,000 to 1 billion times per second about their resting point. The unit of measure for frequency (f) is the hertz, or number of cycles per second (Hz = 1/s). Medical imaging generally employs frequencies between 2 and 25 MHz and occasionally as high as 70 MHz.


Wavelength λ is defined as the distance between two successive wave peaks. Wave propagation also obeys the time–distance law: the distance traveled divided by the time needed equals the constant propagation velocity c. During one period T = 1/f the wave advances by exactly one wavelength, so the propagation velocity c of a wave equals the product of the wavelength λ and the frequency f:


c = λ · f

A transmitted frequency of 1.54 MHz has a wavelength of exactly 1 mm in tissue. Doubling the transmitted frequency shortens the wavelength by one-half. The wavelength at 15 MHz is 0.1 mm.
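The quoted wavelengths follow directly from c = λ · f; a short check in Python, assuming c = 1540 m/s:

```python
C_TISSUE = 1540.0

def wavelength_mm(frequency_hz):
    """Wavelength in tissue: lambda = c / f, expressed in millimeters."""
    return C_TISSUE / frequency_hz * 1000.0

for f in (1.54e6, 3.5e6, 7.5e6, 15e6):
    print(f / 1e6, "MHz ->", round(wavelength_mm(f), 3), "mm")
# 1.54 MHz -> 1.0 mm and 15 MHz -> 0.103 mm, matching the values quoted above
```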


Although the mean sound velocity in tissue is assumed to be 1,540 m/s, the velocity of electromagnetic radiation (light) in tissue is 3.3 × 10⁷ m/s, or 2 × 10⁴ times greater. This means that the wavelength of sound is shorter than that of an electromagnetic wave of equal frequency by the factor stated (Fig. 1.10). The relatively low sound velocity in tissue is a key factor in understanding the processes involved in creating an ultrasound image.


The basic principle of ultrasound imaging is that an ultrasound pulse is emitted from the transducer, and the echoes that return at different times from different depths are received by the same transducer (the pulse-echo principle). A single ultrasound pulse consists of only a few wavelengths. The longer the time interval between pulse transmission and echo reception, the longer the transit time of the sound and thus the greater the distance from the transducer to the reflector from which the sound wave has returned.




Fig. 1.10 Comparison of the frequency ranges of electromagnetic (EM) radiation and sound. In both cases, c = λ f. Because light velocity in tissue (3.3 × 10⁷ m/s) is 2 × 10⁴ times faster than sound velocity in tissue (1540 m/s), the wavelength of EM radiation is greater than that of sound by the same factor, given equal frequencies. (a) Frequency range of electromagnetic radiation. (b) Frequency range of sound.


An echo is generated at the interface between two media with different acoustic properties. The amplitude of the echo depends greatly on the difference in acoustic impedance between the adjacent media. Impedance can be thought of as resistance to transmission. The impedance z of a medium is equal to the product of the sound velocity c in the medium and the density ρ of the medium:


z = ρ · c


The greater the difference in acoustic impedance between the media, called the impedance mismatch, the greater the amplitude of the returning echo because less sound is transmitted into the adjacent medium. This means that all ultrasound imaging modes (A-mode, B-mode, M-mode) depict only the interfaces that are encountered within the field of view. Without interfaces there are no echoes, and the monitor image will be black and featureless. Signal analysis utilizes the reflected or scattered wave energy that is returned to the transducer. This energy flow is defined in physics by its intensity, and its unit of measure is W/m². The intensity of a sound wave is proportional to the square of the wave amplitude.


Fig. 1.11 shows that a large impedance mismatch is associated with very little sound transmission, as most of the intensity is reflected. The impedance of air is approximately 0.0004 × 10⁶ kg/(m²·s) due to its low density and sound velocity, while the impedance of tissue is approximately 1.62 × 10⁶ kg/(m²·s). This fact alone makes it necessary to use an aqueous coupling medium between the skin and transducer during ultrasound imaging; otherwise very little sound intensity could be transmitted into the body. Because the impedance differences within tissue itself are very small, only weak echoes are generated within tissue, making it possible to achieve deep penetration.
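A minimal sketch of the impedance definition above, combined with the normal-incidence reflection coefficient derived later in this section; the densities and sound velocities are typical textbook values used here only for illustration.

```python
def impedance(density_kg_m3, sound_velocity_m_s):
    """Acoustic impedance z = rho * c, in kg/(m²·s)."""
    return density_kg_m3 * sound_velocity_m_s

def reflected_intensity_fraction(z1, z2):
    """Fraction of the incident intensity reflected at a flat interface
    (normal incidence): R = ((z2 - z1) / (z2 + z1))²."""
    return ((z2 - z1) / (z2 + z1)) ** 2

z_air = impedance(1.2, 343.0)            # ≈ 0.0004 × 10⁶ kg/(m²·s)
z_tissue = impedance(1050.0, 1540.0)     # ≈ 1.6 × 10⁶ kg/(m²·s)
z_fat = impedance(950.0, 1450.0)         # illustrative soft-tissue value

print(reflected_intensity_fraction(z_air, z_tissue))   # ≈ 0.999: almost total reflection without coupling gel
print(reflected_intensity_fraction(z_fat, z_tissue))   # < 0.01: a weak echo; most intensity is transmitted
```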




Fig. 1.11 Reflection coefficient R and transmission coefficient T in the case of normally incident sound as a function of the impedance ratio.


Two other properties of wave propagation are reflection and scattering. Reflection occurs only at interfaces that are large relative to the ultrasound wavelength. If the structures are smaller than λ, some of the intensity will be scattered. Reflection is a directional process, whereas scattered energy is distributed in all directions. The angle of incidence equals the angle of reflection: the angle between the path of the incident sound wave and a line perpendicular to the interface is the same as the angle between the reflected wave path and that perpendicular. This is why vessel walls perpendicular to the ultrasound beam appear very bright in the B-mode image, since most of the incident wave intensity is reflected back to the transducer.


The intensities It and Ir of the transmitted and reflected pulses (echoes) depend on the ratio of the impedances and the associated angles of incidence, reflection, and refraction (Fig. 1.12). An ultrasound pulse strikes an interface between medium 1 and medium 2 at incidence angle θe. Some of the pulse intensity at that point is reflected at angle θr, and some is transmitted at the refraction or transmission angle θt. Whether the transmission angle is greater or less than the incidence angle depends on the ratio of sound velocities in the media.


When an ultrasound pulse crosses from medium 1 to medium 2, the reflection coefficient R and the transmission coefficient T at that interface are defined by the following formulas:


R = [(z₂ · cos θe − z₁ · cos θt) / (z₂ · cos θe + z₁ · cos θt)]²

T = 1 − R = (4 z₁ z₂ · cos θe · cos θt) / (z₂ · cos θe + z₁ · cos θt)²


Fig. 1.12 An ultrasound pulse strikes an interface between medium 1 and medium 2 at the incidence angle θe. Some of the pulse intensity at that point is reflected back at angle θr and some is transmitted at the refraction or transmission angle θt. The transmission angle may be greater than or less than the incidence angle depending on the ratio of the sound velocities in the media.




In case of a normally incident pulse (i.e., one angled 90 degrees to the interface), all the angles are equal to 0 (θr = θt = θe = 0), so:


R = [(z₂ − z₁) / (z₂ + z₁)]²

T = 1 − R = 4 z₁ z₂ / (z₂ + z₁)²


The weak scattering of ultrasound by red blood cells (spherical emitters) occurs almost uniformly in all spatial directions. Therefore, the echoes from blood that are received by the transducer are extremely faint, and blood vessels appear almost black in the B-mode image relative to tissue, which is a much more efficient scatterer and reflector, for a given scan depth and gain setting. The intensity of scattering by red cells is proportional to the fourth power of the sound frequency.4,7 For example, the scattering intensity of a 7.5-MHz signal is 21 times greater than that of a 3.5-MHz signal, and a 5-MHz transducer still provides a fourfold improvement over a 3.5-MHz probe. Attenuation also increases with frequency, but the more favorable scattering properties at higher frequencies can compensate for tissue attenuation to some degree.
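The quoted factors follow from the fourth-power frequency dependence; a short check in Python (function name ours):

```python
def scattering_gain(f_high_mhz, f_low_mhz):
    """Relative backscattered intensity from red cells: proportional to f^4 (Rayleigh scattering)."""
    return (f_high_mhz / f_low_mhz) ** 4

print(round(scattering_gain(7.5, 3.5), 1))   # ≈ 21.1, the "21 times" quoted above
print(round(scattering_gain(5.0, 3.5), 1))   # ≈ 4.2, the "fourfold" improvement
```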


Scattering plays a central role in ultrasound imaging and in Doppler scans. Structures of different densities act as scatterers. This particularly applies to red blood cells, which measure approximately 7 μm in their greatest dimension and 2 μm in their smallest. The scattering properties of blood are of key importance in the determination of blood flow. Unfortunately, these properties are very difficult to measure in vivo.


Another phenomenon that occurs at interfaces besides reflection and scattering is refraction (Fig. 1.12, Fig. 1.13). This denotes a change in the direction and velocity of a wave due to a difference in the sound velocities of the media. If the sound velocity in the first medium is greater than in the second, the wave will be refracted toward a line perpendicular to the interface. If the opposite is true, the wave will be refracted away from the perpendicular. Refraction can be misleading when it comes to judging the exact size and location of a perceived structure. Anyone who reaches for an object submerged in water will find that it appears to be at a different depth and location than it actually is. Refraction is usually of minor importance in diagnostic ultrasound but may become significant in ultrasound-guided aspirations and biopsies, for example.
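Refraction follows Snell’s law, sin θt / sin θe = c₂ / c₁. A minimal sketch with illustrative sound velocities for fat and muscle; the function name and values are assumptions for illustration.

```python
import math

def refraction_angle_deg(incidence_deg, c1, c2):
    """Snell's law for sound: sin(theta_t) / sin(theta_e) = c2 / c1.
    Returns None when total reflection occurs (no transmitted wave)."""
    s = math.sin(math.radians(incidence_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# From fat (c ≈ 1450 m/s) into muscle (c ≈ 1580 m/s) at 30 degrees of incidence
print(refraction_angle_deg(30.0, 1450.0, 1580.0))   # ≈ 33°: refracted away from the perpendicular
```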




Fig. 1.13 As sound travels through different media, refraction of the sound can cause an apparent displacement. A knowledge of anatomy is essential for recognizing this artifact. The occasional presence of a “double aorta” is a well-known refraction phenomenon.


Interference refers to the interaction of two superimposed waves. The amplitudes of the waves may be added together (constructive interference) or they may diminish or even cancel out (destructive interference), depending on the relative phase positions of the interacting waves.


Diffraction occurs when the wave deviates from a straight line and bends around objects. Without diffraction, we would be unable to hear sounds behind an obstacle. The Huygens principle, which states that each point encountered by a wave becomes the starting point for a spherical wave, provides a qualitative means for describing the beam pattern emanating from a transducer. Diffraction and interference are the primary determinants of beam shape.


Attenuation limits the penetration depth of an ultrasound pulse. This phenomenon reduces the initial intensity I₀ of the ultrasound pressure pulse. The intensity I declines exponentially with distance s, the attenuation coefficient α being a material constant. As the intensity of the pulse is attenuated, its energy is converted to heat (absorption), along with intensity losses due to reflection, scattering, and other geometric losses.


I(s) = I₀ · e^(−α · s)


The human body is definitely not a homogeneous medium. Instead, it is composed of layers. Different types of tissue such as fat, muscle, blood, tendons, and organs each have their own attenuation coefficients.12 Despite local differences in the layered composition of the body, the average attenuation is from 0.3 to 0.6 dB/(MHz·cm).2 This corresponds to a total round-trip attenuation of 0.6 to 1.2 dB/(MHz·cm) for a pulse-echo system.
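A minimal numerical example of the quoted attenuation values, assuming a one-way coefficient of 0.5 dB/(MHz·cm); the function names and example depth are ours.

```python
def round_trip_attenuation_db(frequency_mhz, depth_cm, alpha_db_per_mhz_cm=0.5):
    """Total pulse-echo attenuation for a frequency-proportional attenuation coefficient."""
    return 2.0 * alpha_db_per_mhz_cm * frequency_mhz * depth_cm

def remaining_intensity_fraction(attenuation_db):
    """Convert a loss in dB into the remaining fraction of the original intensity."""
    return 10.0 ** (-attenuation_db / 10.0)

loss = round_trip_attenuation_db(3.5, 10.0)    # 3.5 MHz echo returning from 10 cm depth
print(loss)                                    # 35 dB
print(remaining_intensity_fraction(loss))      # ≈ 0.0003: only a tiny fraction of the intensity returns
```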


Signal amplification, called gain, is used to compensate for ultrasound attenuation in tissue (Fig. 1.14). The settings are based on a combination of time gain compensation (TGC), also called depth gain compensation (DGC), and the overall gain. Because of tissue-dependent differences in attenuation that occur at different sites and among different patients, the gain settings should be optimized for each examination.
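A hedged sketch of depth-dependent gain (TGC): identical reflectors at different depths are restored to roughly equal amplitude by applying a gain that mirrors an assumed round-trip attenuation. The attenuation model and all numbers are illustrative, not scanner settings.

```python
import numpy as np

def apply_tgc(echo_amplitudes, depths_cm, frequency_mhz, alpha_db_per_mhz_cm=0.5):
    """Depth-dependent amplification that compensates the round-trip attenuation,
    so equally echogenic tissue appears equally bright at every depth."""
    gain_db = 2.0 * alpha_db_per_mhz_cm * frequency_mhz * np.asarray(depths_cm)
    return np.asarray(echo_amplitudes) * 10.0 ** (gain_db / 20.0)   # amplitude gain, hence /20

# Identical reflectors at 2, 6, and 10 cm: the raw echoes weaken with depth,
# but after TGC the compensated amplitudes are equal again.
depths = np.array([2.0, 6.0, 10.0])
raw = 10.0 ** (-2.0 * 0.5 * 3.5 * depths / 20.0)   # simulated attenuated echo amplitudes at 3.5 MHz
print(raw)                                          # ≈ [0.447, 0.089, 0.018]
print(apply_tgc(raw, depths, 3.5))                  # ≈ [1.0, 1.0, 1.0]
```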




Fig. 1.14 Echo signal intensity and gain (normalized). Signal intensity is amplified as a function of depth to compensate for sound attenuation in the tissue. This feature, called time gain compensation, creates a uniform appearance of equally echogenic tissues located at different depths.
