Patient Specific Modelling


| Term | Definition | Comment |
| --- | --- | --- |
| Idealised | Simplified representation which captures key features | Relevant for studies involving general trends |
| Patient specific | Values from measurement or modelling which are relevant to the individual patient | Hence can be used in diagnosis and surgical planning |
| Image guided modelling | Integration of imaging with computational modelling | Image data may be idealised or patient specific |
| Patient specific modelling | Image guided modelling which provides data relevant to the individual patient | Input data is relevant to the individual patient; hence output data can be used in diagnosis and surgical planning of the individual patient |





11.1.3 History and Motivation


Numerical modelling was pioneered in the early 1950s in an attempt to better understand the vibration response of new aircraft wing designs under loading (Turner et al. 1956). The first numerical simulations involving stress in biological tissues were applied to idealised geometries in bone and teeth (Rybicki et al. 1972; Thresher and Saito 1973). The first numerical simulations of blood flow were based on idealised 2D geometries (Perktold et al. 1984; Friedman and Ehrlich 1984). The first numerical simulations to estimate cardiovascular stress were performed in idealised 2D arteries (Richardson et al. 1989; Loree et al. 1992; Cheng et al. 1993). Patient specific modelling has been gaining momentum since around 2000. There are four main reasons for this, which are briefly outlined in this section.

Improvements in computer power. There is no doubt that the principal reason for the spread of computational modelling, not just in biomedical applications but throughout engineering and industry, has been the continued increase in computer power. Moore’s law, the observation that the number of transistors on an integrated circuit doubles roughly every 2 years, held for 40 years from 1971 to 2011; since 2011 the doubling period has lengthened only slightly, to around 2.5 years. In parallel with the improvements in processing power there have been reductions in price. The cost of a high-end workstation has decreased to the point where these are affordable for the individual researcher. In 2016, a high-end workstation (32-core) cost around £5000 ($8000, €7000). Three-dimensional simulations involving 500,000 nodes with time-resolved output have run times of just a few hours.

Improvements in modelling software. Early work on numerical modelling used software developed in-house. While some groups still use in-house software, commercial packages are now widely used in image guided modelling and patient specific modelling. These packages have undergone extensive validation and have options specifically designed for biomedical applications.

Availability of high-resolution medical imaging. At the beginning of the PSM processing chain is medical imaging. The modern medical imaging department has a number of 3D imaging systems with spatial resolution of around 1 mm, suitable for acquiring high quality geometries for PSM.

Potential of biomechanical measurements for clinical decision-making and surgical planning. Major clinical events such as aneurysm rupture or rupture of atherosclerotic plaque are associated with mechanical failure of tissues, where tissue stress exceeds tissue strength. Growth of atherosclerotic plaque and aneurysm is associated with changes in both tissue stress and wall shear stress. This has led many groups around the world to think that measurements related to the biomechanical status of disease may provide improved diagnosis and selection of patients for treatment. Medical imaging systems are unable to measure mechanical stress, whereas stress is one of the main outputs of PSM. This has provided a key rationale for the development of PSM as a potential diagnostic tool for use in cardiovascular disease. In parallel, there has been the realisation that PSM may be used as an aid to surgical planning where, for example, the effect of different surgical approaches on the haemodynamics may be investigated in the computer first.



11.2 Computational Mechanics



11.2.1 Introduction and Rationale


It has been noted in previous chapters that there are governing equations which describe the relationship between stress and strain in a solid and between shear stress and shear rate in a fluid. There are very few geometries for which exact solutions of these equations exist. For a fluid, the governing equations are the ‘Navier–Stokes equations’. Exact solutions of the Navier–Stokes equations may be found for simple geometries such as motion of an infinite plate and steady or pulsatile flow in a cylinder. For more complex geometries, computational mechanics provides a framework which allows the governing equations in a fluid and a solid to be solved. The mathematics of computational mechanics is complex and outside the scope of this book, but is available in standard texts and review articles for interested readers (Zienkiewicz 2004; Zienkiewicz et al. 2005; Chapra and Canale 2014). The basic steps involved in computational mechanics are detailed below:

Digital computation. The governing equations apply to a continuous medium (i.e. all x, y, z, t). Computational mechanics operates on a digital or discrete model in which the equations are solved at many specific values of x, y, z, t. This discrete model is then suitable for calculation using a computer. The set of points is referred to as the ‘mesh’ or ‘grid’. The smallest unit of the mesh is called an ‘element’, which consists of a number of nodes. Each node of the mesh has a number of values related to the variables in the governing equations.

Discretization. The governing equations are broken down into simpler equations which are suitable for an iterative solution using a computer. This process is called ‘discretization’. There are a number of different discretization methods: the ‘finite element method’, the ‘finite volume method’, the ‘finite difference method’ and the ‘spectral element method’. These differ slightly in their mathematical formulation; some are more suited to solid modelling while others are more suited to flow modelling.

Solver. An initial set of values is assigned to each node of the mesh. The computer programme operates in an iterative manner in which the equations at each node are solved and the set of values at each node adjusted. After sufficient iterations, the values stabilise and the solution is reached. It should be noted that not all solutions are physically realistic. An experienced fluid or solid mechanics specialist is required to design the mesh and prepare the simulation so as to avoid physically unrealistic solutions.
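The three steps above can be illustrated with a deliberately simple example. The minimal sketch below (not taken from any particular package; the equation, node count and tolerance are illustrative only) solves a 1D Poisson-type equation d²u/dx² = f on a uniform mesh using a finite difference discretization and an iterative (Jacobi) update, showing the roles of the mesh nodes, the discretized equation and the solver loop.

```python
import numpy as np

n = 51                                   # number of nodes in the 1D mesh
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]                          # node spacing
f = np.ones(n)                           # source term (constant, for illustration)

u = np.zeros(n)                          # initial guess at every node; u(0) = u(1) = 0
for iteration in range(20_000):
    u_new = u.copy()
    # Discretized equation at each interior node:
    #   (u[i-1] - 2*u[i] + u[i+1]) / h**2 = f[i]
    u_new[1:-1] = 0.5 * (u[2:] + u[:-2] - h**2 * f[1:-1])
    if np.max(np.abs(u_new - u)) < 1e-8:  # values have stabilised: solution reached
        break
    u = u_new
```

Real CFD and FEA codes use far more sophisticated discretizations and solvers, but the pattern of mesh, discretized equations and iterative update is the same.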


11.2.2 Flow and Solid Modelling


Estimation of flow field data using computational mechanics is referred to as ‘computational fluid dynamics’ or CFD. Commonly the finite difference and finite volume discretization methods are used in CFD software packages. Estimation of stresses in solids usually involves finite element discretization, so that solid modelling is usually referred to as ‘finite element analysis’ or FEA. A number of commercial packages are available for flow and solid modelling, including CFX and Fluent (ANSYS, Canonsburg, PA, USA) and Abaqus (Dassault Systèmes Simulia, Providence, Rhode Island, USA). For use in blood flow, several groups have developed their own CFD packages (for example, Minev and Ethier 1998; Ethier et al. 1999; Sherwin and Karniadakis 1995; Witherden et al. 2014).


11.3 Processing Chain


This section describes the PSM processing chain (Fig. 11.1). The sections below describe each component of the chain in detail. An example of a full PSM processing chain has been described by Antiga et al. (2008).



Fig. 11.1
Patient specific modelling processing chain


11.3.1 Imaging


The starting point of the PSM processing chain is the acquisition of medical imaging data. Medical imaging systems were described in Chap. 9, where it was noted that a number of imaging modalities provide high-resolution 3D data. The medical imaging dataset provides 3D data on the tissues of interest in the patient. In the case of arteries this will be the arterial wall, any thrombus (e.g. present in clinically relevant aneurysms) and the vessel lumen. The ideal imaging modality for PSM should have the following features: high resolution, low noise, freedom from artefacts and high contrast between tissues. A brief summary of the use of different imaging modalities in PSM is given below:

Computed tomography (CT). This is the imaging modality which comes closest to the set of criteria listed above. It has been widely used in PSM of the larger arteries and of aneurysms. With multislice scanning it is possible to acquire gated 3D cardiac data from which coronary artery geometries may be obtained. Using CT it is possible to distinguish the thrombus from the lumen in aneurysms. The main limitation is the inability to measure wall thickness; this arises due to resolution limitations (around 0.6 mm) and due to insufficient contrast between the wall and surrounding soft tissue. For use in atherosclerosis, the resolution is insufficient to distinguish the detailed structure within the plaque.

Magnetic resonance imaging (MRI). Image contrast is high, which enables different soft tissues to be distinguished. The good soft tissue contrast also enables some visualisation of the aortic wall, which is often not possible on CT. For use in atherosclerosis, MRI has been used to acquire 2D and 3D data in atherosclerotic plaque, where its excellent soft tissue discrimination enables visualisation of the different plaque components. However, MRI has a number of limitations for PSM. Acquisition times for 3D data can be long. For data acquired from the thorax and upper abdomen, the patient must hold their breath to limit the displacement of the chest, and image registration tools are required to register the images together and remove this motion. MRI has much larger slice intervals than CT, typically around 5–6 mm for abdominal imaging protocols. So although MRI has excellent in-plane (x, y) pixel resolution (<1 mm), the resulting geometries may lack detail in the z-direction and be unsuitable for PSM.

Ultrasound. In principle, 3D ultrasound data may be acquired using externally applied transducers. However, in practice, there are problems associated with registration, low resolution, and loss of data due to calcifications and bowel gas (Hammer et al. 2009). These problems are largely resolved by the use of intravascular ultrasound (IVUS), where the transducer operates at a much higher frequency (giving much improved spatial resolution of 50–100 μm) and images the tissues from inside the vessel. IVUS has been used to provide 3D geometries for PSM since the very early days of PSM (Chandran et al. 1996; Krams et al. 1997). The position and orientation of the IVUS scan-plane need to be known so that the IVUS data can be positioned within a 3D geometry. Krams et al. (1997) used an angiography system to obtain this information; the overall system was known as ANGUS (ANGiography and UltraSound).

Optical coherence tomography (OCT). This is also an invasive technique which can be used for imaging arteries and has very good spatial resolution of 10–20 μm. It was noted in Chap. 9 that the critical thickness of the cap in atherosclerotic plaque is around 70 μm, far below the resolution of CT, MRI or transcutaneous ultrasound, and the only technique capable of providing accurate measurements of cap thickness in vivo is OCT.


11.3.2 Segmentation


Segmentation is the process whereby the surfaces of the organ of interest are identified. Segmentation may also involve defining the boundaries between different regions in the organ. Typically, for fluid modelling the inner lumen of the vessel is required. For solid modelling, ideally both the inner and outer surfaces of the vessel wall should be identified. For abdominal aortic aneurysms the region of thrombus is required. For atherosclerosis, the regions corresponding to different plaque constituents need to be identified. Segmentation concerns the detection of edges in an image and is commonly used in 3D imaging for visualisation of structures; for example, in CT imaging in the radiology department, the soft tissues can be ‘peeled back’ to reveal underlying organs, skeleton etc. Segmentation can be manual, automated or semi-automated.

Manual segmentation consists of a trained operator working through each image and defining the boundaries by hand, which is extremely time-consuming. Manual segmentation is still used, often when image quality is low and automated methods fail.

Threshold methods. These are the simplest automated segmentation methods and involve looking for differences in intensity values between adjacent voxels. Figure 11.2 shows a threshold-based method in operation for identification of the inner lumen of an abdominal aortic aneurysm in a CT dataset. The algorithm starts in the middle of the lumen and works outwards radially. When the intensity value exceeds a threshold value, the edge has been found. These simple methods work best when image noise is low. Some commercial software such as Mimics (Materialise, Belgium) allows manual definition of the organ boundaries which are then smoothed to produce a 3D surface mesh suitable for modelling.
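A minimal sketch of the radial threshold search described above is given below. It is illustrative only: the seed point, threshold value and direction of the intensity test depend on the modality and contrast protocol.

```python
import numpy as np

def radial_threshold_contour(image, seed, threshold, n_rays=72, max_radius=200):
    """March outwards from a seed point placed in the lumen along evenly
    spaced rays; the first pixel whose intensity crosses the threshold is
    taken as the lumen edge in that direction."""
    cy, cx = seed
    contour = []
    for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(angle), np.cos(angle)
        for r in range(1, max_radius):
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < image.shape[0] and 0 <= x < image.shape[1]):
                break                      # ray left the image without finding an edge
            if image[y, x] > threshold:    # intensity exceeds threshold: edge found
                contour.append((y, x))
                break
    return np.array(contour)               # ordered (row, col) edge points
```

The resulting contour would then typically be smoothed (e.g. with median and mean filters, as in Fig. 11.2) before surface reconstruction.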



Fig. 11.2
Segmentation of the lumen of an abdominal aortic aneurysm. a Lumen marker is placed on the CT scan. b A simple threshold method is used to identify the lumen. c A median filter is applied to smooth the contour. d The contour is further smoothed using a mean filter

Deformable models. These are often referred to as ‘active contours’ (2D), ‘active surfaces’ (3D) or ‘snakes’ (Xu et al. 2000). These provide a widely used and powerful segmentation technique. The snake is a connected set of points which can deform dependent on the local image content. Typically an initial contour is seeded and grows outwards until it reaches the organ boundary. Local image measures which have high values where there is an edge are used to constrain the snake. The snake will often be ‘trained’ from learning datasets to exhibit some sort of geometrical behaviour, e.g. when segmenting the left ventricle the snake knows roughly what shape a typical left ventricle is and this is used as a constraint in the process. The result from this approach is a robust 3D reconstruction with good reproducibility.
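As an illustration of the deformable model approach, the following sketch uses the active_contour function from scikit-image; the image, initial contour and parameter values are placeholders rather than recommended settings.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Placeholder 2D grayscale slice; in practice this would be one image of
# the 3D dataset (e.g. a CT or MR slice through the vessel).
img = np.random.rand(256, 256)

# Seed an initial circular contour inside the presumed lumen.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([128 + 40 * np.sin(theta),   # row coordinates
                        128 + 40 * np.cos(theta)])  # column coordinates

# Smooth the image to suppress noise, then let the snake deform towards
# image edges under the smoothness constraints set by alpha and beta.
snake = active_contour(gaussian(img, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
```

Running such a contour on every slice of the dataset, and stacking the results, yields the 3D surface used in the later stages of the processing chain.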

Automatic segmentation methods represent the ideal scenario for the operator. These work best when the image quality is very good such as for CT images. Automated segmentation techniques can be difficult to develop, especially when the image quality is poor. If the information required for segmentation is not present in the image then no amount of processing will help and the best that can be done is a ‘best guess’, either by the automated software or by the operator. Usually segmentation in PSM is done as a combination of automated and manual input.


11.3.3 Geometry Preparation


Prior to meshing, the surfaces obtained from segmentation must be prepared. For flow modelling, the blood is only in contact with the inner surface of the vessel, so a single-layer surface will suffice. For solid modelling of a vessel, both an inner and an outer surface are required. Due to imaging constraints, especially in CT, it is often not possible to identify the outer surface of the vessel. In this case, it is commonly assumed that the vessel has a particular wall thickness. In the case of abdominal aortic aneurysm, it is common to assume that the wall has a constant thickness of 1.9 mm (Raghavan et al. 2000), or that the wall thickness is 1.5 mm at thrombus-free sites and 1.13 mm at thrombus-covered sites (Gasser et al. 2010). The surfaces of the resulting geometries must be smoothed to remove artefacts of the reconstruction algorithm and to ensure that the surfaces do not cause problems during the meshing or modelling stages.
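Where the outer wall cannot be imaged, one simple way to generate it is to offset the inner (lumen) surface along its outward normals by the assumed thickness. The sketch below assumes the inner surface is available as vertex coordinates with per-vertex outward normals (both hypothetical inputs here).

```python
import numpy as np

def offset_surface(inner_vertices, vertex_normals, thickness=1.9):
    """Create an approximate outer wall surface by offsetting each vertex
    of the inner surface along its outward unit normal by a uniform
    assumed wall thickness (1.9 mm, after Raghavan et al. 2000)."""
    unit_normals = vertex_normals / np.linalg.norm(vertex_normals, axis=1,
                                                   keepdims=True)
    return inner_vertices + thickness * unit_normals
```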


11.3.4 Meshing


Meshing is the process whereby the 3D segmented geometry is divided into many elements (Figs. 11.3 and 11.4). Mesh generation is one of the critical parts of the patient specific modelling process. A larger number of elements gives increased solution accuracy, but at the expense of increased processing time. The total number of elements employed for a given model is therefore a balance between solution accuracy and processing time. While the mesh is generated using a specialist computer programme, the operator has considerable input in defining element types and element sizes. For example, where there are large velocity gradients or stress gradients the mesh density needs to be higher. The process of mesh optimisation is therefore essential and involves adjusting the local mesh following examination of the solution; in practice, several iterations between mesh and solution may be required.
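One common way to judge when the mesh is fine enough is a mesh convergence (refinement) study, sketched below. The build_mesh and run_simulation callables, the element sizes and the 2 % tolerance are hypothetical placeholders for whatever meshing tool, solver and quantity of interest (e.g. peak wall stress) are actually in use.

```python
def mesh_convergence_study(geometry, build_mesh, run_simulation,
                           element_sizes=(2.0, 1.5, 1.0, 0.75, 0.5),
                           tolerance=0.02):
    """Refine the mesh until the monitored output changes by less than
    `tolerance` between successive meshes.

    `build_mesh(geometry, size)` and `run_simulation(mesh)` are supplied
    by the caller and stand in for the meshing and solver steps."""
    previous = None
    for size in element_sizes:             # characteristic element size, mm
        mesh = build_mesh(geometry, size)
        result = run_simulation(mesh)      # quantity of interest, e.g. peak stress
        if previous is not None and abs(result - previous) / abs(previous) < tolerance:
            return size, result            # mesh judged sufficiently fine
        previous = result
    return element_sizes[-1], previous     # finest mesh tried; may not be converged
```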



Fig. 11.3
Meshing of an abdominal aortic aneurysm geometry. a Reconstructed geometry from CT data. b Volume mesh. c Tetrahedral element. d Close up showing position of the 10 nodes




Fig. 11.4
Typical 3D reconstruction based on the marching squares/cubes algorithm in Mimics v18 (Materialise, Belgium). a The entire torso with external leads. b The main features of the skeleton. c The abdominal aorta with infrarenal and renal branches, iliac bifurcation, common iliac arteries and internal/external iliac arteries. This case shows a 91-year-old female with an isolated common iliac artery aneurysm and thrombus (green)

Different types of element may be used. Common element shapes are the tetrahedron (shaped like a pyramid) and the hexahedron (shaped like a brick), and meshes may involve both types of element. Hexahedral elements are generally better suited to structures with straight edges, so in PSM the use of tetrahedral elements is more common. For CFD, some codes such as STAR-CCM+ (CD-adapco Group) use polyhedral elements, which offer improved computation time over tetrahedral elements while maintaining their flexibility for meshing complex geometries.


11.3.5 Computational Modelling


The next stage is computational modelling in which the solution is produced after several iterations. A number of different modelling regimes can be adopted. Computational fluid dynamics can be performed using a rigid-walled approach, which is simple and sufficient for providing output data on basic haemodynamics. A moving-wall method can be adopted with input of moving-wall geometry data. Solid modelling is used for estimation of tissue stress. Solid modelling alone is suitable for estimation of tissue stress in abdominal aortic aneurysms and for 2D studies in atherosclerotic plaque. Combined solid-fluid modelling is called fluid structure interaction or FSI. This is needed when the pressure distribution is not uniform within the 3D geometry and is essential for 3D studies of tissue stress in stenosed arteries.
