© Springer International Publishing AG 2017
Jon Kobashigawa (ed.), Clinical Guide to Heart Transplantation, DOI 10.1007/978-3-319-43773-6_18
18. The Future of Heart Transplantation
(1) Director, Advanced Heart Disease Section, Cedars-Sinai Heart Institute, Los Angeles, CA, USA
(2) Director, Heart Transplant Program, Cedars-Sinai Heart Institute, Los Angeles, CA, USA
Keywords
Heart failure · Heart transplantation · Tolerance · Chimerism · Genomics · Personalized medicine · Whole organ engineering · Stem cell therapy
Introduction
The field of heart transplantation has made undeniable progress since the first human-to-human heart transplant was performed in 1967, and advances in translational medicine bring tremendous further potential. As heart transplantation remains the preferred therapy for end-stage heart failure, this chapter provides an overview of the most promising innovations in the field, including advances in immunosuppression and the induction of tolerance. It also acknowledges recent advances in the prevention of heart failure, as well as the rise of mechanical circulatory support devices as destination therapy, which may reduce the demand for donor hearts at a time of short supply.
Acquired Tolerance: the Holy Grail of Transplant, and How It Might Be Achieved
As emphasized previously in Chap. 10, contemporary immunosuppression plays a crucial role in maintaining the success of heart transplantation in the modern era. Unfortunately, life-long immunosuppression does more than prevent rejection and reduce the risk of subsequent poor outcomes: long-term treatment also results in toxicity, particularly nephrotoxicity, as well as an increased risk of infection and malignancy.
While there have been advances in immunosuppression in the last two decades, improvements in long-term survival have plateaued [1]. Future improvement in post-cardiac transplant survival is more likely to be achieved by targeting the mechanisms responsible for long-term mortality. This includes cardiac allograft vasculopathy, which is essentially a form of chronic rejection and could be targeted effectively by theoretical induction of tolerance. Furthermore, complete tolerance would lessen the need for immunosuppression, and thus reduce malignancy-related complications. In order to achieve the holy grail of acquired tolerance, one must understand the mechanisms behind chronic rejection and utilize novel strategies to abrogate them; much work is ongoing in this arena.
Manipulation of T- and B-cell Mechanisms
While the traditional methods of immunosuppression and previous attempts at inducing tolerance with agents such as ATG have targeted the pathways leading to T-cell activation [2], recent research focuses on the role of regulatory T cells (T-regs) [3]. Naturally occurring CD25+CD4+ T-regs, which develop in the thymus under the control of the transcription factor Foxp3, suppress immune responses to foreign antigens in both animal and human models of solid organ transplantation [3]. Furthermore, existing pharmacologic agents demonstrated to reduce rejection also increase T-reg frequencies [4]. Thus, these alloantigen-induced T-regs are able to prevent acute as well as chronic graft rejection. Interestingly, while T-regs can be induced by alloantigen pretreatment [5, 6], the presence of the allograft as the source of donor alloantigen is essential for maintaining the unresponsive state [7]. The ability to generate and maintain these alloantigen-specific T-regs could theoretically induce tolerance in the future, preventing rejection while leaving patients immunosuppression-free.
While B-cells are recognized primarily as antibody-secreting cells in the pathogenesis of rejection, they also function as antigen-presenting cells that interact with T-cells [8], leading to antibody-mediated rejection, and express complement receptors through which adaptive immunity is regulated [9]. Analogous to the regulatory T-cell pathways mentioned previously, the immune-regulatory roles of B cells have only recently come to light; indeed, there is some evidence that they are increased in tolerant human renal transplant recipients compared to stable recipients receiving immunosuppression [10, 11], and their presence in tertiary lymphoid tissue may even regulate immune responses [12]. Pre-clinical models demonstrate that B-regulatory cells (B-regs) synergistically increase the number of T-regs [13] and secrete the anti-inflammatory cytokine IL-10 [14]. In humans, the B cell subset CD19+CD24hiCD38hi secretes the highest amount of IL-10 in response to CD40 stimulation, compared to other peripheral blood B cell subsets [14]. However, as with T-regs, the role of B-regs in the possible induction of tolerance remains to be fully defined; how these findings can subsequently be exploited to maintain tolerance remains to be seen.
Strategies to Achieve Chimerism
Chimerism, defined as the coexistence of two allogeneic cell lines, remains the ultimate goal: it would enable specific tolerance to donor antigens while simultaneously retaining the ability to fight infection and prevent malignancy. In recent years, researchers have attempted to establish central tolerance via transplantation of donor bone marrow. In one study involving six human kidney transplant patients appropriately conditioned with non-myeloablative therapy (cyclophosphamide, ATG and thymic irradiation), bone marrow transplantation was demonstrated to induce tolerance requiring no immunosuppression. However, all reports of this method have resulted in loss of mixed chimerism within months of transplantation [15], possibly due to inflammatory responses [16].
A newer approach by Leventhal et al. [17], using bioengineered mobilized cellular product enriched for hematopoietic stem cells and tolerogenic graft facilitating cells combined with non-myeloablative conditioning, was employed in a recent study involving 19 kidney allograft recipients with highly mismatched donors. Thus far, 12 of the 19 patients have been effectively weaned off immunosuppression, with intact grafts and maintenance of stable mixed chimerism.
In the future, this early success of induction of stable mixed chimerism across HLA barriers may be achievable in the regular clinical practice of heart transplantation. A new clinical trial entitled “Bone Marrow Transplant to Induce Tolerance in Heart Transplant Recipients” is currently taking place at the University of Louisville [18] and results are keenly anticipated. Further tolerance-induction research will depend on two different aspects: further investigation of the mechanism of tolerance, and further studies to increase safety and broaden the applicability of initial studies using enhanced stem cell transplantation.
New Directions in Immunosuppression
Novel Immunosuppressive Agents
Given that induced tolerance is unlikely to be achieved in the near future, immunosuppressive medications and immune monitoring will still be required. Thus, minimizing immunosuppression and immunosuppression-associated complications while maintaining efficacy remains the goal of post-transplant management.
During the past few decades, new drugs have been added into post-transplantation clinical practice. From the development of more powerful and specific immunosuppressants, especially beneficial for sensitized patients (see Chap. 6), to new treatments for cardiac allograft vasculopathy (see Chaps. 10 and 12), advances in the science of immunology seem to hold the key to expanding the success of heart transplantation in our treatment of end-stage cardiac disease.
T-cell mediated acute cellular rejection remains a common issue post-transplant. A sustained T-cell response following antigen recognition requires costimulatory signals delivered through accessory T-cell surface molecules; an example of such a costimulatory pathway is CD28-B7. Inhibition of CD28 has therefore been demonstrated in animal models to reduce T-cell proliferation and prolong allograft survival. This highly specific mechanism of immunosuppression may also avoid the undesired adverse effects seen with other immunosuppressants. Belatacept is a humanized fusion protein based on CTLA-4, a homolog of CD28, which binds the B7 molecule and blocks its interaction with CD28 itself. Currently, phase 3 trials of belatacept are taking place in kidney transplantation; phase 2 trials have shown that when used in combination with MMF, basiliximab and steroids, it allows safe avoidance of CNIs with good outcomes [19, 20].
Eculizumab, a humanized monoclonal antibody directed against the terminal complement protein C5, is also being investigated in a pilot trial in heart transplant recipients. By inhibiting the cleavage of C5, it prevents the formation of the membrane attack complex [21]. In sensitized renal transplant recipients with high levels of donor-specific alloantibody, peri-operative eculizumab administration is associated with significantly decreased incidence of early AMR [22]; the hope is that this finding will translate to cardiac transplant recipients.
Personalized Medicine for Immunosuppression
As mentioned in Chap. 10, maintaining an optimal immunosuppressant level is crucial to suppress rejection, while avoiding infection. Currently, therapeutic drug monitoring, clinical evaluations, endomyocardial biopsy, echocardiography, and the T-cell immune assay are used as the principal tools for rejection monitoring during drug weaning.
However, pharmacogenetic polymorphisms may have the potential to predict future adverse events from certain immunosuppressants and, more specifically, individual dosage requirements for different immunosuppressants. For example, certain single nucleotide polymorphisms (SNPs), such as those found in the CYP3AP1 pseudogene, which is strongly associated with hepatic CYP3A5 activity, are more common in African Americans. Subsequent studies have suggested that CYP3AP1 genotype is a major factor in determining the dose requirement for tacrolimus [26]; a recent pharmacogenetic analysis of tacrolimus that included a large group of African American kidney transplant recipients showed that African Americans had consistently lower median troughs despite 60% higher daily doses. Furthermore, the CYP3A5*3 variant was associated with a reduction in troughs [27].
Nevertheless, genetic variations do not completely account for trough variability; clinical factors and other comorbidities also play a role [27]. Hopefully, with further explication of these pharmacogenetic mechanisms [28, 29], dosing equations that use genotype and relevant clinical variables can be developed in place of weight-based dosing. Such equations may also be able to provide transplant physicians with more personalized immunosuppression targets for patients (rather than the current suggested “range”).
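As an illustration of what a genotype-informed dosing equation might look like, the sketch below adjusts a conventional weight-based tacrolimus starting dose by CYP3A5 genotype. The function name, multipliers, and base coefficient are hypothetical placeholders chosen for illustration; this is a sketch of the concept, not a validated clinical algorithm.

```python
def starting_tacrolimus_dose_mg(weight_kg: float, cyp3a5_genotype: str) -> float:
    """Hypothetical genotype-informed starting-dose sketch.

    CYP3A5 expressers (*1 carriers) metabolize tacrolimus faster and
    typically need higher doses to reach the same trough level; the
    multipliers below are illustrative, not validated coefficients.
    """
    base = 0.075 * weight_kg  # conventional weight-based estimate (mg per dose)
    multipliers = {
        "*1/*1": 1.5,   # two functional alleles: fastest metabolism
        "*1/*3": 1.25,  # one functional allele
        "*3/*3": 1.0,   # non-expresser: reference dose
    }
    return round(base * multipliers[cyp3a5_genotype], 2)

# An 80-kg non-expresser vs. an 80-kg homozygous expresser:
print(starting_tacrolimus_dose_mg(80, "*3/*3"))  # 6.0
print(starting_tacrolimus_dose_mg(80, "*1/*1"))  # 9.0
```

In a real dosing equation, the genotype term would sit alongside the clinical covariates (age, hematocrit, interacting drugs) mentioned above, with coefficients fit from trough data.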
Genomics for Rejection Monitoring and Outcome Prediction
The primary focus of care in organ transplant recipients has always been to prevent rejection. While rejection surveillance is currently performed in many ways, including monitoring of serum immunosuppressant levels, clinical assessment, and echocardiography, endomyocardial biopsy remains the gold standard. Unfortunately, as highlighted in Chap. 12, biopsy is an invasive procedure with potential complications, and rates of pathologist discordance remain high. Genomic medicine, a discipline that uses an individual’s genomic information to help guide clinical care, offers an alternative, non-invasive avenue to monitor for rejection in the transplant recipient. Through the analysis of specific DNA, RNA and protein targets, genomics offers a personalized approach to organ rejection surveillance. Most appealing is that most genomic testing can be done in the form of a laboratory test without requirement for invasive procedures or hospitalization.
Gene Expression Profiling
While covered in depth in Chap. 12, gene expression profiling using the Allomap test remains the only FDA-approved non-invasive test for the surveillance of rejection, and in clinical trials was non-inferior to endomyocardial biopsy for rejection surveillance in stable, low-risk patients greater than 2 months post-transplantation. While the negative predictive value is extremely high at 99% (i.e., for identifying quiescent patients who do not require biopsy), the positive predictive value remains low at 7%. Further retrospective cohort studies have subsequently demonstrated associations between Allomap score variability and risk of subsequent mortality [30, 31].
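The asymmetry between a 99% negative and 7% positive predictive value is largely a consequence of the low prevalence of rejection in the surveilled population, as the Bayes calculation below illustrates. The sensitivity, specificity, and prevalence figures here are assumed round numbers for illustration, not values taken from the Allomap trials.

```python
def predictive_values(sens: float, spec: float, prev: float) -> tuple[float, float]:
    """Compute (PPV, NPV) from sensitivity, specificity, and disease prevalence."""
    tp = sens * prev              # true positives
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# With rejection prevalence of ~2% in stable low-risk patients, even a
# reasonably sensitive and specific test yields a low PPV but a high NPV:
ppv, npv = predictive_values(sens=0.80, spec=0.70, prev=0.02)
print(f"PPV = {ppv:.2f}, NPV = {npv:.3f}")  # PPV = 0.05, NPV = 0.994
```

This is why a negative result is far more actionable (biopsy safely deferred) than a positive one.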
Donor-Derived Cell-Free DNA
Donor-derived cell-free DNA (dd-cfDNA) is a new modality currently under investigation. Like Allomap, it is a non-invasive blood test; it exploits the fact that the donor genome is separate and unique compared to the recipient genome, and that components of donor DNA can be detected in the serum of the recipient [32]. The basic principles behind dd-cfDNA testing in transplantation rely on the fact that rejection causes damage to donor graft cells, leading to the release of DNA fragments from the donor organ cells into the periphery. These fragments of dd-cfDNA can be detected and quantified, and assessed over time to correlate to clinical organ function [33].
The concept of dd-cfDNA testing was originally pioneered in sex-mismatched donor-recipient pairs in solid organ transplantation, with male donors and female recipients; the SRY gene marker of the Y-chromosome was employed as a target in order to detect dd-cfDNA in the periphery of recipients [34, 35]. Following on from this, a more universal approach not limited to sex-mismatched recipients was pioneered in liver/kidney/pancreas transplant recipients; instead of sex-specific DNA markers, DNA fragments released from apoptotic donor leukocytes were instead used as a DNA target, assessing for donor-specific HLA DR genes [36]. At 1 year post-transplant, donor specific HLA-DR genes were identified in 32% of the recipients. However, no correlation was found between the presence of donor HLA-DR and the incidence of rejection episodes.
The analysis of donor-specific HLA-DR, while useful in reinforcing the concept that dd-cfDNA could be found in recipients, was too specific and would have required a dedicated assay to be developed for each donor-recipient pair. Thus, a broader approach was subsequently pioneered by Snyder et al., in which DNA from heart transplant donors and recipients was sequenced in a genome-wide manner [33]. Through genotype analysis, recipient plasma cell-free DNA was scoured for donor-specific alleles of single nucleotide polymorphisms (SNPs) not present in the recipient’s genome. The fractional concentration of dd-cfDNA relative to total cell-free DNA in each sample was then calculated. These plasma samples were collected longitudinally and compared to concomitant endomyocardial biopsy samples assessed by pathologists for grading of rejection over the course of the first year post-transplant.
Based on these analyses, it was established that a dd-cfDNA value of 1.7% could be used as a threshold to generate an 83% true positive rate and 16% false positive rate for rejection. Furthermore, a dd-cfDNA concentration below 1% appeared to represent a “normal” value for healthy cardiac transplant recipients. In patients who experienced significant rejection episodes, the concentration of dd-cfDNA rose prior to clinical and histopathological evidence of rejection; once the patient was treated for acute rejection, dd-cfDNA levels decreased to the baseline values found prior to rejection. These results have been replicated in a prospective 65-patient study by De Vlaminck et al. in heart transplant recipients [37] and a 63-patient study by Grskovic et al. [38]; in the latter study, it was also noted that if dd-cfDNA did not fall more than twofold after rejection treatment, there was a higher incidence of persistent low-grade rejection, suggesting insufficient treatment.
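The quantification step described above can be sketched minimally: count the sequencing reads carrying donor-specific SNP alleles, compute the donor fraction, and compare it to the 1.7% threshold. The read counts and function names below are invented for illustration and do not reproduce any particular assay's pipeline.

```python
def donor_fraction(donor_reads: int, total_reads: int) -> float:
    """Fraction of informative cell-free DNA reads carrying donor-specific SNP alleles."""
    return donor_reads / total_reads

def flag_possible_rejection(fraction: float, threshold: float = 0.017) -> bool:
    """Apply the ~1.7% dd-cfDNA threshold discussed in the text."""
    return fraction >= threshold

# Illustrative sample: 450 donor-specific reads out of 20,000 informative reads.
f = donor_fraction(450, 20_000)  # 0.0225, i.e. 2.25%
print(f"{f:.2%}", flag_possible_rejection(f))  # 2.25% True
```

In practice, the fraction would be tracked longitudinally per patient, with a sustained rise above baseline (rather than a single crossing) prompting further workup.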
Interestingly, dd-cfDNA has also been demonstrated to be useful in detecting certain types of infection in transplant recipients. In lung transplant patients with cytomegalovirus (CMV) infection, the level of dd-cfDNA has been used to differentiate infection from rejection [39]. In this study, levels of dd-cfDNA enabled differentiation between no rejection and moderate to severe rejection. Notably, patients with CMV infection demonstrated elevated dd-cfDNA levels, but not to the degree of patients with rejection. In the future, this application may have potential for assessing clinically deteriorating patients in whom the diagnosis of rejection versus infection must be made quickly.
Overall, these data support the idea that dd-cfDNA may be a useful biomarker of organ health; theoretically it would be advantageous over Allomap because of its higher positive predictive value, its ability to be used before 2 months post-transplant (unlike Allomap), and its potential utility in cases of antibody-mediated rejection. Larger, multicenter studies to further validate the use of dd-cfDNA monitoring are required. However, this type of genome-wide parallel SNP sequencing of both donor and recipient is expensive, and donor DNA samples would potentially need to be maintained for years after transplant (for as long as the recipient is alive).
Promisingly, a quicker and more economical method using a combination of assays that allows the detection of dd-cfDNA in a short time was recently developed by Beck et al. [40]. In this study, only SNPs already investigated for their minor allelic frequency, and with frequencies greater than 40%, were used. By the Hardy-Weinberg principle, a SNP with a minor allelic frequency between 40% and 50% would be found homozygous in both the donor and recipient in about 25% of cases for each allele; on this basis, the probability of the donor and recipient being homozygous for different alleles was calculated to be approximately 12.5%. Thus, to identify at least 3 informative SNPs, no fewer than 30–35 different SNPs with this minor allelic frequency would have to be screened; this requires considerably fewer resources than the roughly 3000 SNPs that would need to be analyzed if the SNPs were unselected.
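The Hardy-Weinberg arithmetic behind the Beck et al. panel size can be reproduced directly: for a SNP with minor allele frequency q, the two homozygotes occur with frequencies q² and (1-q)², and a donor-recipient pair is fully informative when the two individuals are homozygous for opposite alleles. The helper function name below is our own.

```python
def informative_pair_probability(maf: float) -> float:
    """Probability that donor and recipient are homozygous for opposite
    alleles of a SNP with minor allele frequency `maf`, assuming
    Hardy-Weinberg equilibrium and unrelated individuals."""
    p, q = 1 - maf, maf
    # donor homozygous major & recipient homozygous minor, or vice versa
    return 2 * (p ** 2) * (q ** 2)

# At maf = 0.5, each homozygote has frequency 0.25, so an informative
# pair occurs with probability 2 * 0.25 * 0.25 = 12.5%:
print(informative_pair_probability(0.5))  # 0.125

# Expected informative SNPs from a 32-SNP panel, matching the text's
# estimate that 30-35 screened SNPs yield at least 3 usable ones:
print(32 * informative_pair_probability(0.5))  # 4.0
```

Note that at maf = 0.4 the probability drops to 2 × 0.36 × 0.16 ≈ 11.5%, which is why the upper end of the 30–35 SNP range is needed when allele frequencies are below 50%.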
Assessment of MicroRNA
The use of microRNAs (miRNAs) as biomarkers of rejection represents another exciting recent development in the field of genomic medicine as applied to transplantation. miRNAs are a class of short RNA sequences that act as post-transcriptional regulators, binding to messenger RNA (mRNA) and causing either degradation or silencing of its translation. While only about 1000 miRNAs are known (with more being detected), there are approximately 30,000 mRNAs; thus one individual miRNA may regulate the expression of many mRNAs and have a widespread effect on gene expression. With regard to the detection of rejection, the miRNAs implicated in the regulation of B-cell and T-cell differentiation and function, T-cell receptor signaling, toll-like receptor signaling, cytokine production, T-regulatory cell function, and antigen presentation are of most interest [41]. These miRNAs can be found in plasma in stable form and are shed during cell turnover, which makes them potentially very useful for a peripheral blood test to detect rejection. While investigation of miRNAs for detection of rejection initially began with intragraft miRNAs, given the high rate of pathologist discordance and the need for more definitive biopsy diagnosis, the eventual goal of genomic applications in transplantation is to avoid invasive biopsy procedures. Thus, newer miRNA research has also examined the potential of peripheral miRNAs, taking into account the need for an accurate non-invasive method of rejection detection.
The concept that miRNAs are differentially expressed during acute rejection was pioneered by Sui et al., who correlated biopsy samples showing acute rejection with the expression of 20 intragraft miRNAs. They demonstrated in renal transplant patients that these miRNAs were differentially expressed in a specific pattern, with 8 up-regulated and 12 down-regulated [42]. Despite the small sample size of 9 (3 with rejection, 6 controls), this study lent credence to the concept of defining an organ-specific signature or pattern of miRNA expression as a marker of rejection. A further validation cohort study by Anglicheau et al. [43], with a greater number of renal transplant recipients (33 in total: 7 in the test cohort, of whom 3 had rejection and 4 were healthy, and 26 in the validation cohort), showed that the miRNAs miR-142-5p, miR-155 and miR-223 predicted biopsy-proven acute rejection with a sensitivity and specificity greater than 90%. Notably, miR-155 is encoded within an exon of the B-cell integration cluster gene (bic), and B-cell, T-cell and toll-like receptor activation all lead to increased bic expression, suggesting a role for these processes in acute rejection [44].
The first study to translate this concept to peripheral miRNA was published in 2014 by Van Huyen et al., and demonstrated that plasma levels of circulating miRNA could be used as a biomarker for the detection of rejection [45]. In this study, 14 miRNAs of interest were assessed: 4 associated with endothelial activation, 3 with cardiac myocyte remodeling, and 7 with inflammation; the selection was based on miRNAs previously known to be involved in graft rejection, cardiovascular pathogenesis, endothelial injury/activation, vascular inflammation, immune signaling pathways and T-cell activation. Tissue and plasma samples were collected from 60 heart transplant recipients, 30 of whom had acute rejection and 30 of whom were matched controls (matched on recipient and donor age, cold ischemic time, time from transplant to first biopsy, and immunosuppression). Every patient had a concomitant tissue biopsy for histopathological evaluation along with intragraft and peripheral miRNA analysis. Of the 14 miRNAs assessed, 7 were highly differentially expressed in intragraft biopsies and 4 were highly differentially expressed peripherally, with strong statistical significance. Specifically, levels of miR-31, miR-92a and miR-155 were significantly higher in the sera of patients with rejection compared to normal, and the level of miR-10a was significantly lower. A further cohort of 53 patients (31 with rejection, 22 healthy) validated the ability of these 4 miRNAs to discriminate rejecting from non-rejecting samples; crucially, a subsequent subgroup analysis of those with cellular rejection versus antibody-mediated rejection (AMR) showed that these 4 miRNAs continued to differentiate between normal and either form of rejection. Furthermore, the 4 circulating miRNAs were differentially expressed in cases of rejection regardless of whether the rejection was early (<1 year post-transplant) or late (>1 year post-transplant).
While most studies assessing intragraft or peripheral miRNA have focused on their ability to discriminate rejection, other work has also been performed correlating miRNAs with other negative sequelae post-transplant such as development of CAV. A 52-patient (30 with CAV, 22 without) study by Singh et al. [46] assessed levels of five different miRNAs known to be associated with endothelial activation/injury and correlated them with the presence of CAV at the time of angiography; two of the miRNAs, miR-126-5p and miR-92a-3p, were found after multivariate analysis to be able to discriminate patients with CAV compared to those without.
Certainly, these studies confirm that both intragraft and peripheral circulating miRNAs have potential as viable biomarkers of rejection. Both acute cellular and antibody-mediated rejection, as well as acute and chronic forms of rejection, have been detected with high accuracy using this modality. This avenue of genomic medicine offers exciting potential, as miRNA assays may eventually help reduce or replace the invasive endomyocardial biopsy for the screening of graft rejection.
Assessment of Molecular Messenger RNA
Assessment of molecular RNA, much like assessment of microRNA, seeks to define a definitive molecular signature or pattern of expression for rejection, given the current high rates of inter-pathologist discordance. Furthermore, with conventional histopathological criteria, there are many “borderline” or ambiguous cases. To solve this problem, Halloran et al. [47], using data from kidney transplant biopsies and from the Genome Canada Study, created a new disease classification for both ACR and AMR with the use of mRNA microarrays. This work was predicated on the notion that ACR classification is frequently ambiguous and that kidney transplant AMR is frequently C4d-negative and has been greatly underestimated by conventional criteria. The use of microarrays helped define the mRNA transcripts induced by acute kidney injury that correlated with reduced function. For example, expression of endothelium-associated mRNA transcripts was increased in injured and diseased kidneys, with several increased in AMR. Based on the expression values of selected mRNA transcripts for each biopsy, an AMR score was developed. The AMR score correlated with the presence of AMR microcirculation lesions and the detection of DSA, and was high in both C4d-positive and C4d-negative AMR. The AMR score also predicted future graft loss in Cox regression analysis better than the conventional diagnosis of AMR.