(C) PLOS One [1]. This unaltered content originally appeared in journals.plosone.org. Licensed under Creative Commons Attribution (CC BY) license. url:https://journals.plos.org/plosone/s/licenses-and-copyright ------------ An explainable artificial intelligence approach for predicting cardiovascular outcomes using electronic health records ['Sergiusz Wesołowski', 'Department Of Human Genetics', 'Utah Center For Genetic Discovery', 'University Of Utah', 'Salt Lake City', 'Ut', 'United States Of America', 'Gordon Lemmon', 'Edgar J. Hernandez', 'Alex Henrie'] Date: 2022-02 Abstract Understanding the conditionally-dependent clinical variables that drive cardiovascular health outcomes is a major challenge for precision medicine. Here, we deploy a recently developed massively scalable comorbidity discovery method called Poisson Binomial based Comorbidity discovery (PBC), to analyze Electronic Health Records (EHRs) from the University of Utah and Primary Children’s Hospital (over 1.6 million patients and 77 million visits) for comorbid diagnoses, procedures, and medications. Using explainable Artificial Intelligence (AI) methodologies, we then tease apart the intertwined, conditionally-dependent impacts of comorbid conditions and demography upon cardiovascular health, focusing on the key areas of heart transplant, sinoatrial node dysfunction and various forms of congenital heart disease. The resulting multimorbidity networks make possible wide-ranging explorations of the comorbid and demographic landscapes surrounding these cardiovascular outcomes, and can be distributed as web-based tools for further community-based outcomes research. The ability to transform enormous collections of EHRs into compact, portable tools devoid of Protected Health Information solves many of the legal, technological, and data-scientific challenges associated with large-scale EHR analyses. Citation: Wesołowski S, Lemmon G, Hernandez EJ, Henrie A, Miller TA, Weyhrauch D, et al. 
(2022) An explainable artificial intelligence approach for predicting cardiovascular outcomes using electronic health records. PLOS Digit Health 1(1): e0000004. https://doi.org/10.1371/journal.pdig.0000004 Editor: Mecit Can Emre Simsekler, Khalifa University of Science and Technology, UNITED ARAB EMIRATES Received: August 31, 2021; Accepted: November 17, 2021; Published: January 18, 2022 Copyright: © 2022 Wesołowski et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability: We obtained medical records from the University of Utah and Primary Children’s Hospital under an IRB that waived consent (see ethics statement). We refer to this cross-institution extract as the Utah Data Resource. Because the aggregate comprises exact dates and other protected patient information, the data cannot be made publicly available. Information regarding how qualified researchers might apply for data access can be found here: https://irb.utah.edu/about/contact/. However, all Probabilistic Graphical Models described in this paper are available through the web using the following link: https://pbc.genetics.utah.edu/lemmon2021/bayes/. Funding: This research was supported by the AHA Children’s Strategically Focused Research Network grant (17SFRN33630041) (https://professional.heart.org/en/research-programs/strategically-focused-research/strategically-focused-research-networks) and the Nora Eccles Treadwell Foundation. RD’s effort was supported by the National Institutes of Health under Ruth L. Kirschstein National Research Service Award T32 HL007576 from the National Heart, Lung, and Blood Institute (https://grants.nih.gov/grants/oer.htm). GL was supported by NRSA training grant T32H757632 (https://researchtraining.nih.gov/programs/training-grants/T32). 
SW was supported by NRSA training grant T32DK110966-04 (https://researchtraining.nih.gov/programs/training-grants/T32). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: GL, VD, MY own shares in Backdrop Health; there are no financial ties regarding this research. Introduction The application of data-science methods to electronic health record (EHR) databases promises a new, global perspective on human health, with widespread applications for outcomes research and precision medicine initiatives. However, unmet technological challenges still exist [1–3]. One is the need for improved means for ab initio discovery of comorbid clinical variables in the context of confounding demographic variables at scale. Moreover, how best to tease apart the intertwined impacts of multiple comorbidities and demographic variables on patient health remains a daunting challenge [1, 3–9]. We used a massively-scalable comorbidity discovery method called Poisson Binomial based Comorbidity (PBC) discovery [10] to search Electronic Health Records (EHRs) from the University of Utah and Primary Children’s Hospital for comorbid diagnoses, procedures, and medications. In this context, we refer to co-occurring medical diagnoses, procedures and medications using the single blanket term, comorbidity. PBC can also discover temporal relationships and quantify transition rates between various comorbidities. The result is a disease network, devoid of Protected Health Information (PHI), that is well-suited for powering downstream outcomes research. Although comorbidity discovery is a necessary first step towards enabling outcomes research, it is not an end in itself. Comorbidities do not exist as isolated pairs; rather, they combine to create a complex web of influence on any given outcome. 
While PBC is powered to discover that web, harnessing it for outcomes research requires a separate computational machinery, one capable of calculating the joint contributions of multiple, conditionally dependent variables on an outcome, so-called multimorbidity calculations [1,3,11–13]. Moreover, because researchers seek not merely to predict outcomes, but also to measure the contributions of factors driving them, ‘explainable’ solutions [14–22], rather than black box approaches, are required. We have adapted Probabilistic Graphical Models (PGMs) [2,22–27] to address these needs. PGMs are well suited for outcomes research. Unlike other methods, e.g. generalized linear models (with or without mixed effects), PGMs are capable of: (1) discovering and modelling any number of multilevel dependencies between variables; (2) capturing non-additive or non-multiplicative interactions; and (3) handling missing data without exclusion or imputation [28]. Moreover, PGMs model the full joint probability function governing relationships in the data, and thus do not necessitate a dichotomy between response and input variables. Rather, PGMs are capable of answering a prediction query for any variable conditioned on any set of inputs included in the model. Using these computational technologies, we mined the EHRs of over 1.6 million University of Utah and Primary Children’s Hospital patients, including over 500,000 mother-child pairs, for comorbid diagnoses, procedures, medications, and lab tests driving diverse cardiovascular health outcomes, focusing on three areas: heart transplant, sinoatrial node dysfunction, and congenital heart disease. Our results illuminate the comorbid and demographic landscapes surrounding these key cardiovascular outcomes in the US intermountain west, and demonstrate how our approach can inform health care disparities with precise, quantitative results in the context of a specific health care system. 
Methods Ethics statement Human subjects approval for this study was obtained following review by the University of Utah Institutional Review Board, IRB_00095807 under a waiver of consent and authorization. Patient data was not anonymized prior to the start of the study. All authors completed Human Subjects research requirements. Utah data resource The University of Utah maintains an Enterprise Data Warehouse (EDW)–a central storage and search facility for all clinical data collected from all affiliated University hospitals and clinics across the Intermountain West. SQL queries were used to aggregate data from various tables and collect the following information: (1) gender, ancestry, ethnicity, and age for each patient; (2) list of patient visits, along with visit dates, and medical terms associated with each visit, including diagnostic codes, procedure codes, and medications ordered. ICD9 and ICD10 diagnosis codes consist of 18,000 and 142,000 codes respectively, while procedural codes (CPT) include around 10,000 codes. In all, we collected records for 1.6 million patients, 21 million visits and 166 million diagnosis (DX), procedure (PX) and medication (RX) codes. See S1–S5 Tables for additional details. We combined these data with the Primary Children’s Hospital’s database of echocardiographic variables (diagnoses, ventricular function, valve gradients, chamber/vessel sizes, etc.) dating back to 2006 for 65,618 probands, 44,254 of which also appear longitudinally in the University’s EDW. These data contain 529,317 mother-child pairs with EHR data, 14,155 of which include a child with echo data, allowing us to study maternal contributions to congenital heart disease (CHD). Collectively, these data comprise the Utah Data Resource (UDR). For the purposes of computation, custom encryption is applied to the UDR to produce data free of protected health information (PHI) and unintelligible without its cyphers. 
We can then generate statistics on this PHI-free data in a variety of compute environments, decrypting the results on PHI-approved machines. In this analysis, a patient’s diagnoses are inferred via billing codes. Thus, the investigations and risk calculations presented herein reflect medical practice within the University of Utah Hospital network and Primary Children’s Healthcare. How closely they approximate underlying universal (‘true’) risks is still unknown. Moving forward, we note that the methods described below provide powerful means for large-scale cross-institutional comparisons aimed at discovering differences in medical practice and billing trends. Patient disease network We used a Poisson Binomial based methodology called PBC [10] to discover comorbidities within our EHR corpus. Standard methods such as stratification seek to control for confounding variables by ‘stratifying’ by age and gender (for instance) and calculating comorbidity statistics for each stratum, under the restrictive assumption that every patient in a stratum has the same probability of manifesting each morbidity. However, this approach fails to scale: controlling for many confounding variables leads to strata too small to detect statistically significant comorbidities. In contrast, PBC models the effects of age, gender, race, ethnicity, insurance type, and the length and density of each patient’s medical record. These input features are used to determine per-patient probabilities for each medical term, using a Poisson binomial test. The result is much greater statistical power [10]. PBC was used to find significant connections among every possible combination of ICD diagnoses, procedures, and RxNorm medication terms, thereby creating a patient disease network [10]. Patient disease network is a term borrowed from Capobianco et al. [3] and denotes a network comprising all significant connections among diagnoses, procedures and medications (Bonferroni p-value cutoff 10^-9.48). 
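The Poisson binomial test at the heart of PBC can be sketched as follows: given each patient's model-estimated probability of carrying a given pair of terms, the number of patients who actually carry both follows a Poisson binomial distribution, and the comorbidity p-value is the probability of seeing at least as many co-carriers as observed. A minimal illustration (the per-patient probabilities below are invented, not drawn from the paper; see [10] for the full method):

```python
def poisson_binomial_sf(probs, k_obs):
    """Survival function P(X >= k_obs) of a Poisson binomial variable,
    computed by exact dynamic-programming convolution over patients."""
    pmf = [1.0]  # pmf[j] = P(exactly j patients co-carry both terms)
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for j, q in enumerate(pmf):
            nxt[j] += q * (1.0 - p)   # this patient does not co-carry
            nxt[j + 1] += q * p       # this patient does co-carry
        pmf = nxt
    return sum(pmf[k_obs:])

# Invented toy cohort: each entry is one patient's expected probability of
# carrying both terms, given their demographics and record length/density.
probs = [0.01, 0.02, 0.005, 0.03, 0.01, 0.02]
p_value = poisson_binomial_sf(probs, 3)  # 3 patients observed with both terms
```

Because the per-patient probabilities differ, this test retains statistical power that a stratified analysis would lose by splitting the cohort into many small strata.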
We only considered terms appearing in at least 15 patients. This filter reduced the number of unique terms to 39,055 ICD10 diagnosis codes, 5,716 CPT procedure codes, and 1,764 RxNorm medication codes. We used Minimum Description Length clustering [32] to visualize the data, so that nodes with similar combinations of edges would lie near one another in the network. We also determined the patient flux between every pair of nodes. The result is shown in Fig 2A, which provides a visual representation of our patient disease network for the entire EHR corpus. In keeping with previous work [13,33–36] on patient disease networks, we refer to a sub-portion of the network focused on a single outcome as a trajectory, or term trajectory. Fig 2B shows a trajectory for adult heart transplant. Trajectories provide a means to display additional features of the network, such as transition probabilities (which correspond to patient flux between nodes), and the marginal frequencies of outcomes and comorbid terms within the EHR corpus. Collectively, this information allows for better intuition of the disease landscape surrounding an outcome. The trajectory is also a useful starting point for cost and service allocation calculations. Multimorbidity networks While trajectories describe transition probabilities between two comorbid terms, they provide no means to determine the combined effects of multiple comorbid diagnoses, and associated clinical procedures and medications, upon an outcome. We have employed Probabilistic Graphical Models (PGMs) to overcome this limitation. We learned the structures of the PGMs using the python3 package “pomegranate” [28], which provides a Bayesian Information Criterion (BIC)-based DP-A* exact structure search algorithm [37,38,46]. The exact search algorithm explores the entire applicable space of conditional dependencies in order to discover the optimal network structure for the data. 
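To make BIC-based exact structure search concrete, the following sketch brute-forces every acyclic parent assignment for a handful of binary variables and keeps the one with the best total BIC score. This is a toy stand-in for pomegranate's far more efficient DP-A* algorithm, workable only for a few variables; the data are invented:

```python
import itertools
import math

def bic_score(data, var, parents):
    """BIC contribution of one binary variable given a parent set:
    maximized log-likelihood minus (log N / 2) * free-parameter count."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        c = counts.setdefault(key, [0, 0])
        c[row[var]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        tot = c0 + c1
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / tot)
    # one free parameter per observed parent configuration (binary child)
    return ll - 0.5 * math.log(len(data)) * len(counts)

def has_cycle(assignment):
    # depth-first search along child -> parent links to reject cyclic graphs
    color = [0] * len(assignment)
    def dfs(v):
        color[v] = 1
        for p in assignment[v]:
            if color[p] == 1 or (color[p] == 0 and dfs(p)):
                return True
        color[v] = 2
        return False
    return any(color[v] == 0 and dfs(v) for v in range(len(assignment)))

def exact_search(data, n_vars):
    """Try every acyclic parent assignment; keep the best total BIC."""
    candidates = [s for k in range(n_vars)
                  for s in itertools.combinations(range(n_vars), k)]
    best, best_score = None, -math.inf
    for assignment in itertools.product(candidates, repeat=n_vars):
        if any(v in ps for v, ps in enumerate(assignment)):
            continue  # no self-loops
        if has_cycle(assignment):
            continue
        score = sum(bic_score(data, v, ps) for v, ps in enumerate(assignment))
        if score > best_score:
            best, best_score = assignment, score
    return best

# Invented toy data: feature 1 copies feature 0; feature 2 is independent.
data = [(i % 2, i % 2, (i // 2) % 2) for i in range(40)]
learned = exact_search(data, 3)  # recovers an edge between 0 and 1 only
```

The BIC penalty is what keeps the spurious edges to the independent third feature out of the learned structure; DP-A* reaches the same global optimum without enumerating every DAG.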
Parameter learning for this optimal network is accomplished using the loopy belief propagation algorithm [39]. We use the same package for our inference and multimorbidity risk calculations. The visual interpretation was designed using the graph_tool [40] Python3 package and the D3.js JavaScript library. For each Probabilistic Graphical Model, a maximum of 25 comorbid features were selected using PBC and validated by experts in the medical field (TAM, DW, MDP, BEB, RUS, MTF). Features judged to be of clinical relevance, importance, or interest for the field under study were selected and used as inputs to learn the PGM structure and infer risk. Patient features were encoded either as categorical variables (e.g. ancestry, ethnicity, or insurance type) or as present/absent binary variables in the case of medical diagnoses and procedures. Continuous features (e.g. age, BMI, blood pressure) were discretized based on established clinical thresholds. Because PGMs only present the facts about the data, they cannot themselves discover or infer the temporal order of events (unless specified as a Dynamic PGM). To overcome this issue, for our temporalized PGMs we imposed the order (discovered using PBC; see [10] for additional details) on the EHR extraction process prior to learning the Probabilistic Graphical Model structure. When trained on temporalized data, PGMs are forced to learn temporal conditional probabilities. Missing data are handled inherently by the Probabilistic Graphical Model structure learning process. That is, no patients were excluded due to missing data and no missing data were imputed. We call the resulting temporalized structures multimorbidity networks. 
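Once a multimorbidity network has been learned, any variable can be queried conditioned on any subset of the others, with unspecified ("unknown") variables marginalized out. A minimal sketch of such a conditional risk query by exact enumeration, on a toy three-node network whose structure, variable names, and probabilities are entirely invented for illustration:

```python
import itertools

# Toy multimorbidity network (invented): hypertension -> heart_failure <- diabetes
VARS = ["hypertension", "diabetes", "heart_failure"]

def prior(h, d):
    # P(hypertension = h) * P(diabetes = d); marginally independent here
    return (0.3 if h else 0.7) * (0.2 if d else 0.8)

def p_hf(h, d):
    # P(heart_failure = 1 | hypertension = h, diabetes = d), illustrative values
    return {(0, 0): 0.02, (1, 0): 0.10, (0, 1): 0.08, (1, 1): 0.25}[(h, d)]

def joint(h, d, f):
    pf = p_hf(h, d)
    return prior(h, d) * (pf if f else 1.0 - pf)

def risk(evidence):
    """P(heart_failure = 1 | evidence). Evidence maps variable names to 1
    (present) or 0 (absent); omitted variables are 'unknown' and are summed
    out -- hence three states per node and ~3^n distinct queries."""
    num = den = 0.0
    for h, d, f in itertools.product((0, 1), repeat=3):
        assign = dict(zip(VARS, (h, d, f)))
        if any(assign[k] != v for k, v in evidence.items()):
            continue
        p = joint(h, d, f)
        den += p
        if f == 1:
            num += p
    return num / den

baseline = risk({})                                   # nothing observed
both = risk({"hypertension": 1, "diabetes": 1})       # both comorbidities present
```

Real multimorbidity networks answer the same kind of query over ~25 nodes, with brute-force enumeration replaced by belief propagation.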
Probabilistic Graphical Models represent conditional dependencies in the dataset as a directed acyclic graph (DAG); however, it is important not to confuse directionality with causality or temporal ordering. In keeping with best practice, the multimorbidity networks are visualized in their undirected, moralized form, in which every node is connected to its Markov blanket. A single constructed multimorbidity network provides an inference engine capable of answering O(3^n) personalized conditional risk queries, where n denotes the number of features describing a patient’s condition; the base of the exponent is 3 because, for binary health-record data, each node in a query can take one of three states: present, absent, or unknown. Confidence values Risk estimates derived from Probabilistic Graphical Models are maximum likelihood estimates given the optimal structure under the BIC and an assumed uniform prior probability of any distinct EHR. To obtain standard deviation values for these estimates, we created 100 nets in parallel [41] from bootstrap replicates of the same data used to create Figs 3, 4 and 5. We then queried the resulting replicate nets and calculated standard deviations of the risks of outcomes of interest. Discussion The ability to model dependencies among multiple risk factors is crucial for meaningful outcomes research. Unfortunately, traditional techniques, such as logistic regression, have limited ability to capture so-called ‘conditional dependencies’ between variables, which are the heart and soul of multimorbid analyses. Although mixture and generalized linear models with mixed effects can (in principle) overcome this weakness, these techniques are limited because a new model must be designed for every question. Neural nets provide one possible alternative. 
Although they can account for non-linear interactions in the data and are scalable [7], neural nets are often referred to as ‘black boxes’ (i.e., lacking explainability) [14,15,20,21,42–46] due to the difficulty of determining precisely how and why different input variables were used to produce the outputs. Because we sought not merely to predict outcomes, but also to understand the relationships between multiple clinical variables and outcomes, we selected an ‘explainable’ AI solution, rather than a black box approach. Probabilistic Graphical Model-based [23–25,46] multimorbidity networks offer best-practice solutions to this problem. Moreover, they effectively model data without recourse to a fixed decision protocol (e.g. decision trees), and are resilient to missing/unknown data. Crucially, the contributions of different combinations of variables to an outcome can be precisely and easily determined. Explainability comes at a cost; unlike neural nets, which are incredibly scalable, multimorbidity networks can model a maximum of only 30 or so variables at once [28,37,38]. It is therefore necessary to pre-identify high-impact variables when modeling an outcome, a need fulfilled by PBC [10]. We argue that the ability to rigorously investigate interrelations among 30 or so primary determinants represents a giant step toward understanding cardiovascular disease. Our results illustrate how multimorbidity networks provide explainable solutions for understanding the joint impacts of diagnoses, medications, and medical procedures on cardiovascular health outcomes. We emphasize that the necessarily brief results reported here hardly exhaust the contents of these machineries. Consider that a multimorbidity network with n nodes supports ~3^n possible queries. 
The net shown in Fig 4B, for example, supports ~3^14 different queries, a number that gives some indication both of the complexity of the data being extracted from the EHR corpus by our approach, and of the value of these multimorbidity networks to further outcomes research. Conclusion The analyses presented here provide a first step toward a global description of heart disease and associated comorbidities across the US intermountain west. However, the map we seek resides not so much in the results reported here as in the products of our analyses: the PGM multimorbidity networks. As we have explained, these networks support multitudes of queries and, when used in combination, support both wide-ranging and focused explorations of a disease landscape. Given the right datasets, we have shown that the approach can provide new insights, such as the mother-child cross-generational cardiovascular multimorbidities we described. However, our approach also has limitations. Our exact approach allows us to model at most ~30 health conditions at a time. In future work we would like to relax this limiting factor by allowing approximate solutions that enable us to scale up the complexity of the multimorbidity networks to thousands of health conditions. Another area for innovation regards the incorporation of continuous variables: current software packages do not allow us to incorporate such variables at scale; however, there is no theoretical limitation preventing their use in a PGM framework. A major strength of our approach is that these outcomes machineries can be redistributed as web-based tools. Indeed, the multimorbidity networks described here have been made available online [pbc.genetics.utah.edu/lemmon2021/bayes], with the hope that the wider scientific community will find them useful for their own outcomes research. 
The ability to transform enormous collections of EHR data into compact, portable machines for outcomes research, with no exchange of PHI, solves many of the legal, technological, and data-scientific challenges associated with large-scale EHR analyses. Acknowledgments We thank Barry Moore, Jacob Shreiber, Jerry Rudisin, Sepideh Ebadi, Edward B. Clark and members of the University of Utah EDW, UPDB and Utah Center for High Performance Computing for insightful discussions, facilitating access to medical records and familial relationships, and computational support. [1] Url: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000004