A Pragmatic Comparison of Human In Vitro with Animal In Vivo Approaches to Predicting Toxicity in Human Medicines


Robert Coleman, Katya Tsaioun & Kathy Archibald, Safer Medicines Trust
Published: March 27, 2014
About the Author(s)

Dr. Coleman worked for 30 years for the Glaxo group before co-founding Pharmagene (now Asterand), the first drug discovery company to work exclusively on human biology. He was awarded an honorary DSc in recognition of his contributions to the use of human tissues in drug research. Dr. Coleman has served as an Advisor to Safer Medicines Trust since 2007.

Email: Bob@SaferMedicines.org


Katya Tsaioun, PhD, is our US Science Director. Dr. Tsaioun worked for 15 years in R&D in the pharmaceutical and food industries before co-founding Apredica, a company focused on the development of in vitro ADME and toxicology tools for de-risking drug discovery programmes. Dr. Tsaioun has served as an Advisor to Safer Medicines Trust since 2008.

Email: Katya@SaferMedicines.org


Kathy Archibald, BSc is our Managing Director. Ms. Archibald worked in drug development for pharmaceutical and biotechnology companies (Searle, Medisense) after graduating in genetics from Nottingham University, UK. In 2005, she co-founded Safer Medicines Trust, which has hosted international conferences at the Royal Society and the House of Lords, aimed at improving the safety of medicines through an increased focus on human biology throughout the drug development process.

Email: Kathy@SaferMedicines.org

Despite the best efforts of the pharmaceutical industry to weed out potentially toxic drug candidates before they reach patients, the frequency of liver and other toxicities associated with new medicines remains at unacceptable levels (Verma & Kaplowitz, 2009; Chen et al., 2011). The “best efforts” concerned still rely predominantly on studies in experimental animals, mainly rodents, dogs, and nonhuman primates. Unfortunately, drugs that cause organ toxicity in these species do not necessarily cause such effects in humans, and vice versa. These species differences are particularly dramatic in the case of the liver, owing to large differences in metabolism between species. For this reason, it is becoming apparent that something needs to be done about the way we identify potential human organ toxicity before exposing human subjects to experimental medicines, and there is mounting scientific evidence for the utility of human-based, rather than non-human-based, test methods (Krewski et al., 2009; FDA, 2012; Firestone et al., 2010; Collins, 2011).

However, despite growing enthusiasm for “humanizing” drug testing, which Safer Medicines Trust believes will reduce the human and financial burden of adverse drug reactions, this approach has proven extraordinarily difficult to establish as a viable alternative. The reasons for this difficulty are various: the obvious challenge of modelling the complexity of the whole organism by extrapolating from isolated parts, the fact that regulatory authorities demand results from animal studies, and the scarcity of formally validated human-based alternatives. While there is a degree of validity in each point, they do not represent the whole case. For example, the objections to human in vitro and in silico approaches ignore the fact that these technologies have become far more sophisticated and physiologically relevant in the last decade, yet the regulatory authorities still rely on extrapolation of human safety from animal in vivo studies. Why? Mostly for historical reasons: back in the 1960s, animal tests were all that were available. As to the claim that few human-based tests have achieved formal validation, here we have only half the story. Not only is the validation process, orchestrated by organisations such as ECVAM and ICCVAM, time-consuming (Mak & Perry, 2007; Leist et al., 2012), but the animal in vivo tests on which we currently rely, and which (sometimes inappropriately) serve as the “gold standard comparator” for the in vitro approaches, have never themselves been subjected to such scrutiny (ECOPA, 2008). Thus, when a new test method is evaluated, there is no way of knowing what it has to improve on. And finally, we strongly believe that the best model for humans has to be human.

We suggest that, rather than subjecting each new test method to several years of in-depth analysis to determine whether it satisfies every theoretical criterion and perfectly matches human outcomes, as is currently the case, a more fruitful approach would be to investigate whether a range of new approaches could actually improve on the patently flawed tests that are our current standards. Safer Medicines Trust proposes an objective, independent study comparing human in vitro and in silico approaches with their animal in vivo counterparts (Coleman, 2011; Clotworthy & Archibald, 2013), an approach we term “pragmatic validation.”

The Basis of a Pragmatic Validation Study

Performing such a study prospectively would take many years and be resource-intensive, but there is a simpler approach, which recognises that we already possess a vast wealth of (theoretically) available clinical and preclinical information on marketed drugs, from which we can judge just how reliable the current safety testing paradigm has proven. Safer Medicines Trust’s proposal is to identify a number of marketed drugs that were judged safe and effective for use as human medicines on the basis of the regulatory panel of animal tests, but subsequently went on to cause various toxicities in patients, and to pair each with a structurally similar marketed drug that lacks such toxicity; these act as test drug and negative control, respectively. The drugs would then be submitted for blind testing using a variety of human-tissue-based technologies, to determine whether any of these tests, alone or in combination, could provide an indication of the toxicity known to be exhibited by the test compound in human subjects. A wide range of organ toxicities could be studied using this approach, with a variety of high-throughput and high-content technologies selected from among those perceived to be the best available, offering complementary capabilities that should maximise the chances of identifying the majority of toxicities. These would, of course, include 3D and co-culture systems, as well as broad, hypothesis-free screens, in order to test the limits and synergies of different types of assays.

Before embarking on such a study, which, although obviating the costs of both clinical and animal studies, would still incur the expense of the in vitro work, we are initiating an entirely retrospective proof-of-principle study whose only costs will be those of data analysis.

This initial study focuses exclusively on liver toxicity and is made possible by two existing initiatives: the FDA’s Liver Toxicity Knowledge Base (LTKB) (Chen et al., 2013) and the EPA’s ToxCast program (Dix et al., 2007). From the LTKB, we identified 104 drugs that have been withdrawn or relabelled because of their hepatotoxicity. Using the LTKB structural similarity scores, we then determined that at least 23 of the hepatotoxic drugs have a structurally similar partner with significantly less hepatotoxic potential. Of these 23 pairs, 13 have both partners included in the ToxCast program, which will provide the data for a comparative analysis of each pair. Three data sets will be compared: the output of the human in vitro assays employed in the ToxCast program, the existing results of preclinical animal tests, and the outcomes observed in human clinical experience. Additionally, in silico data will be integrated into the analysis, as explained below.
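The pair-selection logic just described can be sketched in a few lines of code. The sketch below is purely illustrative: the drug names, similarity scores and the 0.7 similarity cut-off are all invented for the example, and the real study uses the LTKB’s own structural similarity scores and hepatotoxicity annotations.

```python
# Illustrative sketch of selecting test/control drug pairs, as described above.
# All names, scores and the threshold are hypothetical placeholders.

SIMILARITY_THRESHOLD = 0.7  # assumed cut-off for "structurally similar"

# (hepatotoxic drug, candidate partner, similarity score, partner hepatotoxic?)
candidates = [
    ("drug_A", "drug_A2", 0.85, False),
    ("drug_B", "drug_B2", 0.55, False),  # too dissimilar: rejected
    ("drug_C", "drug_C2", 0.90, True),   # partner also hepatotoxic: rejected
    ("drug_D", "drug_D2", 0.78, False),
]

toxcast_library = {"drug_A", "drug_A2", "drug_D"}  # drug_D2 not screened

def select_pairs(candidates, toxcast):
    """Keep pairs that are structurally similar, differ in hepatotoxicity,
    and have both members present in the ToxCast screening library."""
    similar = [
        (tox, safe)
        for tox, safe, score, partner_toxic in candidates
        if score >= SIMILARITY_THRESHOLD and not partner_toxic
    ]
    return [(t, s) for t, s in similar if t in toxcast and s in toxcast]

print(select_pairs(candidates, toxcast_library))  # → [('drug_A', 'drug_A2')]
```

In this toy run, only one of the four candidate pairs survives both filters, mirroring how the 104 LTKB drugs narrowed to 23 pairs and then to the 13 with both partners in ToxCast.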

The human in vitro data are currently being generated as part of the ToxCast program by technology providers including Attagene, BioSeek, Cellumen and others. The technologies are all human tissue-based and rely on multiple readout parameters, including transcription factor assays, cell-based protein-level assays, cytokine levels, cell imaging and general cytotoxicity markers, that together measure a variety of scientifically validated mammalian biological stress responses.

Since the identity of the compounds must be blinded to those conducting the analysis, they cannot be named in open publications until the results of the analysis are unblinded. Additionally, the choice of compounds is not yet finalised: we hope to expand the range of negative controls to include compounds with functional rather than structural similarity, which would yield further compound pairs and also some triplets, each comprising a toxic test compound together with both a structurally similar and a functionally similar negative control. Identifying functionally similar compounds will require mechanistic information, which does not currently exist in the LTKB, so we hope to include further bioinformatics expertise in this growing partnership.

We are delighted that two respected organisations with complementary analytical capabilities, FRAME and OpenTox, will provide their expertise for this exciting study. The OpenTox project’s experience with the SEURAT-1 (Safety Evaluation Ultimately Replacing Animal Testing-1) research cluster will be drawn upon in the analysis. Importantly, this new strategy will rely on the biological mode of action of the molecules, rather than solely on statistical analyses: fundamentally, it uses a mode-of-action framework describing repeat-dose toxicity to derive predictions of human in vivo toxic responses. Resources built by the OpenTox project, such as ToxBank, will be used in the data analysis. ToxBank is a cross-cluster infrastructure project that provides a web-accessible shared repository of research data and protocols (details available at wiki.toxbank.net). These complex and heterogeneous data will be consolidated and harmonised through the ToxBank data warehouse in order to perform an integrated data analysis. A meta-analysis of multiple types of time-dependent, dose-response functional data will be combined with the preclinical and clinical information publicly available for each reference compound. Moreover, adverse events will be sub-classified by the mechanism of drug-induced liver injury (DILI), such as fibrosis, steatosis, cholestasis and phospholipidosis. Pathway enrichment analysis of the compounds will aim to identify key DILI pathology pathways in relation to dose-response and time of exposure.
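The pathway enrichment analysis mentioned above typically reduces to an over-representation test. As a minimal, hedged illustration, the following sketch computes a hypergeometric p-value for the overlap between a compound’s perturbed genes and a pathway gene set; the universe size, pathway size and hit counts are invented numbers, and a real analysis would use curated pathway databases, assay-derived gene lists and multiple-testing correction.

```python
# Minimal sketch of over-representation (pathway enrichment) analysis via a
# hypergeometric test. All counts below are hypothetical examples.
from math import comb

def enrichment_p(hits_in_pathway, hits_total, pathway_size, universe_size):
    """P(overlap >= observed) under the hypergeometric null: drawing
    hits_total genes at random from a universe containing pathway_size
    pathway members."""
    return sum(
        comb(pathway_size, k)
        * comb(universe_size - pathway_size, hits_total - k)
        for k in range(hits_in_pathway, min(hits_total, pathway_size) + 1)
    ) / comb(universe_size, hits_total)

# Toy example: a 20,000-gene universe, a 100-gene "cholestasis" pathway,
# 50 genes perturbed by a compound, 8 of them falling in the pathway.
p = enrichment_p(hits_in_pathway=8, hits_total=50,
                 pathway_size=100, universe_size=20_000)
print(f"p = {p:.2e}")  # a very small p suggests the pathway is enriched
```

By chance one would expect only 50 × 100 / 20,000 = 0.25 pathway hits, so observing 8 yields a tiny p-value; ranking pathways by such p-values (suitably corrected) is one way key DILI pathology pathways could be flagged.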

Ultimately, the output of the analyses will be used to design a prospective blinded study to further test those in vitro technologies that demonstrated the best predictive abilities, as well as additional technologies that were not included within the ToxCast program. These studies are complementary to ToxCast and SEURAT and other large-scale initiatives and, by virtue of their much smaller scale and duration, will produce results long before the conclusion of the larger initiatives. Furthermore, the results could feed back into the larger studies, where they could be used in data analysis.

This novel approach has great potential for application in evaluation of many new technologies in real time, and we are very fortunate to be receiving advice and guidance from many of the leading lights in the field, including Dr. Weida Tong (NCTR/FDA), Dr. Michael Merz (Novartis and IMI SAFE-T co-ordinator), Dr. Gladys Ouédraogo (L’Oréal), Dr. Ann Daly (Newcastle University), Dr. Alexander Tropsha (University of North Carolina) and Dr. Barry Hardy (OpenTox and SEURAT). We are still accepting expert input into the design of the study, and would welcome comments or suggestions on any aspect.

We believe that studies such as this, designed for other toxicities in addition to the liver, constitute a particularly powerful approach to assessing the ability of human in vitro test methods to improve on current methods of determining potential toxicities of new medicines, thus reducing the risk to human subjects. Furthermore, by linking the data to those generated in clinical safety biomarker studies and adverse outcome pathways, there is a real opportunity for a powerful 21st Century approach to developing safer medicines.

©2014 Robert Coleman, Katya Tsaioun, & Kathy Archibald.

References
Chen, M., Vijay, V., Shi, Q., Liu, Z., Fang, H., & Tong, W. (2011). FDA-approved drug labeling for the study of drug-induced liver injury. Drug Discov Today, 16, 697-703.

Chen, M., Zhang, J., Wang, Y., Liu, Z., Kelly, R., Zhou, G., Fang, H., Borlak, J., & Tong, W. (2013). The liver toxicity knowledge base: a systems approach to a complex end point. Clin Pharmacol Ther, 93, 409-412.

Clotworthy, M., & Archibald, K. (2013). Advances in the development and use of human tissue-based techniques for drug toxicity testing.  Expert Opin Drug Metab Toxicol, 9, 1155-1169.

Coleman, R.A. (2011). Efficacy and safety of new medicines: a human focus. Cell Tissue Bank, 12, 3-5.

Collins, F.S. (2011). Reengineering translational science: the time is right. Sci Transl Med, 3(90), 90cm17.

Dix, D.J., Houck, K.A., Martin, M.T., Richard, A.M., Setzer, R.W., & Kavlock, R.J. (2007). The ToxCast program for prioritizing toxicity testing of environmental chemicals. Toxicol Sci, 95, 5-12.

European Consensus Platform for Alternatives (ecopa). (2008). 8th Annual Workshop, “Cosmetics Directive, REACH legislation and novel Directive 86/609: realistic 2009/2013 – factual status?,” Brussels, November 29-30, 2008.  Retrieved from http://www.ecopa.eu/wp-content/uploads/9th_ecopa_workshop_minutes.pdf

Firestone, M., Kavlock, R., Zenick, H., & Kramer, M. (2010). The U.S. Environmental Protection Agency strategic plan for evaluating the toxicity of chemicals.  J Toxicol Environ Health B Crit Rev, 13, 139-162.

Food and Drug Administration (FDA). (2012, May). S6 Addendum to Preclinical Safety Evaluation of Biotechnology-Derived Pharmaceuticals. Retrieved from http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM194490.pdf

Krewski, D., Andersen, M.E., Mantus, E., & Zeise, L. (2009). Toxicity testing in the 21st century: implications for human health risk assessment. Risk Anal, 29, 474-479.

Leist, M., Hasiwa, N., Daneshian, M., & Hartung, T. (2012). Validation and quality control of replacement alternatives—current status and future challenges. Toxicol Res, 1, 8–22.

Mak, N., & Perry, N. (2007). ICCVAM: A missed opportunity or potential for progress?  AV Magazine (Fall).  Retrieved from http://www.aavs.org/site/c.bkLTKfOSLhK6E/b.6457311/k.B675/ICCVAM.htm#.UvjebPm0iSo

Verma, S., & Kaplowitz, N. (2009). Diagnosis, management and prevention of drug-induced liver injury. Gut, 58, 1555-1564.
