The Way Forward in Using (Q)SAR in Toxicology – A Personal Commentary


Mark Cronin, Liverpool John Moores University

Published: December 6, 2007

About the Author(s)
Mark Cronin is Professor of Predictive Toxicology in the School of Pharmacy and Chemistry at Liverpool John Moores University, England. He has over twenty years' expertise in the development of in silico models for the prediction of toxicity and fate. This is founded upon a PhD in using quantitative structure-activity relationships ((Q)SARs) for environmental toxicity and post-doctoral research in the development of (Q)SARs for human health effects. Particular current interests include the use of chemical reactivity-based (in chemico) alternatives for the prediction of effects such as acute environmental toxicity and skin sensitisation. He has published over 150 papers in the area of in silico toxicity assessment.

Dr. Mark Cronin
School of Pharmacy and Chemistry
Liverpool John Moores University
Byrom St.
Liverpool L3 3AF
UK
E-mail: m.t.cronin@ljmu.ac.uk

As I write this essay (September 2007), I realise it is exactly twenty years since I started my PhD in the prediction of environmental toxicities by the use of (quantitative) structure-activity relationships ((Q)SARs) – what have been more recently termed in silico techniques. Since that time my research interests have expanded into the prediction of human health effects, ADME, physico-chemical and pharmaceutical properties as well as pharmacological activities. I will, however, concentrate my opinion here on the possibilities for using (Q)SARs for prediction of environmental toxicity and human health effects. It is interesting to consider what we have achieved in the past two decades and, more importantly, where the next two may take us.

Plus ça change, plus c’est la même chose?

It would probably be too easy to say that nothing has changed in the past two decades. When I started my PhD in 1987, I walked into an area of science that had recently been stimulated by regulatory needs, that was taking advantage of computational advances in both hardware and software, but that was struggling to be accepted in the wider community. Sound familiar? The end of the 1980s saw the culmination of considerable effort in reaction to the TSCA legislation of the 1970s; SMILES, ClogP and molecular graphics were only just becoming available. Now we cannot conceive of a world without personal computing and a myriad of computational chemistry approaches, supported by the instant global communication of the internet. One stimulus to real progress over the past five years has been (and will continue to be) the European Union's White Paper on a strategy for a future chemicals policy. The White Paper, published in 2001, has of course led to the EU's REACH legislation, which came into force in June 2007. This in turn has rekindled interest in the use of alternative strategies to predict toxicity, including (Q)SARs.

For the future application of in silico techniques in toxicology, many factors will need to be addressed to increase use and acceptance. I have chosen to concentrate my opinions in this commentary on two of the biggest challenges: data availability and strategies for use. Going back to when I started in the field twenty years ago, I was struggling to find data to model; the situation now is no different and possibly even more acute. Further, whilst software and hardware have moved on, we are still struggling with how to accept (Q)SAR predictions, and with how useful they will really be.

A Castle Built on Sand?

Making any prediction or estimate requires some prior information – otherwise it is a guess relying purely on fortune. In the development of (Q)SARs for toxicity, there will never be enough data. Sophisticated and visually impressive software will remain a castle built on sand unless it is developed from sufficient, high quality data with a firm mechanistic basis. There is also an uncomfortable truth that the creation of any alternative method (in vitro or in silico) will require toxicological information (usually in vivo). NGOs campaigning for the cessation of animal testing favour expanded access to existing animal data over the conduct of new in vivo tests.

The late 1980s saw the completion of the fathead minnow (Pimephales promelas) acute toxicity database, with acute toxicity data for over 600 chemicals and chronic toxicity data for a smaller number. These data were measured by the US EPA in its Duluth laboratory. This remarkable achievement is one of, if not the only, high quality in vivo databases created specifically for the purpose of developing (Q)SARs. Since that time there has been no systematic building of in vivo databases, for any endpoint, for the development of (Q)SAR models – by which I mean databases built to cover physico-chemical and mechanistic space as efficiently as possible. The reality we must face is that there is unlikely to be the political (in any sense of the word) or financial motivation to produce further in vivo data. Therefore, we must rely on the laborious (and at times painful) process of compiling existing data, or on producing non-animal data by other means.
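To make concrete what modelling such a database involves: the simplest (Q)SARs for acute aquatic toxicity regress toxicity against a single physico-chemical descriptor, classically the octanol-water partition coefficient (logKow) for non-polar narcotics. The Python sketch below fits a one-descriptor model of this form; the data points and resulting coefficients are purely illustrative, not measured fathead minnow values.

```python
import numpy as np

# Hypothetical (logKow, log 1/LC50) pairs for a set of non-polar narcotics;
# illustrative numbers only, NOT measured fathead minnow values.
log_kow = np.array([1.5, 2.1, 2.8, 3.4, 4.0])
log_inv_lc50 = np.array([0.1, 0.7, 1.3, 1.8, 2.4])

# Fit the classic one-descriptor narcosis (Q)SAR: log(1/LC50) = a*logKow + b
a, b = np.polyfit(log_kow, log_inv_lc50, 1)
r2 = np.corrcoef(log_kow, log_inv_lc50)[0, 1] ** 2
print(f"log(1/LC50) = {a:.2f} logKow {b:+.2f}   (r2 = {r2:.2f})")

# The fitted model then predicts toxicity for an untested chemical
# from its logKow alone.
print("Predicted log(1/LC50) at logKow = 3.0:", round(a * 3.0 + b, 2))
```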

The goal of retrieving existing data is by no means a lost cause; recent efforts by the US FDA, as well as by a number of other consortia, have proved this. However, expectations must be realistic. With regard to non-animal test data, no-one would ever suggest that they will be as informative as in vivo test data for individual chemicals. However, the creation of in vitro databases can rapidly and inexpensively provide considerable knowledge across a broad range of chemistries and mechanisms. One excellent example of what can be achieved is the database for an in vitro endpoint, namely 40 hour cytotoxicity to the ciliate protozoan Tetrahymena pyriformis, created by Prof. Terry Schultz at the College of Veterinary Medicine, University of Tennessee (Knoxville, TN). The database comprises high quality toxicity data for more than two thousand chemicals, all collated in the same laboratory and measured by the same protocol. The chemicals have been selected to provide maximal information, particularly on mechanisms of action. This database is a perfect example of what can be achieved and of what is required; it will no doubt provide valuable information for many years to come. Similar, systematically developed (in vitro) databases would provide a great service to toxicology in the future. This does raise an intriguing question – can we use “non-target” data? Our experience has shown that such data can be very useful, for instance in the identification of “reactive chemicals”, in establishing domains for (Q)SARs and in allowing predictions to be extrapolated to higher species.

A further interesting question is, given the paucity of toxicity data, whether small amounts of data are useful. Even small amounts of relevant data may be of use. Exploiting them requires a leap of faith away from traditional (Q)SAR development into the area of structure-activity relationships, the formation of categories and the application of read-across and analogues; a minimal sketch of read-across follows below. Tools such as the soon-to-be-released OECD (Q)SAR Application Toolbox will facilitate the formation of categories, as well as the use of read-across. These are little-used but powerful techniques for providing supporting information for toxicological assessment. The OECD Toolbox will be a widely used piece of software, and provides a novel and highly functional approach to a decision support system. The crux of these approaches, however, will be an appreciation of the role of mechanisms of action in making predictions, and further effort may be required to develop this mechanistic information.
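As an illustration of the read-across idea – predicting an untested chemical's toxicity from its most similar tested analogue within a category – consider the minimal Python sketch below. It uses the open-source RDKit toolkit for structural similarity; the category members, toxicity values and similarity threshold are all hypothetical, and real read-across would rest on mechanistic, not merely structural, similarity.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a: str, smiles_b: str) -> float:
    """Structural similarity between two chemicals (0 = dissimilar, 1 = identical)."""
    fps = [
        AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
        for s in (smiles_a, smiles_b)
    ]
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

# Hypothetical category of tested analogues: SMILES -> measured toxicity (pIGC50).
category = {"CCO": 0.5, "CCCO": 0.9, "CCCCO": 1.3}
query = "CCCCCO"  # untested chemical of interest

# Read-across: borrow the toxicity of the most similar tested analogue,
# provided it clears a (subjectively chosen) similarity threshold.
best = max(category, key=lambda s: tanimoto(query, s))
if tanimoto(query, best) >= 0.4:
    print(f"Read-across from {best}: predicted pIGC50 = {category[best]}")
else:
    print("No analogue is similar enough; read-across is not justified here.")
```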

Lies, Damn Lies and Statistics – Accepting and Utilising a (Q)SAR Prediction

When I started my PhD, one of the easiest ways of damning the (Q)SAR approach was to invite a vendor or researcher to a corporate laboratory, ask them to make inappropriate predictions for new chemistries, ignore the warnings that the predictions were meaningless, dismiss a correct prediction as a fluke and celebrate an incorrect prediction as evidence that the techniques do not work. Armed with this evidence, the sceptics could tell their managers and peers that (Q)SAR doesn't work. I hope we have become more sophisticated in using (Q)SARs now!

Many (Q)SAR developers and users have long known the tricks of the trade for applying their tools. However, whilst much has been written since the seminal publications by Hansch, Fujita and co-workers in the 1960s, there are no formal rules, regulations or protocols for (Q)SAR use or development. One positive aspect of the recently proposed uses of (Q)SAR, e.g. under REACH, is that the problems of applying (Q)SARs have been recognised. Key amongst these is the issue of validation of (Q)SARs – principles developed at a workshop in Setubal, Portugal (in 2002) were later formalised by the OECD. The OECD have also been instrumental in providing guidance for the use of (Q)SARs.

The OECD Principles for Validation of (Q)SARs, in addition to the associated Guidance, provide valuable information for users of these models. However, debate, confusion and divided opinion will continue to exist on the subject of validating a (Q)SAR. The Principles were created to address a particular need and raise a number of issues relating to the use of (Q)SAR. They were originally conceived to provide a framework of validation comparable to that, for instance, for an in vitro test. There is, however, little likelihood of large scale “validation” of (Q)SARs, mainly for practical reasons, e.g. the resources required for such a process. In addition, simply because a (Q)SAR is validated does not mean it will provide an adequate prediction. The most important consideration will therefore be to have a valid prediction, rather than to use a validated (Q)SAR. The trick becomes determining whether a prediction is valid, and this will require the expertise of the user.
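One concrete way in which a user can begin to judge whether an individual prediction is valid is to check that the query chemical falls within the applicability domain of the model. The sketch below implements the crudest such check – a descriptor-range test – with entirely hypothetical training data; real domain definitions are considerably more sophisticated, but the principle is the same.

```python
import numpy as np

# Descriptor matrix for a hypothetical training set: each row is a training
# chemical, columns are e.g. logKow and molecular weight.
train = np.array([[1.5,  88.0],
                  [2.1, 102.0],
                  [2.8, 116.0],
                  [3.4, 130.0]])

def in_domain(query: np.ndarray) -> bool:
    """Crude range-based applicability-domain check: treat a prediction as
    potentially valid only if every descriptor of the query chemical lies
    within the range spanned by the training chemicals."""
    return bool(np.all((query >= train.min(axis=0)) & (query <= train.max(axis=0))))

print(in_domain(np.array([2.5, 110.0])))  # True  -> interpolation, may be trusted
print(in_domain(np.array([6.0, 300.0])))  # False -> extrapolation, treat with suspicion
```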

In terms of using all alternative techniques (including in silico, in chemico and in vitro information), strategies will be required if animal testing is to be replaced successfully. A single (Q)SAR prediction, in isolation, is unlikely to be of much use. However, imagine a scenario in which valid (Q)SAR predictions made by a variety of different techniques provide a consensus. The consensus is supported by read-across from an appropriate category, for this chemical and for similar chemicals, in relevant endpoints. In combination with in vitro data, this (highly speculative!) scenario could provide a framework of information from which a reasoned assessment of toxicity could be made without the need for animal testing.
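The logic of such a consensus can be sketched very simply: accept a combined prediction only when independent methods broadly agree, and defer to expert judgement otherwise. In the Python fragment below, the three predictions and the agreement tolerance are purely hypothetical illustrations of the idea.

```python
# Hypothetical predictions of pLC50 for one chemical from three independent
# approaches: a regression (Q)SAR, a structural-alert model and read-across.
predictions = {"regression_qsar": 1.8, "structural_alerts": 2.0, "read_across": 1.7}

values = list(predictions.values())
spread = max(values) - min(values)

# Accept a consensus only when the independent methods broadly agree;
# the 0.5 log-unit tolerance is an arbitrary illustration, not guidance.
if spread <= 0.5:
    consensus = sum(values) / len(values)
    print(f"Consensus pLC50 = {consensus:.2f} (methods agree within {spread:.1f} log units)")
else:
    print("Methods disagree; defer to expert judgement or further testing.")
```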

Conclusions – Put on Your Rose-Tinted Glasses

Naturally, much has changed over the past 20 years: undreamt-of globalisation is a reality, and with it come possibilities for replacing animals in toxicity testing across the world, rather than restricting ourselves to local concerns. In the West we can now draw on the data, expertise and knowledge of the former Eastern Europe, Russia, India and, increasingly, China. The key to success will be the strategies developed to use the non-test information that may be available. (Q)SAR models will need to be developed on mechanistic grounds and utilised appropriately. Software such as the OECD (Q)SAR Application Toolbox will be invaluable in providing a platform to collate data and allow categories to be formed. Clearly, within all these frameworks, imagination and co-operation between toxicologists will be required, although the possibilities for optimising non-animal information in risk assessment will make the short-term effort more than worthwhile.
Disclaimer: This article is purely the personal views and opinions of the author. It does not represent any endorsement by the author’s institution, funding bodies, collaborators or any other third parties.
©2007 Mark Cronin
