NAS Workshop on Computational Toxicology

Published: October 23, 2009
The NAS Committee on Emerging Science for Environmental Health Decisions held a workshop in late September on computational toxicology: the application of high-performance computing to manage large biological data sets and to detect patterns and interactions within them. Current toxicology research increasingly generates large data sets from high-throughput assay systems and from high-information-content procedures such as gene expression analysis. High-throughput assays originally developed for drug discovery are now being applied to toxicity screening.

Perhaps the largest publicly available data sets of this type are emerging from EPA’s ToxCast program and the Tox21 consortium sponsored by EPA, the National Toxicology Program, and the NIH Chemical Genomics Center (NCGC). These programs evaluate the effects of hundreds of chemicals, across a broad range of concentrations, in hundreds or thousands of biochemical or simple cellular assays that can be run in multi-well format. The results are being compared with in vivo data for the same chemicals, in the hope that the assay suites capture enough biological complexity to reveal patterns diagnostic of particular toxicological outcomes. Other streams of high-throughput toxicity data are also amenable to computational methods, notably assays based on the rapidly developing embryos of the nematode C. elegans and of zebrafish.
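The pattern-matching idea behind these in vitro/in vivo comparisons can be sketched as a toy nearest-neighbor classifier. The chemical names, assay scores, and outcome labels below are invented for illustration and are not ToxCast data:

```python
# Hypothetical sketch: predict an in vivo outcome for an untested chemical
# from its in vitro assay profile, by matching against reference chemicals
# with known outcomes. All names and numbers are invented.

def euclidean(a, b):
    """Distance between two assay-activity profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Reference chemicals: assay-activity profile -> known in vivo outcome.
reference = {
    "chem_A": ([0.9, 0.1, 0.8], "hepatotoxic"),
    "chem_B": ([0.2, 0.7, 0.1], "non-toxic"),
    "chem_C": ([0.8, 0.2, 0.9], "hepatotoxic"),
}

def predict(profile):
    """Assign the outcome of the closest reference profile."""
    nearest = min(reference.values(), key=lambda rv: euclidean(rv[0], profile))
    return nearest[1]

print(predict([0.85, 0.15, 0.75]))  # closest to chem_A -> "hepatotoxic"
```

Real analyses use far richer statistics than a single nearest neighbor, but the goal is the same: let the assay suite's pattern, not any single assay, carry the diagnostic signal.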

Toxicogenomics, proteomics, and metabolomics also generate very large, information-rich data sets that are fodder for computational methods, although each provides different information: a comprehensive assessment of gene expression, protein expression, or metabolite generation, respectively, in a particular tissue, organ, or organism in response to a perturbation. This information has also been used to identify the molecular-level response pathways responsible for toxic outcomes.
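One common way such omics data are mined for response pathways is over-representation analysis: testing whether a pathway's genes appear among the differentially expressed genes more often than chance would predict. A minimal sketch using the hypergeometric tail probability, with all gene counts invented for illustration:

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """P(overlap >= k) when n genes are drawn without replacement from
    N measured genes, of which K belong to the pathway (hypergeometric tail)."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Invented example: 1000 measured genes, a 50-gene pathway,
# 20 differentially expressed genes, 5 of which fall in the pathway.
p = enrichment_pvalue(N=1000, K=50, n=20, k=5)
print(f"enrichment p-value: {p:.4g}")
```

A small p-value flags the pathway as responding to the perturbation far beyond chance expectation (here, roughly one overlapping gene would be expected by chance).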

Computational approaches described at the workshop that take advantage of these (and other) data streams include improved QSAR methods and systems-level models of biological responses and of the pathway interactions that produce them.
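At its simplest, a QSAR model is a regression of measured activity on computed molecular descriptors. A minimal sketch with a single descriptor (logP) and invented activity values:

```python
# Toy QSAR: ordinary least-squares fit of activity against one
# descriptor (logP). All values are invented for illustration.
logp = [1.0, 2.0, 3.0, 4.0]
activity = [2.1, 3.9, 6.1, 8.0]

n = len(logp)
mean_x = sum(logp) / n
mean_y = sum(activity) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(logp, activity)) \
    / sum((x - mean_x) ** 2 for x in logp)
intercept = mean_y - slope * mean_x

def predict_activity(x):
    """Predict activity for a new chemical from its descriptor value."""
    return slope * x + intercept

print(f"activity ~ {slope:.2f} * logP + {intercept:.2f}")
```

Production QSAR models use many descriptors and regularized or nonlinear fits, but the structure-to-activity mapping is the same idea.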

It was clear that these are still early days for the application of computational toxicology to risk assessment and chemical regulation, but there are already examples of it in practice. Relational databases searchable by chemical substructure are being used to predict the toxicity of new chemicals from their similarity to chemicals whose toxicity has already been evaluated, and EPA has recently evaluated the use of toxicogenomics data to support its risk assessment of phthalate esters. Because the demand for computational approaches exceeds the pace at which practical applications are coming on-line, many workshop participants from regulatory agencies expressed frustration that their expectations were not being met. Managing those expectations will be important, as will ensuring enough short-term applications to sustain support for the long-term research that computational toxicology needs to reach its full potential.
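The similarity-based prediction described above can be illustrated with substructure fingerprints compared by Tanimoto similarity. The fingerprint keys, reference chemicals, and toxicity labels below are invented for illustration:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints represented as
    sets of substructure keys: |intersection| / |union|."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Invented reference fingerprints with known toxicity labels.
reference = {
    "chem_X": ({"aromatic_ring", "nitro_group", "halide"}, "toxic"),
    "chem_Y": ({"hydroxyl", "ester", "alkyl_chain"}, "non-toxic"),
}

def predict_toxicity(fp, threshold=0.5):
    """Label an untested chemical by its most similar reference chemical,
    or flag it for testing if nothing is similar enough."""
    name, (ref_fp, label) = max(
        reference.items(), key=lambda kv: tanimoto(kv[1][0], fp)
    )
    if tanimoto(ref_fp, fp) >= threshold:
        return label
    return "no similar reference - needs testing"

print(predict_toxicity({"aromatic_ring", "nitro_group", "ether"}))
```

Real substructure-searchable databases use standardized fingerprint schemes and curated toxicity records, but the read-across logic (similar structure, similar hazard, with a similarity cutoff) is the one sketched here.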