Looking Beyond Replacements: One company’s approach for continuing the progress

Overarching Challenges

Mark Lafranconi, The Procter & Gamble Company
Published: October 2, 2012
About the Author(s)
Mark Lafranconi is a Section Head in the Central Product Safety organization at Procter & Gamble. He currently has responsibilities for programs fostering acceptance of alternatives to animal testing, developing policy and practices for nanotechnology, and promoting the use of sound science for regulatory and public policy decision making.

He joined Procter & Gamble in 1986 where he has developed programs to evaluate the safety of a wide range of ingredients and products used in Procter & Gamble’s health care, beauty, foods, laundry, and diaper businesses.

There has been tremendous progress over the past twenty years in developing methods to replace animal testing for evaluating safety. Despite this, the pace of development of replacement methods has slowed because the remaining endpoints are difficult to characterize: they are biologically complex, their mechanisms are poorly understood, or both, and the path to developing replacements is therefore uncertain. Reproductive and developmental tests, and repeat-dose chronic tests, are examples of endpoints that will require significantly more effort before replacements are available.

While development of replacement methods continues to be an essential piece of the solution, there is a parallel strategy that we have found to be productive in the consumer products categories. In this article I describe approaches we have used to continue the progress through more effective utilization of existing information. This is not a new idea, but the tools available to find and make better use of existing information have been growing by leaps and bounds as advances in computer science, informatics, and our understanding of fundamental biology come together to provide capability that was unavailable even five years ago. What is even more encouraging is that this capability is growing at an accelerating pace.

Effective utilization of existing information and tools is appealing because it does not rely on development of the next generation of alternative methods, nor does it require a company to make significant investments in research programs for developing such methods.

Instead, this is an approach that can be employed by most companies with modest investments in three areas:

  1. Capability for organizing and retrieving relevant information from past studies (Knowledge Index)
  2. Expanding the applicability of existing information to appropriately bridge gaps in information (Risk Assessment Methods)
  3. Developing capability to make informed decisions about the most appropriate study when testing is necessary (Integrated Testing Strategies)

Knowledge Index

As a place to start, there is the matter of organizing existing information from “in-house” sources. This could include past safety assessments, studies and literature used to support those risk assessments, and a searchable inventory of ingredients and uses that have been evaluated. This may seem to be too obvious to even mention but we found that when we looked at our existing collection of information there were many gaps in our ability to use this information effectively. For instance, indexing of ingredients evaluated was not robust. In our database there are records dating back more than 50 years. Over that time many things have changed including the way we identified ingredients. Sometimes materials were identified by internal company codes or lab-book references. Sometimes they were identified by trade names, common names, CAS numbers, and sometimes, but rarely, by widely accepted naming conventions such as IUPAC. This inconsistency in how we identified ingredients made for spotty search results. Often the identifier entered for a search would bring back just a few records while there were many other records in the database on the same material but identified with an alternative name.

This limitation can be resolved by developing a synonym database to index the records and then utilizing a thesaurus-aided search engine. Thesaurus-enabled search engines are now common and can be readily incorporated into database search capabilities. In our case, the results were dramatic: creation of a synonym library and a thesaurus-aided search engine typically increased retrieval of relevant records 3-4 fold. This took much of the "art" out of searches and resulted in more uniform retrieval of information by both experienced investigators and those new to the task.
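
The synonym-indexing idea can be sketched in a few lines of Python. The ingredient names, internal codes, and records below are entirely hypothetical:

```python
# Minimal sketch of a thesaurus-aided record search, assuming an in-house
# record store keyed by whatever identifier was entered historically.
# All names, codes, and records here are hypothetical.

from collections import defaultdict

# Synonym library: every known identifier maps to one canonical key.
SYNONYMS = {
    "sodium lauryl sulfate": "CAS 151-21-3",
    "SLS": "CAS 151-21-3",
    "PG-0042": "CAS 151-21-3",            # hypothetical internal company code
    "sodium dodecyl sulfate": "CAS 151-21-3",
    "CAS 151-21-3": "CAS 151-21-3",
}

# Records indexed by the identifier originally used when they were filed.
RECORDS = defaultdict(list)
RECORDS["SLS"].append("1972 irritation study")
RECORDS["PG-0042"].append("1988 repeat-dose summary")
RECORDS["sodium dodecyl sulfate"].append("2003 literature review")

def search(query: str) -> list[str]:
    """Expand the query through the synonym library, then pull every
    record filed under any synonym of the same canonical key."""
    canonical = SYNONYMS.get(query)
    if canonical is None:
        return RECORDS.get(query, [])      # fall back to a literal match
    hits = []
    for name, key in SYNONYMS.items():
        if key == canonical:
            hits.extend(RECORDS.get(name, []))
    return hits

# A literal lookup on one name finds 1 record; the thesaurus-aided
# search on the same name finds all 3.
print(len(RECORDS["SLS"]))    # 1
print(len(search("SLS")))     # 3
```

Whatever synonym is entered, the same full record set comes back, which is the uniformity gain described above.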

Another boost in efficiency comes from augmenting text-based searching with molecular structure searching. It is now commonplace to describe chemical structures uniquely, utilizing a variety of available tools to generate a descriptor such as the Simplified Molecular Input Line Entry System (SMILES). Using a standardized format for describing the structure, assessors can query the databases by means of structural elements rather than relying on often imprecise text identifiers. Here again, the capability is increasing rapidly, with many databases already offering it and more coming online at a rapid pace. By utilizing searches based on chemical structures, the assessor can develop search strategies that are more inclusive of relevant information than simple text-based searches, even those aided by thesaurus capability.

That is not to say that structural searches are without limitations. Certain classes of materials with poorly defined structures, such as polymers, can be difficult to retrieve efficiently, but even these can often be identified with a search on their repeating units. Fatty acids are another class of materials that can present challenges when searching based on structure: naturally occurring fatty acids often have a distribution of chain lengths and locations of unsaturation that are difficult to describe.

The ability to index information based on structure also adds another level of capability not possible when using descriptive terms alone. Searching on the basis of structural similarities enables the identification of molecules with common features and opens up the ability to identify molecules with similar sub-structural elements that are biologically relevant. With this capability, the number of useful records brought back by a search increases even further. While the results will vary considerably depending on the complexity of the candidate molecule, we find that sub-structure searching often increases the recovery of relevant records by another 3-10 fold and sometimes more.
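
A minimal sketch of fragment-based substructure and similarity searching, assuming each record carries a set of substructure descriptors. In practice these would be generated by a cheminformatics toolkit from the structure itself; the fragment labels and the molecules here are illustrative:

```python
# Sketch of fingerprint-style structure searching. Each database entry is
# a set of substructure fragments (illustrative labels, not real descriptors).

DATABASE = {
    "ethyl paraben":  {"benzene", "ester", "hydroxyl", "ethyl"},
    "methyl paraben": {"benzene", "ester", "hydroxyl", "methyl"},
    "benzoic acid":   {"benzene", "carboxylic acid", "hydroxyl"},
    "hexane":         {"alkyl chain"},
}

def substructure_hits(query_frags: set[str]) -> list[str]:
    """Return records containing every fragment in the query."""
    return [name for name, frags in DATABASE.items()
            if query_frags <= frags]

def similarity(a: set[str], b: set[str]) -> float:
    """Tanimoto (Jaccard) similarity between two fragment sets."""
    return len(a & b) / len(a | b)

def similar_hits(query_frags: set[str], cutoff: float = 0.5) -> list[str]:
    """Return records whose fragment sets pass the similarity cutoff."""
    return [name for name, frags in DATABASE.items()
            if similarity(query_frags, frags) >= cutoff]

# Exact substructure query: both parabens share the benzene + ester core.
print(substructure_hits({"benzene", "ester"}))

# Similarity query for a hypothetical new molecule: retrieves close
# analogs even though no record matches it exactly.
print(similar_hits({"benzene", "ester", "hydroxyl", "propyl"}))
```

The similarity query is what widens retrieval beyond exact matches, which is how sub-structure searching recovers the additional relevant records described above.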

This increase in retrieval of records is welcomed in most cases but it can also result in too much of a good thing with search results sometimes bringing back an overwhelming amount of information that has to be sorted out into that which is relevant and that which is noise. The key to successfully using this type of capability is investing effort up front in the creation of the search strategy. Sometimes the biological effect of a molecule is dominated by other features and the sub-structure element becomes irrelevant. Recognizing those instances and crafting the search to eliminate them increases the efficiency of the search. Unfortunately there is no quick and easy way to address this. The appropriate selection and use of the information that is retrieved will depend on the experience of the assessor and his or her understanding of the potential chemical-biological interactions that are relevant.

Up to this point, we have been discussing capability to better utilize information already available in-house. For companies such as ours, this represents a significant source of information. However, there is also information from government databases, industry associations, and commercial sources that can be tapped as well. A partial list of those sources is included in Table 1. These resources further expand the type and quantity of existing information available for assessing ingredients, and most have structure search capabilities.

Of course, once the assessor starts looking into external databases for information, the number of records and the types of information retrieved increase dramatically. For instance, our internal database spans more than 50 years and includes nearly 50,000 records on approximately 2,000 ingredients. If we expand our search to include the databases listed in Table 1, the pool of available information increases to nearly 500,000 records on more than 80,000 chemicals. Creating appropriate search criteria and sorting out the relevant information from such a large pool is a significant task. However, help is available here as well. For instance, the Joint Research Centre recently published guidelines to help researchers find and retrieve information on the use and application of alternative methods to testing with animals (Roi et al., 2011). While these guidelines emphasize searches for alternative methods, they also provide sound guidance for search strategies for safety information in general and identify available sources of this type of information.
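
Pooling internal and external hits raises an immediate bookkeeping problem: the same study may be indexed by more than one database. A simple sketch of de-duplicating merged results by a shared canonical identifier (all records and sources here are hypothetical examples):

```python
# Merge hit lists from several databases, dropping records that duplicate
# an already-seen (identifier, title) pair -- e.g. the same study indexed
# by both an in-house database and a public one. Records are hypothetical.

internal_hits = [
    {"cas": "151-21-3", "source": "in-house", "title": "1988 repeat-dose summary"},
]
external_hits = [
    {"cas": "151-21-3", "source": "ToxRefDB", "title": "Subchronic oral study"},
    {"cas": "151-21-3", "source": "ChemIDplus", "title": "1988 repeat-dose summary"},
]

def pool(*sources):
    """Concatenate hit lists in order, keeping only the first occurrence
    of each (cas, title) pair."""
    seen, merged = set(), []
    for hits in sources:
        for rec in hits:
            key = (rec["cas"], rec["title"])
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

print(len(pool(internal_hits, external_hits)))   # 2 unique records, not 3
```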

Risk Assessment Methods

Developing capability to effectively gather the relevant information is an important step that, in and of itself, can deliver significant gains in reducing the likelihood of additional testing with animals. However, what happens when there is still not enough information to fill all the gaps, even after applying these expanded information-gathering techniques? This is where a more synthetic approach is necessary to complete the risk assessment. Fortunately, there has been a lot of progress in this area as well. For the purposes of this discussion, I will comment on two methods that are of particular importance to our work: Read Across and the Threshold of Toxicological Concern (TTC).

Read Across

Once the available information is identified and retrieved, how can it be applied to assess a candidate ingredient? Often the available information is incomplete and does not cover the range of questions to be answered. This commonly happens when assessing new ingredients or new uses of existing ingredients. In this situation, the assessor can often develop answers by utilizing information from appropriately similar molecules (e.g., similar structure, physicochemical properties, and/or biological activities) and projecting an informed decision about the candidate ingredient.

Read Across, and other methods based on structural similarities, can be powerful tools for bridging gaps in information by assembling knowledge from related materials and using that information to predict the behavior of an unknown chemical. The approaches rely on structural similarities and some broadly applicable concepts about the biological profiles attributed to certain structural features.

While this approach has been used successfully for many assessments, it does have to be applied cautiously and with a clear appreciation that structural elements alone do not determine the biological effects. Other factors such as metabolism and mechanism of action can have profound effects on the biological consequences. This caution was highlighted in a recent review of REACH submissions, where the quality of the Read Across assessments varied substantially depending on how well the assessors selected the structural analogs (Rovida et al., 2011).

With that in mind, there is considerable work underway to take these factors into account and improve the predictability of structure-derived approaches. Structural analogs can be grouped into more predictive subsets on the basis of toxicological and physicochemical properties using expert judgment (OECD, 2007). A further refinement comes from applying an explicit framework for expert judgment that accounts for biotransformation and the factors that impact toxicokinetics and, ultimately, toxicity (Wu et al., 2010; Blackburn et al., 2011).

These approaches can be applied to organize and evaluate the suitability of analogs for projecting the effects of a candidate molecule. The analogs are sorted into four categories: suitable, suitable with interpretation, suitable with pre-condition, or unsuitable. Suitable means the analog can be applied directly in a Read Across scheme with few reservations because the molecule is nearly identical to the target molecule on the basis of structure, reactivity, metabolic pathways, and physicochemical properties. Suitable with interpretation means the candidate analog has most of the features relevant for biological reactivity but may have different physicochemical properties, such as a longer chain length, that could alter bioavailability and kinetics. Suitable with pre-condition means the candidate analog could be appropriate for Read Across if certain conditions are met; for instance, if the candidate molecule quickly generates the analog as part of biotransformation, or vice versa, then the analog would be suitable with pre-conditions.
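
The sorting logic can be illustrated with a toy classifier. The attribute flags and decision rules below are simplified assumptions for illustration, not the published framework:

```python
# Toy sorter for analog suitability. The boolean attributes and the
# rules mapping them to the four categories are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Analog:
    name: str
    same_reactive_features: bool   # shares the features driving biological reactivity
    same_physchem: bool            # comparable physicochemical properties
    rapid_interconversion: bool    # quickly metabolised to/from the target molecule

def categorize(a: Analog) -> str:
    if a.same_reactive_features and a.same_physchem:
        return "suitable"
    if a.same_reactive_features:
        return "suitable with interpretation"   # e.g. longer-chain homolog
    if a.rapid_interconversion:
        return "suitable with pre-condition"    # e.g. rapid metabolite
    return "unsuitable"

analogs = [
    Analog("near-identical homolog", True, True, False),
    Analog("longer-chain homolog", True, False, False),
    Analog("rapid metabolite", False, False, True),
    Analog("unrelated structure", False, False, False),
]
for a in analogs:
    print(f"{a.name}: {categorize(a)}")
```

Because each record carries the flags that drove the decision, the output doubles as the per-analog justification mentioned below for documenting the assessment.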

Using these approaches for selecting appropriate analogs for Read Across not only produces more transparent/consistent results in the selection of analogs, but going through the sorting process using the defined criteria also provides the justification for each selection that can be used in the documentation of the assessment.

Threshold of Toxicological Concern

Another useful approach for making risk decisions about low level exposures in the absence of a full data-set is the TTC. This takes a statistical approach to identify a boundary, below which the probability of an adverse health effect is judged to be sufficiently low such that no further evaluation is needed. It is based on the concept of the Threshold of Regulation, which was established by the US Food and Drug Administration for assessing indirect food additives (Food and Drug Administration, 1995). Since its introduction, the TTC concept has been developed further for assessment of cancer and non-cancer endpoints across numerous domains of chemicals and applications (Kroes et al., 2004; Blackburn et al., 2005; Felter et al., 2009; Laufersweiler et al., 2012). Favorable opinions on TTC have recently been published by EFSA (2012) for broad application to food, and by the three EU non-food scientific committees for broad application to cosmetics and consumer products (SCCS [Scientific Committee on Consumer Safety], 2012).

TTC relies on grouping of molecules based on structural features that have been associated with chemical reactivity or toxicological potential. Within a grouping, the TTC approach assumes that the target molecule is as toxic as the most toxic molecules in the group. This is an inherently conservative approach, well suited for screening-level assessments or for assessments of materials present at very low levels in products. When TTC values have been reassessed using newer or expanded datasets and compared to the original values, the analyses consistently demonstrate that the initial assumptions were appropriately conservative for both cancer (Kroes et al., 2004) and non-cancer endpoints (EFSA, 2012).
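
At screening level, a TTC check reduces to a threshold lookup. The exposure thresholds below are the Cramer-class values published by Kroes et al. (2004), in µg/person/day; a real assessment also involves structural alerts and exclusion categories not modelled here:

```python
# Minimal TTC screening check. Thresholds (ug/person/day) follow the
# Cramer-class values of Kroes et al. (2004); this sketch omits the
# decision-tree steps (exclusions, structural alerts) of a real assessment.

TTC_UG_PER_DAY = {
    "genotoxic alert": 0.15,
    "cramer III": 90.0,
    "cramer II": 540.0,
    "cramer I": 1800.0,
}

def below_ttc(exposure_ug_day: float, chemical_class: str) -> bool:
    """True if the estimated exposure falls below the class threshold,
    i.e. no further toxicological evaluation is triggered."""
    return exposure_ug_day < TTC_UG_PER_DAY[chemical_class]

print(below_ttc(50.0, "cramer III"))    # True: 50 < 90
print(below_ttc(200.0, "cramer III"))   # False: 200 > 90
```

Note the conservatism is carried entirely by the grouping: assigning a molecule to a more reactive class immediately lowers the exposure it may have before further evaluation is required.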

While TTC can be a useful risk assessment tool for many applications, it does have limitations. The TTC was developed to assess systemic effects. Work is underway to expand the application to local effects.

We have incorporated both of these approaches, Read Across and TTC, into our evaluation process for new ingredient assessments. Each has had a significant impact on our ability to evaluate safety without resorting to additional testing. In 2011, nearly 90% of our safety evaluations worldwide were completed based on utilization of existing information and advanced risk assessment methods like Read Across and TTC, with the remaining assessments completed primarily with replacement methods and clinical studies.

Integrated Testing Strategies

There is a lot that can be achieved using the approaches just described but there are still circumstances when more information is necessary and studies are used to generate that information. For those situations the assessor is faced with deciding which study, or studies, would yield the most relevant information for the risk decision. A poorly informed decision could result in unnecessary testing with animals.

Compounding the difficulty of this decision is an ever increasing body of knowledge about the biology involved, along with data from accompanying assays that can identify molecular events within a pathway leading to toxicity. As our understanding of the mechanisms of toxicity and the events leading to a biological effect increases, we are faced with an ever widening array of types of information that could potentially be useful. In addition to the array of potential experiments is the equally complex problem of deciding how data from a new experiment can be incorporated into the existing body of knowledge about the candidate chemical. How does one fit information from fundamental biology, gene expression for instance, into the body of existing information from in vivo studies, or incorporate results from different test systems for the same endpoint?

Fortunately, a lot of new capability is available in this area as well. Thanks to progress in our understanding of pathways leading to adverse outcomes, there are now more instances where we have a conceptual framework for piecing together information from a variety of sources to make a risk decision. Each of those pieces may not, in and of itself, be sufficient to make a decision, but when assembled within the framework, a decision can often be made. The Integrated Testing Strategy (ITS) is an approach for accomplishing this (Institute for Health and Consumer Protection, 2005; Jaworska and Hoffmann, 2010b).

There are many approaches used to develop an ITS. We have chosen a Bayesian Network model (Jaworska et al., 2010a; Jaworska and Hoffmann, 2010b). This is a probabilistic modeling approach that incorporates the available understanding of biological mechanisms, interactions, and dependencies to form testable hypotheses. Information from a variety of sources is first incorporated into the model and then evaluated for its relative contribution to the hypothesis. Information from one source that overlaps with information from another source is examined; the source with the most influence on the model is identified and its contribution to the decision is quantified. This enables the assessor to identify where the next round of information should come from to be most useful to the risk decision. The new information is then incorporated into the model, and the contributions from the various sources of information are again evaluated to identify the new "next" experiment that would generate information to further reduce uncertainty in the hypothesis.
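
The "which experiment next" step can be illustrated with a toy value-of-information calculation: update the belief with Bayes' rule for each possible assay outcome, and choose the assay whose expected post-test uncertainty (entropy) is lowest. The assay names and performance numbers are invented for illustration, not taken from the cited models:

```python
# Toy value-of-information step for an ITS: pick the assay expected to
# reduce uncertainty about P(sensitizer) the most. Assay sensitivities
# and specificities are invented for illustration.

import math

def entropy(p: float) -> float:
    """Shannon entropy (bits) of a binary belief with P(true) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def posterior(prior: float, sens: float, spec: float, positive: bool) -> float:
    """Bayes update of P(sensitizer) after one assay result."""
    if positive:
        num = sens * prior
        den = sens * prior + (1 - spec) * (1 - prior)
    else:
        num = (1 - sens) * prior
        den = (1 - sens) * prior + spec * (1 - prior)
    return num / den

def expected_entropy(prior: float, sens: float, spec: float) -> float:
    """Belief entropy averaged over the assay's possible outcomes."""
    p_pos = sens * prior + (1 - spec) * (1 - prior)
    return (p_pos * entropy(posterior(prior, sens, spec, True))
            + (1 - p_pos) * entropy(posterior(prior, sens, spec, False)))

prior = 0.5                      # maximally uncertain starting belief
assays = {                       # name: (sensitivity, specificity)
    "peptide reactivity": (0.80, 0.85),
    "cell activation":    (0.70, 0.75),
}
best = min(assays, key=lambda a: expected_entropy(prior, *assays[a]))
print(best)   # peptide reactivity
```

After the chosen assay is actually run, its result replaces the prior via `posterior(...)` and the loop repeats, which is the self-improving cycle described in the text.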

The advantage of this type of approach for integrated testing is that not only is the model rooted in a fundamental understanding of the biology, but it is self-improving with each cycle and enables the identification of the best next experiment. As a demonstration of this concept, Jaworska et al (2011) developed a Bayesian Network that models allergic contact dermatitis and used it in a case study approach to project the most relevant next studies for refining the predictability of the model for a series of chemicals classified as non-sensitizers, weak, moderate, and strong sensitizers.

While it is still too early to know how impactful this approach will be for reducing animal testing, it is an approach that we think will result in meaningful reductions and improve the efficiencies of toxicology programs.

Conclusions

All of the approaches discussed can be applied to reduce testing on animals. They are available now, and advances in science and technology will only increase their capability and predictability. They are approaches that companies both large and small can develop and use, and they represent a legitimate way forward toward the reduction of testing with animals.

Table 1.

Searchable Chemicals Databases

Database | Sponsor | Access
Distributed Structure-Searchable Toxicity Database Network (DSSTox) | US-EPA National Center for Computational Toxicology | Open
Aggregated Computational Toxicology Online Resource (ACToR) | US-EPA National Center for Computational Toxicology | Open
Toxicity Reference Database (ToxRefDB) | US-EPA National Center for Computational Toxicology | Open
AMBIT | CEFIC Long-range Research Initiative | Open
ChemIDplus | US National Library of Medicine | Open
SciFinder | American Chemical Society | Subscription
eChemPortal (a) | OECD | Open

(a) No structure search capability, but extensive synonym search capability.

©2012 Mark Lafranconi

References
Blackburn, K., Bjerke, D., Daston, G., Felter, S., Mahony, C., Naciff, J., et al. (2011). Case studies to test: A framework for using structural, reactivity, metabolic and physicochemical similarity to evaluate the suitability of analogs for SAR-based toxicological assessments. Regul. Toxicol. Pharmacol. 60, 120-135.

Blackburn, K., Stickney, J.A., Carlson-Lynch, H.L., McGinnis, P.M., Chappell, L. & Felter, S.P. (2005). Application of the threshold of toxicological concern approach to ingredients in personal and household care products. Regul. Toxicol. Pharmacol. 43, 249-259.

EFSA Scientific Committee. (2012). Scientific Opinion on Exploring options for providing advice about possible human health risks based on the concept of Threshold of Toxicological Concern (TTC). EFSA Journal. 10, 2750-2853.

Felter, S., Lane, R.W., Latulippe, M.E., Craig Llewellyn, G., Olin, S.S., Scimeca, J.A. & Trautman, T.D. (2009). Refining the threshold of toxicological concern (TTC) for risk prioritization of trace chemicals in food. Food Chem. Toxicol. 47, 2236-2245.

Food and Drug Administration. (1995). Food Additives: Threshold of regulation for substances used in food-contact articles. Final Rule. 21 CFR 60, 36595.

Institute for Health and Consumer Protection. (2005). REACH and the Need for Intelligent Testing Strategies. JRC 1-35.

Jaworska, J., Gabbert, S. & Aldenberg, T. (2010a). Towards optimization of chemical testing under REACH: A Bayesian network approach to Integrated Testing Strategies. Regul. Toxicol. Pharmacol. 57, 157-167.

Jaworska, J., Harol, A., Kern, P.S. & Gerberick, G.F. (2011). Integrating non-animal test information into an adaptive testing strategy – Skin sensitization proof of concept case. ALTEX. 28, 211-225.

Jaworska, J. & Hoffmann, S. (2010b). Integrated Testing Strategy (ITS) – Opportunities to better use existing data and guide future testing in toxicology. ALTEX. 27, 231-242.

Kroes, R., Renwick, A.G., Cheeseman, M., Kleiner, J., Mangelsdorf, I., Piersma, A., et al. (2004). Structure-based thresholds of toxicological concern (TTC): Guidance for application to substances present at low levels in the diet. Food Chem. Toxicol. 42, 65-83.

Laufersweiler, M.C., Gadagbui, B., Baskerville-Abraham, I.M., Maier, A., Willis, A., Scialli, A.R., et al. (2012). Correlation of chemical structure with reproductive and developmental toxicity as it relates to the use of the threshold of toxicological concern. Regul. Toxicol. Pharmacol. 62, 160-182.

OECD. (2007). Guidance on Grouping of Chemicals. OECD Environmental Health and Safety Publications 80.

Rovida, C., Longo, F. & Rabbit, R.R. (2011). How are reproductive toxicity and developmental toxicity addressed in REACH dossiers? ALTEX. 28, 273-294.

Roi, A.J., Richmond, J. & Grune, B. (2011). The ECVAM Search Guide – Good Search Practices on Animal Alternatives. JRC 1.

SCCS [Scientific Committee on Consumer Safety]. (2012). SCCS/SCHER/SCENIHR opinion on the use of the Threshold of Toxicological Concern (TTC) Approach for Human Safety Assessment of Chemical Substances with focus on Cosmetics and Consumer Products. SCCP 1171/08.

Wu, S., Blackburn, K., Amburgey, J., Jaworska, J. & Federle, T. (2010). A framework for using structural, reactivity, metabolic and physicochemical similarity to evaluate the suitability of analogs for SAR-based toxicological assessments. Regul. Toxicol. Pharmacol. 56, 67-81.
