Some Considerations as ICCVAM Moves Forward
Published: December 6, 2007
Martin Stephens, Ph.D.
Johns Hopkins Bloomberg School of Public Health,
Center for Alternatives to Animal Testing
615 N Wolfe St.
Baltimore MD 21205 USA
The feedback from all quarters was remarkably consistent. Interested parties felt that the draft plan was largely a catalogue of ongoing activities that assumed ICCVAM would continue to operate primarily as an organization that responded to the activity of others, rather than as a driver of change. Commentators wanted ICCVAM to craft more of a strategic plan with clear goals and objectives, with enough information on “deliverables” to enable future observers to determine the extent to which ICCVAM succeeded in realizing its vision.
I had the privilege of serving on the SACATM Working Group on the plan and in that capacity, I attended the Town Hall meeting and the subsequent SACATM meeting that was devoted, in part, to discussion of the draft plan. I also was honored to have served on SACATM and various expert panels of ICCVAM and its operational arm, the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM).
From those vantage points, I want to offer several of my own recommendations and observations relevant to ICCVAM’s future activities, as well as underscore a few points that others have made. My intent is to offer constructive criticism that can strengthen the good work carried out by the dedicated staff of ICCVAM/NICEATM. I focus primarily on procedural issues because, in my view, these did not receive the attention they deserved during the commentary on the draft five-year plan.
Don’t Give the Reference Test a Free Pass
ICCVAM/NICEATM have done a commendable job in pulling together existing data on proposed alternative methods under review. These data are then analyzed by ad hoc expert panels. However, ICCVAM/NICEATM have fallen short in providing these expert panels with performance data on the in vivo reference methods that the alternatives are intended to partially or fully replace. Granted, good quality data on the relevance of the reference methods are often lacking. However, even in the extreme case in which no data on relevance are available, data on reliability should be compiled, analyzed, and made available. Does the reference method yield reproducible data within and between laboratories? In other words, how well does it predict itself? And what is known about its other limitations, such as subjective scoring, irrelevant dosing, over- or under-prediction, etc.?
In my experience, lack of attention to these issues has led ICCVAM review panels to uncritically accept the reference standard as the “gold standard”. Yet validation typically should be a comparative process, namely, how good is the new test compared to the old one? Alternative test methods will never be given a fair assessment if the performance of the reference method is arbitrarily set at 100%; data from the new method will inevitably show a less than perfect correlation to data from the reference method. The deck should not be stacked against the alternative tests. We need fair and thorough assessments of the strengths and limitations of the reference method as well as those of the alternative method.
Don’t Let Assessments Turn into a Misguided Quest for Perfection
Just as evaluations of new methods can fall short by ignoring the strengths and limitations of the reference methods, they can also fall short by insisting that the new method be close to perfection, rather than simply being as good as the reference method. I have witnessed this process repeatedly. Given the absence of solid information on the performance of the reference methods, expert panels naturally focus their critical eye on the strengths and limitations of the new methods. Such panels, especially those dominated by academics poorly briefed on the practical aspects of validation, act as if it is open season on the alternative methods, pointing out limitation after limitation, recommending improvement after improvement. Such further work inevitably takes time and money, and could further delay implementation. The quest for perfection should not tie our hands for the present.
Let Test Developers Advocate for Their Methods
One of the best validation assessments carried out by ICCVAM was the one involving the Local Lymph Node Assay (LLNA). The test developers played an active role in the deliberations, making presentations, answering questions from the review panel, fielding requests for additional information, etc. It was a bit like a court proceeding in which the plaintiffs were allowed to make their case to the jury. ICCVAM’s LLNA proceedings had an honest give-and-take to them, and the people who knew the most about the methods were heavily involved. In contrast, later assessments of other methods resemble a criminal trial in which the defendant may make an opening statement but is then barred from further involvement in the proceedings. No doubt there are sound reasons not to have the proceedings dominated by the test developers. Nonetheless, the current situation means that reviews get bogged down in details that could be readily resolved by the developers. We need to find a way to preserve objectivity and independence yet leave room for more involvement by the advocates for the tests.
Don’t Reinvent the Wheel, Get MAD
The far-reaching impact of the Organisation for Economic Co-operation and Development (OECD) in the chemical testing arena stems largely from the Mutual Acceptance of Data (MAD) treaty obligations among OECD member countries. That is, data generated pursuant to an OECD test guideline in one OECD member country (if conducted according to standards) must be accepted by the regulatory authorities of other member countries. Sadly, no comparable principle operates in the field of validation, between ICCVAM and other validation centers, such as the European Centre for the Validation of Alternative Methods (ECVAM) and the Japanese Center for the Validation of Alternative Methods (JaCVAM). To be sure, representatives from these centers can participate in each other’s deliberations, and there is some level of expedited review of methods already reviewed by other validation centers. Nonetheless, observers of ICCVAM/NICEATM have been disappointed in the lack of ready translation of methods accepted as valid overseas to the United States. What a shame given the stakes for animals and science, and the limited resources of all centers, especially ICCVAM. This issue must be a top priority for ICCVAM, given that ECVAM clearly is far out in front of the US in validating alternative test methods and officially judging these methods to be valid. I would go as far as saying that if ICCVAM could accomplish only one thing, it should be to move heaven and earth so that whatever methods were declared to be valid in Europe would automatically be declared valid in the US.
Don’t Settle for Guesswork When Setting Priorities Based on Animal Welfare
All agree that the pursuit of alternative methods should be undertaken in order to advance animal welfare as well as science. In this field as in others, limited resources force difficult decisions about priorities. Priorities grounded in animal welfare should take account of not only the degree of suffering associated with a particular test, but also the number of animals used in a test over a given period of time, say a year. This presents a problem in the United States and in some other parts of the world because, while we can make informed judgments about levels of suffering, it is much harder to come up with good data on animal numbers. Rather than accept this state of affairs and rely on guesses instead of data, ICCVAM/NICEATM and the ICCVAM member agencies should make a concerted effort to compile good statistics on animal numbers, or at least develop measures of the relative numbers of animals subjected to various tests per year. It is noteworthy that one such member agency is the U.S. Department of Agriculture, which is charged with gathering statistics on animal usage under the Animal Welfare Act. The USDA, among others, should work with ICCVAM/NICEATM to improve the information basis of priority setting.
The above recommendations are largely procedural in nature; most can be implemented no matter what substantive priorities emerge from the five-year plan process. Many observers have privately concluded that ICCVAM/NICEATM is broken and needs to be fixed. No doubt some of the dissatisfaction results ultimately from the limited number of ICCVAM’s successes (i.e., methods reviewed and judged to be valid). However, I also believe that much of the dissatisfaction stems from the sorts of procedural issues identified here. Increased attention to these issues in ICCVAM’s next five years should improve the process and the payoff.
©2007 Martin Stephens