A brief introduction to Systematic Review

by Marilyn Matevia, Humane Society of the United States
Posted: February 13, 2015

The exponential proliferation of biomedical data in scientific literature and public databases makes systematic reviews more necessary than ever. At the same time, a corresponding proliferation of powerful computing aids to search, aggregate, and evaluate this literature makes systematic reviews increasingly thorough and useful. In the field of toxicology especially, systematic reviews are showing promise as a tool for identifying knowledge gaps, accounting for lack of concordance between species, and reducing unnecessary additional animal testing.

What is a systematic review, and how does it differ from a narrative literature review?

A systematic review of literature evaluates and summarizes the body of evidence for a specific treatment effect, chemical exposure, or any number of other well-defined research questions (Petticrew, 2001). It is a comprehensive survey of relevant studies (both published and, when publicly accessible, unpublished), and a detailed, critical examination of the methods and findings of each study. For transparency and reproducibility, the authors of a systematic review should follow an established review protocol (such as those provided by the Cochrane Collaboration, the Campbell Collaboration, or the Center for Evidence-Based Medicine), or include details about their own procedure within the review article. Either way, another set of investigators should be able to follow the same steps and reach the same conclusion.

In contrast, a narrative literature review is rarely as comprehensive, tends to emphasize recent and/or well-known studies, and might not be transparent about the criteria for inclusion or exclusion of particular studies (Bettany-Saltikov, 2010; Khan et al., 2003).

While protocols for conducting systematic reviews may vary in particulars, there is general agreement on the broad outlines (Garg et al., 2008; Guzelian et al., 2005; Khan et al., 2003; Leenaars et al., 2013; Moher et al., 2007; Rooney et al., 2014). A systematic review involves:

  • Formulating a clear research question: e.g., did published animal studies indicate that administering nimodipine (a calcium channel blocker) could reduce cell death following ischemic stroke (Horn et al., 2001); are patients who take lithium for bipolar disorder at risk for renal failure (McKnight et al., 2012); do animal studies indicate any potential for carbon nanotube toxicity (van der Zande et al., 2011)?
  • Establishing procedures with specific, reproducible parameters for identifying relevant studies from publicly available databases (e.g., specifying which literature databases will be targeted, which languages included, and the range of publication years admitted)
  • Establishing additional well-defined and reproducible criteria (“filters”) for determining which studies from the initial screen will be included or excluded (e.g., experimental designs; number of subjects; species), as illustrated in the sketch following this list
  • Clearly defining criteria for categorizing study types, describing methods, and evaluating the methodological quality of each study selected
  • Extracting, evaluating, and summarizing the findings
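
To make the screening step concrete, here is a minimal, hypothetical sketch of how pre-specified inclusion/exclusion filters might be applied to records retrieved from a literature search. The record fields, species, languages, year range, and subject-count threshold below are illustrative assumptions, not criteria from any published protocol; the point is only that the filters are defined up front and applied uniformly, so another team could reproduce the screen.

```python
# Hypothetical illustration of pre-specified inclusion/exclusion filters.
# The record fields, species, languages, year range, and subject threshold
# are invented for this sketch; a real protocol pre-registers its own criteria.

RECORDS = [
    {"id": 1, "year": 2003, "language": "en", "species": "rat",
     "design": "controlled", "n_subjects": 24},
    {"id": 2, "year": 1988, "language": "en", "species": "mouse",
     "design": "controlled", "n_subjects": 8},
    {"id": 3, "year": 2010, "language": "de", "species": "rat",
     "design": "case report", "n_subjects": 1},
]

CRITERIA = {
    "years": (1990, 2014),        # range of publication years admitted
    "languages": {"en"},          # languages included
    "species": {"rat", "mouse"},  # species of interest
    "designs": {"controlled"},    # admissible experimental designs
    "min_subjects": 10,           # minimum number of subjects
}

def passes_filters(record, criteria):
    """Return True if a record meets every pre-specified inclusion criterion."""
    year_lo, year_hi = criteria["years"]
    return (
        year_lo <= record["year"] <= year_hi
        and record["language"] in criteria["languages"]
        and record["species"] in criteria["species"]
        and record["design"] in criteria["designs"]
        and record["n_subjects"] >= criteria["min_subjects"]
    )

included = [r["id"] for r in RECORDS if passes_filters(r, CRITERIA)]
excluded = [r["id"] for r in RECORDS if not passes_filters(r, CRITERIA)]
print("Included:", included, "Excluded:", excluded)  # Included: [1] Excluded: [2, 3]
```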

A systematic review might also incorporate meta-analysis – a statistical procedure for pooling the results of two or more studies in order to more precisely determine the overall strength and reliability of the findings (Garg et al., 2008; Haidich, 2010).
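
As a toy illustration of the pooling step, a fixed-effect, inverse-variance meta-analysis weights each study's effect estimate by the inverse of its variance, so more precise studies contribute more to the pooled estimate. The effect sizes and standard errors below are invented for the sketch; a real meta-analysis would also assess heterogeneity across studies (often with a random-effects model), study quality, and publication bias.

```python
import math

# Hypothetical study-level effect estimates (e.g., standardized mean
# differences) and their standard errors; the numbers are invented.
studies = [
    ("Study A", 0.40, 0.20),
    ("Study B", 0.25, 0.15),
    ("Study C", 0.55, 0.30),
]

# Fixed-effect, inverse-variance pooling: each study is weighted by the
# inverse of its variance, so more precise studies contribute more.
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * effect for (_, effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```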

Because these structured methods evaluate the largest body of evidence available, and can provide bias-limited effect sizes based on that large data pool, systematic reviews and meta-analyses are generally considered to be the most reliable and best-documented form of evidence for medical or scientific decision-making (Burns et al., 2011; Evans, 2003). In the health sciences, systematic reviews are cited more often than any other kind of study (Bhandari et al., 2007; Patsopoulos et al., 2005).

Traditional uses of systematic reviews

The broad scope and relative objectivity of systematic reviews give them great practical value to healthcare providers and policymakers (Cook et al., 1997; Bero & Jadad, 1997; Mulrow, 1994). Physicians/clinicians use systematic reviews to digest the increasingly enormous body of literature in their fields. Policymakers use evaluations and summaries of studies to support recommendations about appropriate standards of care and best practices. In fact, systematic reviews constitute the evidence summaries on which evidence-based medicine (EBM) and other evidence-based approaches (EBA) in science and social science are based (Claridge & Fabian, 2005; Dirkx, 2006; Rosen, 2003; Stevens, 2001; Tranfield et al., 2003).

In addition, funding agencies often require that grant applicants submit systematic reviews to demonstrate the scientific necessity and methodological soundness of a proposed study (Chalmers & Nylenna, 2014; Sandercock & Roberts, 2002). Another advantage for researchers is the use of systematic reviews to identify and avoid experimental design elements that produced biased or misleading results in previous studies (Pound et al., 2004; Sandercock & Roberts, 2002). And at least two international networks – the Collaborative Approach to Meta-Analysis and Review of Animal Data from Experimental Studies (CAMARADES) and the Systematic Review Centre for Laboratory Animal Experimentation (SYRCLE) – are using systematic reviews to examine the validity of animal experiments as models for human diseases.

An emerging use of systematic reviews – evidence-based toxicology

The usefulness of systematic reviews in medicine and other scientific fields has led to their application to the field of toxicology, as well. The emerging discipline of “evidence-based toxicology” (EBT) uses systematic reviews to assess the evidence on a number of toxicological issues, such as the association between chemical exposure and hazard, the performance of a test method, and the quality of data for application to large-scale chemical risk and hazard programs such as REACH (Buckley & Smith, 1996; Guzelian et al., 2005; Hoffmann & Hartung, 2005, 2006; Silbergeld & Scherer, 2013; Stephens et al., 2013).

Systematic review is being adopted by government agencies, including the US National Institute of Environmental Health Sciences (NIEHS), the Environmental Protection Agency (EPA), and the European Union’s European Food Safety Authority (EFSA), to facilitate safety assessments. NIEHS’s Office of Health Assessment and Translation (OHAT) recently published its framework and procedures in the Handbook for Conducting a Literature-Based Health Assessment Using OHAT Approach for Systematic Review and Evidence Integration.* These and similar programs were highlighted in a November 2014 workshop hosted by the Evidence-Based Toxicology Collaboration (EBTC), where presenters and panelists also discussed “opportunities and challenges” in expanding the use of systematic reviews in toxicology.

Using systematic reviews to improve research

Systematic reviews are not just useful for summarizing evidence; they are helpful in efforts to determine the extent to which the evidence from animal studies translates to human outcomes (Hartung, 2013; Hooijmans & Ritskes-Hoitinga, 2013; Knight, 2007; Pound et al., 2004; Roberts et al., 2002; Sandercock & Roberts, 2002). They are also important sources of insight about proper study design and procedure, though they are rarely consulted this way (Jones et al., 2013). But overall, systematic reviews of animal studies are still relatively rare (Korevaar et al., 2011; Mueller et al., 2014), and of uneven quality – often failing to include important details about literature search strategies or evaluative criteria, including (especially) assessment of study bias (Bafeta et al., 2013; Garg et al., 2008; Jorgensen et al., 2006; Lundh et al., 2009; Mignini & Khan, 2006; Moher et al., 2007). Cochrane reviews have been found to exhibit more methodological rigor, reproducibility, and controls against bias (Jorgensen et al., 2006; Moseley et al., 2009).

A more fundamental concern is that until the scientific community effectively addresses publication bias – the tendency to publish only (or mostly) studies with positive findings – even rigorous systematic reviews and meta-analyses will tend to overstate effect sizes (Higgins et al., 2011; Mueller et al., 2014; Sena et al., 2014; Van der Worp et al., 2010).

A recent paper by de Vries et al. (2014) points to another important application for systematic reviews: as a tool for implementing the “3Rs” of animal testing (replacement, reduction, refinement). Systematic reviews can be used to assess the range and performance of available replacement methods (perhaps contributing to validation efforts as well), to aggregate and evaluate existing data – potentially reducing the number of animals used in unnecessary additional experiments – and to show where refinements (e.g., using alternative species, reduced exposures, etc.) are possible in study designs. In their call for evidence-based toxicology, Hoffmann and Hartung (2006) urged a re-examination of the “toxicological toolbox.” It could be that systematic review is toxicology’s equivalent of a Swiss Army knife.

*OHAT’s systematic review program is the subject of a recent Journal Watch post.

Additional resources:

Guides to conducting systematic reviews:

Organizations/Projects:

Have we missed any useful resources on systematic review? Please mention them in the comment section below.
