Systematic Review and Meta-Analysis: A Step-by-Step Guide
What is a Systematic Review and Meta-Analysis?
A systematic review identifies, appraises, and qualitatively or quantitatively synthesises evidence that meets specified criteria in order to answer a research question. A meta-analysis is often incorporated into a systematic review and statistically estimates an overall effect for a particular research question, using the study effects identified by the systematic review. Researchers may undertake a systematic review to develop a more in-depth understanding of an area of interest, summarise key findings in a particular research area, appraise the quality of evidence in a field of study, or identify gaps in the literature for future research. Although systematic reviews can be lengthy, challenging and time-consuming, the pros outweigh the cons: insights from high-quality systematic reviews and meta-analyses can inform health care decision-making (Mittman, 2004), making them extremely beneficial for both research and practice. Based on our experiences of undertaking systematic reviews and meta-analyses as doctoral students, we recommend the following steps to undertake a systematic review and meta-analysis:
Step 1: Pre-register
Pre-registration of a protocol outlining how you intend to undertake your review has many benefits (see here for more on that). Pre-registration is becoming increasingly important in systematic review studies to promote transparency of methods, reduce the potential for bias and prevent duplication of review topics (Stewart, Moher & Shekelle, 2012).
Step 2: Define Research Question
As with any research, the research question(s) for a systematic review are typically narrow in focus. Frameworks such as PICOS (Population, Intervention, Comparison, Outcomes, Study design; Methley et al., 2014) can assist with generating research questions by defining the key concepts relevant to them.
Step 3: Develop a Search Strategy
The research question(s) will influence your inclusion and exclusion criteria for your literature search. Put simply, these criteria are used to narrow down the relevant evidence (studies, grey literature, etc.) that will be included in the synthesis: for example, if you are looking at therapies used to treat anxiety, you would not include papers focusing on therapies for other disorders. It is also important to select relevant databases and registers; one way to do this is to look at similar systematic reviews in your field and note the databases and registers they used to search for evidence.
To find relevant papers, you need to devise specific search terms based on your framework's concepts, for example 'anxiety', 'intervention', 'anxiety treatment', 'adolescents', 'youth', etc. These search terms are used to find evidence for your review in literature databases, and it is important to use the same search terms in all the relevant databases. Searching multiple databases will yield duplicates, which are removed during screening; guidelines such as PRISMA (Page et al., 2021) describe how to report the numbers of records identified, deduplicated, screened and included. Software such as EndNote is helpful for keeping track of articles, references and citations.
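To illustrate how concept-based search terms combine, the sketch below (a plain-Python illustration; the terms and concepts are made-up examples, not a validated search strategy) ORs the synonyms within each concept and ANDs across concepts, producing the kind of Boolean string you would enter into a database's advanced search.

```python
# Illustrative only: combining synonym lists for each framework concept
# into a Boolean search string. Terms are made-up examples.
concepts = {
    "population": ["adolescents", "youth", "teenagers"],
    "intervention": ["intervention", "therapy", "treatment"],
    "outcome": ["anxiety", "anxiety symptoms"],
}

def build_query(concepts):
    # OR together synonyms within a concept, AND across concepts.
    groups = []
    for terms in concepts.values():
        groups.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(groups)

query = build_query(concepts)
print(query)
```

The same grouped structure (synonyms OR-ed, concepts AND-ed) is what most database interfaces expect, whether typed by hand or built programmatically.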
Step 4: Appraise Searches
One way to appraise your identified studies is to screen them by title, abstract and full text. At each of these stages, it is important to assess whether the identified studies match the concepts in your framework and your inclusion/exclusion criteria. A useful tool for screening studies is Rayyan (Ouzzani et al., 2016).
Step 5: Synthesis, Analysis, and Presentation
A quality assessment should be conducted on each included study to evaluate potential sources of bias, using, for example, the Downs and Black (1998) checklist for assessing the methodological quality of health care interventions. Studies with poor quality assessment scores should still be included: one of the main aims of a systematic review is to present and synthesise ALL the relevant literature on a topic, so literature for or against, or of 'lower quality', still warrants inclusion.
A summary table including each study should be written up, with headings such as (but not limited to): study design, sample characteristics, measures used and results. This will also help you write and structure your analysis and results section. Analytic techniques in systematic reviews can include qualitative analysis (i.e. narrative synthesis) and quantitative analysis, otherwise known as meta-analysis. If a meta-analysis is planned, the following steps apply.
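One way to keep the summary table consistent across studies is to store one record per study and export them together. A minimal Python sketch (the study names, measures and results below are placeholders, not real studies):

```python
import csv
import io

# Placeholder records: one dictionary per included study.
studies = [
    {"study": "Study A (2018)", "design": "RCT", "n": 120,
     "measure": "GAD-7", "result": "d = 0.45"},
    {"study": "Study B (2020)", "design": "Quasi-experimental", "n": 85,
     "measure": "SCAS", "result": "d = 0.21"},
]

# Write the records out as a CSV summary table.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=studies[0].keys())
writer.writeheader()
writer.writerows(studies)
print(buffer.getvalue())
```

Keeping every study in the same fielded format makes it easy to spot missing information and to reuse the table when structuring the results section.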
Step 1: Type of Software
Non-coding software for conducting meta-analyses includes the Cochrane Library's Review Manager (RevMan), which has online and desktop versions available. Coding-based software includes R, which has statistical packages (e.g. metafor; Viechtbauer, 2010) for undertaking meta-analysis.
Step 2: Types of Modelling
Types of modelling for meta-analysis include random-effects and fixed-effect models (a useful breakdown by Prof Wolfgang Viechtbauer can be found here). It is also important to consider whether uni-level or multi-level modelling will be employed. Multi-level modelling accounts for clustering, which refers to commonality among study effects (e.g. effects from the same paper, the same district, or the same research group; see Van den Noortgate and colleagues for more information).
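To make the fixed/random distinction concrete, here is a plain-Python sketch with made-up effect sizes and variances: the fixed-effect estimate is an inverse-variance weighted mean, while the random-effects estimate first estimates the between-study variance (here with the DerSimonian-Laird method) and adds it to each study's sampling variance before weighting. In practice a dedicated package (e.g. metafor in R) would do this, along with standard errors and confidence intervals.

```python
# Sketch of fixed-effect vs random-effects pooling in plain Python.
# yi are made-up study effect sizes; vi are their sampling variances.
yi = [0.10, 0.30, 0.60, 0.90]
vi = [0.01, 0.02, 0.04, 0.09]

def fixed_effect(yi, vi):
    # Fixed effect: inverse-variance weighted mean of the study effects.
    w = [1 / v for v in vi]
    return sum(wi * y for wi, y in zip(w, yi)) / sum(w)

def random_effects(yi, vi):
    # Random effects: estimate the between-study variance (tau^2) with
    # the DerSimonian-Laird method, add it to each sampling variance,
    # then take the inverse-variance weighted mean with the new weights.
    w = [1 / v for v in vi]
    mu = fixed_effect(yi, vi)
    q = sum(wi * (y - mu) ** 2 for wi, y in zip(w, yi))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)
    w_star = [1 / (v + tau2) for v in vi]
    return sum(wi * y for wi, y in zip(w_star, yi)) / sum(w_star)

print(f"fixed effect:   {fixed_effect(yi, vi):.3f}")
print(f"random effects: {random_effects(yi, vi):.3f}")
```

Note how the random-effects estimate gives relatively more weight to the smaller (higher-variance) studies once tau² is added, so the two pooled values differ when the studies are heterogeneous.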
Step 3: Outcome Measures
The outcome measure is the overall effect estimated from the study effects by the meta-analysis model. There are various types of outcome measures, such as odds ratios and risk ratios. It is important to note that effects from continuous and categorical data cannot be combined directly into an overall effect; they must first be converted to a common metric (see here for a handy tool to convert effect sizes into categorical or continuous forms).
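As one example of such a conversion, a (log) odds ratio can be approximately translated into a standardized mean difference (Cohen's d) by multiplying the log odds ratio by √3/π, an approximation based on the logistic distribution. A small sketch (the odds ratio of 2.5 is a made-up value):

```python
import math

# Approximate conversion between an odds ratio and Cohen's d,
# using d = ln(OR) * sqrt(3) / pi (logistic-distribution approximation).
def odds_ratio_to_d(odds_ratio):
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

def d_to_odds_ratio(d):
    # Inverse of the conversion above.
    return math.exp(d * math.pi / math.sqrt(3))

d = odds_ratio_to_d(2.5)          # made-up odds ratio
print(round(d, 3))
print(round(d_to_odds_ratio(d), 3))  # round-trips back to the original OR
```

This is only one of several published conversion formulas, and it is an approximation; the point is simply that all study effects must be on the same metric before pooling.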
Step 4: Quantifying Heterogeneity
Heterogeneity refers to the variability between study effects. I² is a commonly used statistic that quantifies the percentage of variability in the study effects attributable to heterogeneity rather than chance (Higgins & Thompson, 2002). Sources of heterogeneity can be investigated using meta-regression.
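The calculation itself is short: Cochran's Q is the weighted sum of squared deviations of the study effects from the fixed-effect mean, and I² = (Q − df)/Q expressed as a percentage. A plain-Python sketch with made-up data:

```python
# Sketch: Cochran's Q and the I^2 statistic (Higgins & Thompson, 2002).
# yi are made-up study effect sizes; vi are their sampling variances.
yi = [0.10, 0.30, 0.60, 0.90]
vi = [0.01, 0.02, 0.04, 0.09]

w = [1 / v for v in vi]
mu = sum(wi * y for wi, y in zip(w, yi)) / sum(w)    # fixed-effect mean
q = sum(wi * (y - mu) ** 2 for wi, y in zip(w, yi))  # Cochran's Q
df = len(yi) - 1
i2 = max(0.0, (q - df) / q) * 100                    # I^2 as a percentage

print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

Values of I² around 25%, 50% and 75% are conventionally read as low, moderate and high heterogeneity, though the statistic should be interpreted alongside the confidence intervals of the effects.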
Step 5: Meta-Regression
Meta-regression is a statistical technique that indicates how much of the heterogeneity in the overall effect can be explained by study characteristics modelled as moderators. For categorical moderators, meta-regression can also tell us which categories are associated with larger or smaller combined effects.
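At its core, a simple meta-regression is an inverse-variance weighted regression of the study effects on a moderator. The sketch below fits one made-up continuous moderator by weighted least squares; real analyses would use a dedicated package (e.g. metafor in R), which also provides standard errors, tests and residual-heterogeneity estimates.

```python
# Sketch: meta-regression of study effects on one continuous moderator
# via inverse-variance weighted least squares. All numbers are made up.
yi = [0.10, 0.30, 0.60, 0.90]   # study effect sizes
vi = [0.01, 0.02, 0.04, 0.09]   # sampling variances
xi = [12, 14, 16, 18]           # moderator values (e.g. mean age)

w = [1 / v for v in vi]
sw = sum(w)
xbar = sum(wi * x for wi, x in zip(w, xi)) / sw   # weighted mean of x
ybar = sum(wi * y for wi, y in zip(w, yi)) / sw   # weighted mean of y

# Weighted least-squares slope and intercept.
sxy = sum(wi * (x - xbar) * (y - ybar) for wi, x, y in zip(w, xi, yi))
sxx = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, xi))
slope = sxy / sxx
intercept = ybar - slope * xbar

print(f"effect ~ {intercept:.3f} + {slope:.3f} * moderator")
```

A positive slope here would suggest that studies with higher moderator values (e.g. older samples) tend to report larger effects, which is exactly the kind of pattern meta-regression is used to probe.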
Step 6: Sensitivity Analyses
Sensitivity analyses attempt to clarify how much confidence we can place in the inferences drawn from a meta-analysis. Historically, small studies have had a greater chance of publication if their results were statistically significant, meaning the studies included in meta-analyses are potentially affected by publication bias (Copas & Shi, 2000). There are a variety of sensitivity analysis techniques, including leave-one-out analysis, Egger's regression test and funnel plots.
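Leave-one-out analysis is the most mechanical of these: remove each study in turn, re-pool the remainder, and check whether any single study drives the overall result. A plain-Python sketch with made-up data (fixed-effect pooling for brevity):

```python
# Sketch: leave-one-out sensitivity analysis. Each study is removed in
# turn and the pooled estimate is recomputed; a large shift flags an
# influential study. Data are made up for illustration.
yi = [0.10, 0.30, 0.60, 0.90]
vi = [0.01, 0.02, 0.04, 0.09]

def pooled(yi, vi):
    # Fixed-effect (inverse-variance weighted) pooled estimate.
    w = [1 / v for v in vi]
    return sum(wi * y for wi, y in zip(w, yi)) / sum(w)

overall = pooled(yi, vi)
for i in range(len(yi)):
    rest_y = yi[:i] + yi[i + 1:]
    rest_v = vi[:i] + vi[i + 1:]
    print(f"without study {i + 1}: {pooled(rest_y, rest_v):.3f} "
          f"(overall {overall:.3f})")
```

If dropping one study moves the pooled estimate far from the overall value (or across zero), the conclusions depend heavily on that study and should be reported with appropriate caution.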
In collaboration with the Teaching for Inclusion Research Lab (iTeach)
About the authors
Ms. Eibhlín H. Walsh is a National Institute of Studies in Education doctoral candidate in the Department of Psychology, University of Limerick. Her doctoral research primarily focuses on exploring contextual and intervention factors to further understand the effectiveness and implementation capacity of post-primary school-based suicide prevention, using mixed methodologies. Prior to beginning her doctoral research, Ms. Walsh held roles in the Central Statistics Office, Ireland's national statistical agency, and in the Department of Psychology, University of Limerick, as a research assistant. Ms. Walsh is an investigator for Co-SPACE Ireland and a member of the i-Teach Lab.
Ms Amanda (Mandy) O'Dwyer is a doctoral researcher in the Department of Psychology, University of Limerick. Her doctoral research primarily focuses on the nature of social support and social identity in relation to child and adolescent mental health, with the aim of promoting positive mental health outcomes within mental health services. Prior to beginning her doctoral research, Amanda held roles in various social care settings, including disability and homeless services. Amanda is a researcher on the HRB-funded rapid response project investigating psychological responses to COVID-19 in health care workers and a member of the i-Teach Lab at the University of Limerick.
Copas, J., & Shi, J. Q. (2000). Meta-analysis, funnel plots and sensitivity analysis. Biostatistics, 1(3), 247-262.
Downs, S. H., & Black, N. (1998). The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. Journal of Epidemiology & Community Health, 52(6), 377-384.
Higgins, J. P., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta‐analysis. Statistics in Medicine, 21(11), 1539-1558.
Methley, A. M., Campbell, S., Chew-Graham, C., McNally, R., & Cheraghi-Sohi, S. (2014). PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Services Research, 14(1), 579. https://doi.org/10.1186/s12913-014-0579-0
Mittman, B. S. (2004). Creating the Evidence Base for Quality Improvement Collaboratives. Annals of Internal Medicine, 140(11), 897-901. https://doi.org/10.7326/0003-4819-140-11-200406010-00011
Ouzzani, M., Hammady, H., Fedorowicz, Z., & Elmagarmid, A. (2016). Rayyan — a web and mobile app for systematic reviews. Systematic Reviews, 5(1), 210. https://doi.org/10.1186/s13643-016-0384-4
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P., & Moher, D. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
Stewart, L., Moher, D., & Shekelle, P. (2012). Why prospective registration of systematic reviews makes sense. Systematic Reviews, 1(1), 7. https://doi.org/10.1186/2046-4053-1-7
Van den Noortgate, W., Opdenakker, M.-C., & Onghena, P. (2005). The Effects of Ignoring a Level in Multilevel Analysis. School Effectiveness and School Improvement, 16(3), 281-303. https://doi.org/10.1080/09243450500114850
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1-48. https://lirias.kuleuven.be/1059637?limo=0