Extending Your Practice

RoB and Study Quality Assessments

Risk of Bias (RoB) assessments are one method of examining the quality of the primary studies included in a systematic review. RoB assessments can be conducted as part of the study coding and data extraction stage, where reviewers code and extract information about the design of each primary study. A common misconception in systematic reviews is that study quality can be judged by a study's publication status. For example, a reviewer may assume that studies published in peer-reviewed academic journals are of higher quality than studies outside the academic literature, also known as unpublished or grey literature. However, the evidence synthesis community's current understanding of publication bias shows that academic journals tend to favor studies reporting significant, positive results. Relying on publication status therefore threatens the methodological validity of a meta-analysis by inflating effect size estimates.

A more methodologically sound approach to assessing study quality considers study design, such as whether a study used a randomized controlled trial (RCT), particularly when evaluating intervention effectiveness is the primary aim of the meta-analysis. The Evidence Pyramid places RCTs as the gold standard design in primary research. For other designs, such as quasi-experimental studies, quality checks may include assessing how the researchers handled baseline equivalence between treatment and control groups. There are also established instruments that can be used as templates for assessing RoB and its potential effects on study quality.

Lastly, it is also important to assess the quality of the meta-analysis itself. Cochrane and Campbell's protocol registration processes and the MECCIR standards both serve as quality checks for ensuring the rigor and validity of systematic reviews and meta-analyses. Additionally, our Coding the Literature video guides include information on how to approach study quality in a systematic review and meta-analysis.

Robust Variance Estimation (RVE) 

Robust variance estimation (RVE) is a statistical method that allows a meta-analyst to account for the data structure of effect sizes in a meta-analytic dataset. While univariate effect size data structures lend themselves to independent effects models, multivariate and multilevel data structures require a more nuanced approach to handle dependent effect sizes. RVE more accurately estimates the standard errors in models with dependent effect sizes, and modern RVE implementations also provide adjustments for small to moderate numbers of studies. The clubSandwich and robumeta packages in R contain functions for robust variance estimation. Our Video Guides include a lecture from Dr. Beth Tipton that provides an introduction to robust variance estimation in meta-analysis models.
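To make this concrete, here is a minimal sketch of fitting an intercept-only RVE model with robumeta. The data frame dat and its column names (effect_size, var_es, study_id) are hypothetical placeholders for a dataset with multiple effect sizes per study, not names from any particular dataset.

```r
# A minimal RVE sketch with robumeta; `dat`, `effect_size`, `var_es`,
# and `study_id` are hypothetical names for illustration.
library(robumeta)

rve_fit <- robu(
  formula = effect_size ~ 1,  # intercept-only: estimates the average effect
  data = dat,
  studynum = study_id,        # clustering variable: effects nested in studies
  var.eff.size = var_es,      # sampling variance of each effect size
  modelweights = "CORR",      # correlated-effects weighting scheme
  rho = 0.8,                  # assumed within-study correlation of effects
  small = TRUE                # small-sample correction
)
print(rve_fit)
```

The modelweights = "CORR" option treats multiple effect sizes from the same study as correlated; "HIER" is the alternative for hierarchical structures (e.g., multiple samples contributed by the same research group).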

Single-Case Experimental Designs (SCED) 

Meta-analysis methods can be used to synthesize intervention effects across multiple single-case experimental design (SCED) studies. Fields where meta-analysis of SCEDs (SCED-MA) has been useful include clinical psychology, behavior analysis, school psychology, and special education, often with populations with low-incidence conditions such as autism, attention-deficit/hyperactivity disorder (ADHD), and deafness and other hearing impairments. SCED-MAs allow statistically sound inferences to be made about intervention effectiveness for individuals, as opposed to the groups usually studied in general experimental designs. SCEDs also introduce rigor that is not normally feasible in case study or open study research designs.

Compared to experimental studies that observe differences between groups, SCEDs observe differences within individuals. Participants become their own comparison: they are measured under the control (baseline) condition in one phase of the experiment and then receive the intervention in another phase. Typical design modalities seen in SCEDs include multiple baseline and reversal designs. Multiple baseline designs establish baselines across several participants, behaviors, or settings and then introduce the intervention to each in a staggered fashion, so that changes can be attributed to the treatment rather than to the passage of time. Reversal (e.g., ABA or ABAB) designs establish a baseline, introduce the intervention, and then withdraw it to return to baseline, observing how the outcome changes between treatment and control phases. MATI provides an overview of meta-analysis methods for SCEDs as a specialized course option during the workshop.
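As an illustration of the within-individual comparison, the base-R sketch below computes the Non-overlap of All Pairs (NAP) effect size, one common SCED metric, for one participant's baseline and intervention phases. The data values are invented purely for illustration.

```r
# A minimal base-R sketch (no package API assumed) of the NAP effect size:
# the proportion of baseline/intervention pairs in which the intervention
# observation exceeds the baseline observation (ties count half).
baseline     <- c(3, 4, 2, 5, 3)     # phase A: control condition
intervention <- c(7, 8, 6, 9, 8, 7)  # phase B: treatment condition

nap <- function(A, B) {
  pairs <- expand.grid(a = A, b = B)  # all A-B comparisons
  mean((pairs$b > pairs$a) + 0.5 * (pairs$b == pairs$a))
}

nap(baseline, intervention)  # 1 = complete non-overlap between phases
```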

Power Analysis and Missing Data 

Statistical power and missing data go hand-in-hand when considering how the amount of information available from primary studies can influence the methodological quality of a meta-analysis. In primary research, statistical power is the probability of correctly rejecting a false null hypothesis (i.e., of avoiding a Type II error), and a power analysis determines the minimum sample size needed to reach a desired level of power. In other words, power analysis helps determine how much data a researcher needs to be confident of detecting a true effect. Remember that in primary research, the unit of analysis is usually the individuals in the sample; in a systematic review and meta-analysis, the unit of analysis is the primary study. Statistical power analysis for a meta-analysis therefore helps determine the minimum number of primary studies required to detect the effect hypothesized in the research question and problem statement.
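As a rough sketch of what such a calculation can look like, the function below approximates power for the average standardized mean difference in a fixed-effect meta-analysis, in the spirit of Hedges and Pigott's (2001) power methods. All input values are hypothetical planning values.

```r
# A rough fixed-effect power sketch (after Hedges & Pigott, 2001);
# the inputs below are hypothetical planning values.
meta_power <- function(d, n_per_group, k, alpha = 0.05) {
  v_i <- 2 / n_per_group + d^2 / (4 * n_per_group)  # variance of one d
  se  <- sqrt(v_i / k)                              # SE of pooled mean effect
  1 - pnorm(qnorm(1 - alpha / 2) - d / se)          # approx. two-tailed power
}

meta_power(d = 0.30, n_per_group = 25, k = 10)  # power with 10 studies
# Increase k until the returned power reaches the desired level (e.g., 0.80).
```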

Even after determining the minimum number of primary studies needed for a meta-analysis sample, an analyst may run into the issue of missing data. In a meta-analysis, the problem of missing data usually arises from the reporting characteristics of the primary studies in the sample: information necessary for computing an effect size (e.g., sample sizes, means, standard deviations, and/or convertible hypothesis test values) is omitted from the results of the primary study. One way of approaching missing data is through Exploratory Missing Data Analysis (EMDA), whose purpose is to understand the mechanisms that explain patterns of missingness within the dataset. There are three primary missingness mechanisms:

  1. Missing completely at random (MCAR): the probability of missingness is independent of both observed and unobserved data

  2. Missing at random (MAR): the probability of missingness depends only on observed data

  3. Missing not at random (MNAR): the probability of missingness is related to unobserved data, even after accounting for observed data

Once the underlying missingness pattern has been determined, statistical options for handling the missing data become clearer. Current methods focus on the utility and feasibility of the following approaches to handling missing data:

  1. Complete-case analysis, which produces unbiased estimates when data are MCAR, with the tradeoff of lower statistical power

  2. Multiple Imputation (MI) methods that replace missing data with predicted values based on the existing quantitative relationships among values in the dataset (sketched below)

  3. Full Information Maximum Likelihood (FIML), which uses all available data to estimate model parameters directly rather than imputing substitute values

Caution should be used when considering these approaches, as MI and FIML methods for meta-analysis are still in development, may need specialized software to use, and require an in-depth understanding of the distribution of the sample data to employ effectively.
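As one illustration, here is a minimal sketch of multiple imputation followed by pooling with Rubin's rules, using the mice and metafor R packages. The data frame dat and its columns (effect_size, var_es, moderator) are hypothetical, and this shows only the general workflow, not a complete treatment of MI for meta-analysis.

```r
# A minimal MI sketch with mice + metafor; `dat`, `effect_size`, `var_es`,
# and `moderator` are hypothetical names. Assumes `moderator` has missing
# values while effect sizes and their variances are complete.
library(mice)
library(metafor)

m   <- 20
imp <- mice(dat, m = m, method = "pmm", seed = 42)  # m imputed datasets

# Fit a random-effects meta-regression in each completed dataset.
fits <- lapply(seq_len(m), function(i) {
  rma(yi = effect_size, vi = var_es, mods = ~ moderator,
      data = complete(imp, i))
})

# Pool the moderator coefficient with Rubin's rules.
est   <- sapply(fits, function(f) coef(f)["moderator"])
w_var <- sapply(fits, function(f) vcov(f)["moderator", "moderator"])
pooled    <- mean(est)                           # pooled estimate
total_var <- mean(w_var) + (1 + 1/m) * var(est)  # within + between variance
c(estimate = pooled, se = sqrt(total_var))
```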

Evidence Gap Maps 

Evidence gap mapping (EGM) began in 2003 in the health sector as a method for providing a high-level view of intervention effectiveness before gaining traction in educational research and practice. Since its inception, it has been used as a method for systematically assessing an evidence base to inform policy-making decisions. EGMs complement systematic reviews and meta-analyses in that their results can strategically guide and focus research topics and questions that meta-analysis methods can then answer. The Campbell Collaboration provides methodological guidance for conducting systematic, transparent, and replicable EGMs for synthesis research. MATI covers evidence gap maps as a specialized research method during the workshop. For more information on Evidence Gap Maps with Campbell, you can visit their EGM resource page.

Meta-Analysis Structural Equation Modeling 

Meta-Analysis Structural Equation Modeling (MASEM) is a statistical method that combines the synthesis capabilities of meta-analysis with the path analysis capabilities of structural equation modeling. Dr. Mike Cheung has contributed heavily to the development of MASEM methods with his pioneering work on metaSEM, an R package that contains functions for performing MASEM. The goal of MASEM is to synthesize the structural paths between latent and observed variables to explain how constructs within a conceptual framework, theory, or phenomenon are related. In the social and behavioral sciences, this method is most often used in work involving item response theory and factor analysis. MASEM is particularly useful for explaining how measurement error impacts the structural relationships between constructs and variables within a theory or framework. MATI offers a MASEM workshop as an exploration of specialized methods in meta-analysis research. For more foundational information on MASEM, we encourage reading Cheung's seminal article on Meta-Analysis Structural Equation Modeling.
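As a small illustration of the workflow, the sketch below runs stage 1 of Cheung's two-stage structural equation modeling (TSSEM) approach in metaSEM, pooling the correlation matrices from the Digman97 dataset that ships with the package. The stage 2 structural model is omitted here for brevity.

```r
# A minimal two-stage MASEM sketch with metaSEM, using the bundled
# Digman97 data (correlation matrices of five personality factors).
library(metaSEM)

# Stage 1: pool the correlation matrices under a random-effects model
# with a diagonal between-study covariance structure.
stage1 <- tssem1(Digman97$data, Digman97$n, method = "REM", RE.type = "Diag")
summary(stage1)

# Stage 2 (not shown) would fit a structural model to the pooled matrix
# with tssem2(), supplying A (path) and S (covariance) model matrices.
```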
