Date of Award

4-2019

Degree Name

Doctor of Philosophy

Department

Educational Leadership, Research and Technology

First Advisor

Dr. Jessaca Spybrook

Second Advisor

Dr. Gary Miron

Third Advisor

Dr. David Reinhold

Abstract

Recently, higher education has begun to place a premium on rigorous research that uses randomized controlled trials (RCTs) to test the impact of educational interventions. This may be due in part to concerns about a deficiency of high-quality evidence on the effectiveness of programs, policies, and practices intended to improve undergraduate students’ outcomes. Given the naturally nested structure of higher education, e.g., students nested in colleges/universities, researchers in higher education have begun considering a specific type of RCT called a cluster randomized trial (CRT), which has been used frequently in K-12 impact research. In a CRT, whole clusters, such as colleges/universities, are assigned to treatment or control conditions. Just as in RCTs, it is critical that CRTs be designed with adequate power to detect a meaningful treatment effect. However, the multilevel nature of CRTs makes power analyses more complex than in an RCT. Two key design parameters necessary to calculate power for a CRT are the intraclass correlation coefficient (ICC), or the percent of variance in the outcome that is between clusters, and the proportion of variance in the outcome that is explained by covariates (R2). A rich body of empirical estimates of these design parameters is available in K-12 settings. However, these design parameters are context specific, and there is a lack of empirical estimates in higher education settings.
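The two design parameters above have standard two-level forms; as a sketch in conventional HLM notation (the symbols here are standard, not taken from the study), with the between-cluster (level-2) variance written as tau-squared and the within-cluster (level-1) variance as sigma-squared:

```latex
\rho = \frac{\tau^2}{\tau^2 + \sigma^2},
\qquad
R^2_2 = \frac{\tau^2_{\text{unconditional}} - \tau^2_{\text{conditional}}}{\tau^2_{\text{unconditional}}}
```

Here \(\rho\) is the ICC from the unconditional model, and \(R^2_2\) is the share of between-cluster variance explained once covariates are added.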

The purpose of this study is to empirically estimate ICCs and R2 values for planning CRTs aimed at evaluating the efficacy of cognitive skills interventions in higher education. This study uses data from the Collegiate Learning Assessment (CLA), a standardized test measuring college students’ cognitive ability. A series of two-level hierarchical linear models was employed to calculate the design parameters. The unconditional model, or model with no covariates, was used to calculate the ICCs. Models with student-level and school-level covariates were then used to calculate the R2 values. The influence of these design parameters on statistical power was examined by calculating the minimum detectable effect size under various sample sizes using the estimated design parameters.
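As a rough sketch of how these parameters enter the power calculation, the code below computes an ICC, an R2 value, and an approximate minimum detectable effect size (MDES) for a balanced two-level CRT with equal allocation to conditions. The variance components and sample sizes are illustrative placeholders, not values from the study, and the multiplier of roughly 2.8 assumes 80 percent power at a two-tailed alpha of .05 with a moderately large number of clusters.

```python
import math

def icc(tau2, sigma2):
    """Intraclass correlation: share of outcome variance between clusters."""
    return tau2 / (tau2 + sigma2)

def r2(var_unconditional, var_conditional):
    """Proportion of variance at a given level explained by covariates."""
    return 1.0 - var_conditional / var_unconditional

def mdes(J, n, rho, r2_b=0.0, r2_w=0.0, P=0.5, M=2.8):
    """Approximate MDES for a two-level CRT.

    J: number of clusters; n: students per cluster; rho: ICC;
    r2_b / r2_w: variance explained by covariates between / within clusters;
    P: proportion of clusters treated; M: multiplier (~2.8 for power .80,
    alpha = .05 two-tailed, large degrees of freedom).
    """
    return M * math.sqrt(
        rho * (1 - r2_b) / (P * (1 - P) * J)
        + (1 - rho) * (1 - r2_w) / (P * (1 - P) * J * n)
    )

# Illustrative values only: ICC of 0.25, 40 clusters of 100 students.
rho = icc(0.25, 0.75)
print(mdes(40, 100, rho))             # no covariates
print(mdes(40, 100, rho, r2_b=0.7))   # school-level pretest proxy added
```

Note how a covariate that explains much of the between-cluster variance (r2_b = 0.7 here) substantially shrinks the MDES, which is the mechanism behind the pretest-proxy finding reported below.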

Across all samples and outcomes, the ICC estimates ranged from 0.194 to 0.353. That is, between about 19 and 35 percent of the variance in test scores was between colleges/universities. Of the covariates considered, the proxy variables for the student-level and school-level pretests had the greatest explanatory power, in most cases explaining between 60 and 86 percent of the between-school variance in the outcomes. This suggests that including a proxy for the pretest, at either the student or school level, is critical in designing a CRT, as it will greatly increase the statistical power of the study to detect a meaningful effect. The empirical estimates of design parameters in this study represent the beginning of a collection of design parameters relevant to those planning CRTs to test interventions in higher education; extending this work to other outcome domains in higher education would be useful.

Access Setting

Dissertation-Open Access
