Date of Award


Degree Name

Doctor of Philosophy




Often the singular goal of an evaluation is to render a summative conclusion of merit, worth, or feasibility based on multiple streams of multidimensional data. Synthesizing those streams into a single conclusion is already difficult, and conducting evaluations in real-world settings often necessitates less-than-ideal study designs. The standard method for estimating the precision of results, the confidence interval (CI), complicates this further, because traditional CIs offer only a limited approach for understanding the precision of a summative conclusion. This dissertation develops and presents a unified approach for constructing a CI for a summative conclusion (SC).
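As background for the "traditional CI" mentioned above (and not the dissertation's own formula), the conventional two-sided CI for a single sample mean has the familiar form

```latex
\bar{x} \;\pm\; z_{1-\alpha/2}\,\frac{s}{\sqrt{n}}
```

where \(\bar{x}\) is the sample mean, \(s\) the sample standard deviation, \(n\) the sample size, and \(\alpha\) the Type I error rate. Its limitation for evaluation is visible in the formula itself: it quantifies precision for one variable at a time, not for a conclusion built from many weighted, correlated, error-laden values.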

This study derived a formula for estimating the SC and its CI that unpacks the multiple pieces of the summative conclusion and accommodates the following study elements: the Type I error rate; the number, variance, and correlation among the values used to formulate the conclusion; the performance benchmarks for critically important values; the sample size and the amount of measurement error for each value; and the amount of weight accorded to each value, all of which are under varying levels of control by the evaluator. Statistical and psychometric proofs for each of the underlying theories were presented, along with Monte Carlo simulations demonstrating how each element affects the SC.
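One of the elements listed above, the correlation among component values, can be illustrated with a small Monte Carlo sketch. In the toy model below the SC is treated as an equally weighted mean of k component values; the model, weights, and parameter values are assumptions for demonstration only, not the dissertation's actual formula.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_sc_sd(rho, k=4, n=200, reps=2000):
    """Empirical SD of an equally weighted composite of k sample means,
    where the k components share pairwise correlation rho."""
    w = np.full(k, 1.0 / k)                      # equal weights (assumption)
    cov = np.full((k, k), rho)                   # compound-symmetric correlation
    np.fill_diagonal(cov, 1.0)
    sc = np.empty(reps)
    for r in range(reps):
        x = rng.multivariate_normal(np.zeros(k), cov, size=n)
        sc[r] = x.mean(axis=0) @ w               # weighted composite of sample means
    return sc.std(ddof=1)

sd_lo = simulate_sc_sd(rho=0.0)   # independent components
sd_hi = simulate_sc_sd(rho=0.8)   # strongly correlated components
print(f"SD of SC, rho=0.0: {sd_lo:.4f}; rho=0.8: {sd_hi:.4f}")
# Positively correlated components make the composite noisier,
# so a CI for the summative conclusion must be wider.
```

Under this toy model the simulated SDs track the closed-form value \((1 + (k-1)\rho)/(kn)\) for the composite's variance, which is one way to see why ignoring the correlation among values understates the uncertainty of the SC.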

Methods were derived to fill gaps in the literature: removing sampling error and measurement error from a composite variable; constructing CIs for ordinal variables; determining the distribution of a composite variable generated from variables measured on different scales or conforming to dissimilar distributions; expanding the law of total covariance to accommodate two predictors; and computing a nonparametric reliability estimate and its CI. SAS code is presented for generating non-normal correlated data and for constructing CIs for ordinal variables. As a result, evaluators can now construct CIs for their summative conclusions, which will help the field of evaluation gain wider acceptance in the scientific community.
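The dissertation's code for generating non-normal correlated data is in SAS; as a rough illustration of what such a generator does, the sketch below uses one standard rank-based technique (in the spirit of a Gaussian-copula / Iman–Conover transform) to produce a correlated pair consisting of a skewed continuous variable and a 5-point ordinal variable. The marginal distributions, category cut points, and correlation value are illustrative assumptions, not the author's.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_mixed(n, rho):
    """Return a correlated (exponential, 5-point ordinal) pair of length n."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # correlated normals
    ranks = z.argsort(axis=0).argsort(axis=0)              # 0..n-1 per column
    u = (ranks + 1) / (n + 1)                              # correlated uniforms
    x_expon = -np.log(1.0 - u[:, 0])                       # exponential marginal
    x_ordinal = np.digitize(u[:, 1], [0.2, 0.4, 0.6, 0.8]) + 1  # levels 1..5
    return x_expon, x_ordinal

x_expon, x_ordinal = correlated_mixed(n=5000, rho=0.7)
print(np.corrcoef(x_expon, x_ordinal)[0, 1].round(2))
```

Because the rank transform and the inverse-CDF steps are monotone, the rank-order association of the latent normals carries over to the non-normal margins, which is what makes this family of techniques useful for simulating composites built from dissimilar scales.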

Access Setting

Dissertation-Open Access