Date of Award

4-2016

Degree Name

Doctor of Philosophy

Department

Evaluation

First Advisor

Dr. Lori Wingate

Second Advisor

Dr. Michael Scriven

Third Advisor

Dr. E. Jane Davidson

Fourth Advisor

Dr. Marianne Di Pierro

Keywords

Rubrics, evaluation-specific methodology, program evaluation, logic of evaluation

Abstract

Evaluation is the systematic determination of merit, worth, or significance. A core professional evaluator competency is to provide transparent and explicit evaluative conclusions. Yet, “understanding the reasoning process to establish evaluative conclusions drawn in practice has to be the field’s greatest unmet challenge” (Fournier, 1995, p. 1). This three-article dissertation studies rubrics as a tool that can assist in meeting the stated challenge when used by program evaluators to reach explicitly evaluative conclusions.

Study 1 provides an account of the history and etymology of the term rubric and analyzes the peer-reviewed program evaluation literature to determine the extent to which, and how, rubrics are portrayed in program evaluation. The literature review produced few examples of rubrics being used in program evaluation to reach explicitly evaluative conclusions.

Study 2 investigates how evaluators use rubrics as evaluation-specific tools in program evaluation and explicates how they learned to do so. It presents results of interviews with practitioners identified in Studies 1 and 2 as users of rubrics to reach evaluative conclusions. Interviewees found rubrics useful in multiple ways, including for reaching explicitly evaluative conclusions, but they rarely publish their experiences in the peer-reviewed literature; guidance about this practice is instead typically shared through mentoring.

Study 3 fills a major gap in the program evaluation literature by explaining how the form (characteristics and configuration) and function (the natural purpose) of rubrics exemplify the core logic and nature of evaluation. This explanation can also promote movement toward a shared language that will enable theorists, researchers, trainers, and practitioners, who often hail from disparate academic backgrounds, to more effectively further the theory, training, and practice of rubric use by program evaluators to make evaluative reasoning explicit.

Fournier, D. (1995). Editor's notes. New Directions for Evaluation, 1995(68), 1-4.

Access Setting

Dissertation-Open Access
