Date of Award
Doctor of Education
Educational Leadership, Research and Technology
Dr. Mary Anne Bunda
Dr. James Sanders
Dr. Daniel Stufflebeam
Dr. Howard Poole
The purpose of this study was to advance the body of knowledge about how practicing evaluators can use microcomputer programs to obtain reliable and valid content analyses of responses to open-ended survey questions. This is important because practitioners often analyze such responses as part of larger evaluation efforts. To address this general problem, content analysis experts have developed methods of survey and discovery to help delineate new category systems and obtain codes for responses based on existing categories. Fortunately, some of these methods can be implemented with general-purpose microcomputer programs. For example, key words out of context--word lists--can be generated using some spelling checker programs. In addition, information retrieval--sorting responses by category--can be accomplished with database management programs.
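The two general-purpose techniques named above (word lists and category-based retrieval) can be sketched in modern terms. The following Python sketch is illustrative only: the study used 1980s spelling-checker and database programs, and the responses and category codes shown here are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical open-ended survey responses, each paired with an
# assigned category code (both invented for illustration).
responses = [
    ("The accountability system ignores classroom realities.", "criticism"),
    ("Accountability data could help us improve instruction.", "support"),
    ("The system adds paperwork without improving teaching.", "criticism"),
]

# Key words out of context: an alphabetized word list with frequencies,
# analogous to the word lists some spelling checkers could produce.
words = Counter(
    w.strip(".,").lower() for text, _ in responses for w in text.split()
)
word_list = sorted(words.items())

# Information retrieval: group (sort) responses by category code,
# analogous to querying a database management program.
by_category = defaultdict(list)
for text, code in responses:
    by_category[code].append(text)

print(word_list[:3])
print(sorted(by_category))
```

An analyst could scan the word list for recurring terms when drafting a category system, then pull all responses under one code to check that they belong together.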
Reliability and validity of the categories developed or codes assigned were the dependent variables for two experiments. It was hypothesized that: (a) participants who used the specialized microcomputer output would create more reliable and valid category systems than those who did not use it (Experiment 1); and (b) participants who used the specialized output would produce more reliable and valid response codes than those who did not use it (Experiment 2).
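Reliability of assigned codes is commonly quantified as agreement between coders. A minimal sketch, assuming two coders, percent agreement, and Cohen's kappa; the abstract does not state which statistic the study actually used, and the code assignments below are hypothetical:

```python
from collections import Counter

# Hypothetical codes assigned by two participants to the same ten responses.
coder_a = ["crit", "supp", "crit", "crit", "supp",
           "crit", "supp", "crit", "crit", "supp"]
coder_b = ["crit", "supp", "crit", "supp", "supp",
           "crit", "supp", "crit", "crit", "crit"]

n = len(coder_a)

# Observed agreement: proportion of responses given the same code.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement, from each coder's marginal code frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2

# Cohen's kappa: agreement corrected for chance.
kappa = (observed - expected) / (1 - expected)
print(round(observed, 2), round(kappa, 2))
```

Comparing such agreement statistics between groups that did and did not use the specialized output is one straightforward way to test hypothesis (b).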
The experiments were based on a simulated evaluation effort in which fictitious teachers responded to an open-ended survey question about their school's controversial accountability system. The participants, College of Education students, were asked to first create a category system for the responses and then code them into the final category system.
The Experiment 1 null hypotheses were retained and the Experiment 2 null hypotheses were rejected. In other words, the specialized output did not appear to help experimental participants create more reliable and valid category systems; but it did help them obtain more reliable and valid codes for the responses.
The results of Experiment 2 support using specialized output to help inexperienced analysts code relatively long responses into an established category system. Future studies can focus on factors possibly related to retention of the Experiment 1 null hypotheses, including the types of specialized microcomputer output used, the responses analyzed, and the participants involved.
Frisbie, Richard D., "The Use of Microcomputer Programs to Improve the Reliability and Validity of Content Analysis in Evaluation" (1986). Dissertations. 2294.