The 1970s have seen the appearance of a number of publications in the area of "evaluation research": the effort to apply social science research methods systematically to the evaluation of action programs set up to help solve social problems. Evaluation research is thus one area in which social scientists can be of direct aid in setting public policy about social welfare services.

An excellent primer on the problems that are likely to arise in the course of an evaluation effort, and on the "conventional wisdom" that has been developed thus far, is Carol Weiss' Evaluation Research: Methods of Assessing Program Effectiveness (1972). Had her work been available when the research reported here was designed, some of the problems encountered might have been foreseen and dealt with more wisely. A number of readers have also appeared recently, including Caro's Readings in Evaluation Research (1971) and Weiss' Evaluating Action Programs (1972). As the fine 24-page bibliography in the latter volume shows, however, there is far more published material about the conceptual and methodological issues that arise in evaluation research, treated in the abstract, than there are case studies illustrating that evaluation research is often an essentially political process of conflict and bargaining among the researcher, the staff members whose program is under scrutiny, and the funding agencies. To paraphrase a famous aphorism, the sociologist who is not aware of previous research problems and mistakes is condemned to repeat them. This paper attempts to summarize some of the specific research procedures and problems that arose in evaluating a three-year pilot social service for widows, related from the obviously biased position of the evaluator.