
Abstract

Excerpt from the full-text article:

Much is being written these days about the role of evaluation in the formulation of social policy. While few writers question the need for basing policy on systematic evaluation, a good deal of the literature appears to focus on the obstacles to carrying out, as well as applying, evaluative research. By contrast, the number of studies which, in the eyes of critics, measure up to minimum standards of scientific adequacy appears to be exceedingly small. Regardless of the problems inherent in the use of research data for policy formulation, the dearth of good studies constitutes the main reason why social policy is made, by and large, without reference to information secured with the aid of systematic research.

The present paper endeavors to show how a set of empirical data, collected at four casework agencies, can serve as an aid in choosing among policy alternatives. The size of the sample and problems in design make this study a demonstration of the use of policy-relevant research rather than a substantive contribution to knowledge in agency policy formulation. The data were produced as part of an effort to evaluate the outcome of services to clients. Whereas the agency executives, who encouraged and supported the study, were mainly concerned with the results of services, the researchers in this study were of the opinion that evaluation of outcome extends beyond a determination of whether treatment was or was not helpful to most clients. Questions that loomed large pertained to differences in criteria of outcome, effectiveness of techniques of service, effect of client characteristics on outcome, and others. Evaluation, in this study, was intended to encompass several areas of concern to agency decision-makers.
