Defining the Benefits, Outputs, and Knowledge Elements of Program Evaluation

The Canadian Evaluation Society (CES) has undertaken a project to explore the benefits that can be attributed to program evaluation, the outputs necessary to achieve those benefits, and the knowledge and skills needed to produce the outputs. Benefits, outputs, and knowledge elements were articulated and confirmed through a number of consultations with CES members and the international evaluation community. The consultation process was also successful in encouraging dialogue about the nature of evaluation and in raising considerations about the definition and promotion of program evaluation. The findings of the project can be used by the CES, and indeed by other evaluation organizations, to support their advocacy and professional development initiatives, and by individual evaluators to guide their own professional development and evaluation practice.

Epilogue: Comments on the Special Issue

Understanding Economic Evaluations: A Guide for Health and Human Services

There is growing interest in the use of economic evaluation methods to assess our investments in public and charitable sector programming. Calculating the costs and consequences of different interventions provides new and better information on the relative cost-effectiveness of competing alternatives. Although more commonly used to evaluate health care options, economic evaluations are equally useful for prevention programs and community-based services. This article reviews the potential applications of economic evaluation, describes the basic methodologies used, and discusses some of the best strategies for disseminating results. Providing a balanced perspective on the methodological limitations, the article encourages evaluators and program sponsors to carefully consider the use of economic evaluation in their field.
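
The core arithmetic behind cost-effectiveness comparisons of competing alternatives can be illustrated with the incremental cost-effectiveness ratio. This is a minimal sketch; the function name and all figures are hypothetical, not drawn from the article.

```python
# Illustrative sketch of the incremental cost-effectiveness ratio (ICER),
# a standard summary statistic in economic evaluation: the extra cost per
# additional unit of effect when choosing one intervention over another.

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per additional unit of effect gained."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison of a new prevention program against standard care
# (costs in dollars, effects in whatever outcome unit the evaluation uses).
ratio = icer(cost_new=120_000, effect_new=14.0,
             cost_old=100_000, effect_old=12.0)
print(f"${ratio:,.0f} per additional unit of effect")
```

A decision-maker would then judge whether that incremental cost per unit of effect is acceptable relative to competing uses of the same funds.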

Comprehensive Costing of Support Services for Vulnerable Populations: A Case Study

Comprehensive costing of human services remains an understudied issue in evaluating health and social services. Evaluations done to date have considered either only some of the services offered to clients or restricted the examination to costs borne by programs. Most studies to date have been conducted in the United States and Britain, countries whose systems of health and social services differ from Canada's. This article presents a case study of the use of a “comprehensive costing approach” in a Canadian context. The approach examines the full range of costs of health and social services and other supports associated with assisting a person with severe and persistent mental illness to live in the community. The case study represents a pilot program (funded by the Ministry of Health in Ontario) to provide specialized support services for a consumer to live in the community. Cost comparisons in the present evaluation examined the consumer's initial months in the pilot program and a period prior to program entry, when the consumer was receiving standard care in the community, including a period of hospitalization. Costs were compared across domains such as accommodation, social benefits, and health and social services. Perspectives of program planners on the impact of costing evaluations for a case study are provided, followed by limitations and future directions for the methodology.
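
The domain-by-domain comparison described above can be sketched as a simple pre/post costing table. All dollar amounts and domain entries below are hypothetical placeholders, not the study's data.

```python
# Illustrative comprehensive costing comparison: total costs per domain
# for the period before program entry (standard care, including a
# hospitalization) versus the initial months in the pilot program.

pre_program = {"accommodation": 41_000, "social_benefits": 9_500,
               "health_social_services": 62_000}   # includes hospitalization
in_program  = {"accommodation": 18_000, "social_benefits": 11_000,
               "health_social_services": 34_000}

for domain in pre_program:
    diff = in_program[domain] - pre_program[domain]
    print(f"{domain}: {diff:+,}")

total_diff = sum(in_program.values()) - sum(pre_program.values())
print(f"total: {total_diff:+,}")
```

The point of the comprehensive approach is that every domain appears in both periods, so cost shifts between domains (for example, from hospital care to accommodation supports) are visible rather than hidden.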

The Pitfalls And The Potential Of Early Evaluation Efforts: Lessons Learned From The Health Services Sector

Evaluators often find themselves assuming a variety of roles as they examine programs and interact with the people who are connected to them. The present article proposes that this is especially true when attempting to conduct an impact evaluation very soon after a new program is initiated. Given the increasing trend toward program accountability, administrators will often commission evaluations very quickly after new programs begin, and evaluators are increasingly asked to determine the impact of a program that is not yet fully functioning. Using examples drawn from the experience of conducting an outcome evaluation of a major reorganization of a health service delivery system very soon after the changes were implemented, the unique challenges and benefits of evaluating a complex program in the early phases following implementation will be highlighted. Specifically, the varied roles that the evaluators were required to assume, and the lessons that they learned from expanding their professional boundaries, will be outlined. In addition to the diverse roles that evaluators often occupy (such as educator, consultant, researcher), those conducting early impact evaluations may find themselves acting as protocol trainers, mediators, and/or therapists for program staff and administration as they attempt to evaluate the outcome of a program that has not been fully implemented.

Validation of a French Version of the Outcome Questionnaire and Evaluation of a Counselling Service in a Clinical Setting

The first purpose of this study was to assess the psychometric quality and validity of the Mesure d'Impact-45 (MI-45) and the Mesure d'Impact-22 (MI-22), French translations of the 45-item and 22-item versions, respectively, of the Outcome Questionnaire (Lambert & Burlingame, 1996a, 1996b). The second purpose was to evaluate, by means of the MI-22, a French-language counselling program located in a clinical setting in Quebec, which provided additional information on the sensitivity to change and clinical utility of the MI-22. Eleven counsellors served a total of 216 clients (80% women) during the period of the study. The MI-22 had good internal consistency (coefficient alpha = .88) and correlated well with a criterion measure, the SCL-10, a short form of the SCL-90-R (Derogatis, 1993). Ninety clients who had taken part in at least 8 counselling sessions made clinically and statistically significant progress. Of 107 clients completing counselling during the study period, 37% recovered, 29% improved, 24% experienced no change in functioning, and 10% deteriorated. Seventy clients who completed a post-counselling telephone interview expressed a high level of satisfaction with the program, while also making suggestions for service improvement. The MI-22 was seen as useful by both clients and counsellors.
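
Classifications such as recovered/improved/no change/deteriorated are conventionally based on the Jacobson–Truax reliable change index. This sketch shows that standard computation; the abstract does not state which method or parameters the study used, and the standard deviation and reliability values below are illustrative, not the MI-22's actual psychometrics.

```python
import math

# Sketch of the Jacobson-Truax reliable change index (RCI): a pre-post
# difference is deemed statistically reliable when it exceeds what
# measurement error alone would plausibly produce (|RCI| > 1.96).

def reliable_change_index(pre, post, sd, reliability):
    se = sd * math.sqrt(1 - reliability)   # standard error of measurement
    s_diff = math.sqrt(2 * se ** 2)        # SE of the difference score
    return (post - pre) / s_diff

# Hypothetical client: a 20-point drop on a distress measure.
rci = reliable_change_index(pre=70, post=50, sd=15, reliability=0.88)
print(abs(rci) > 1.96)
```

Clients are then classified by combining this reliability criterion with a clinical cutoff: reliable improvement that also crosses the cutoff counts as recovered, reliable improvement alone as improved, and reliable worsening as deteriorated.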

Evaluation Capacity Building in the Voluntary/Nonprofit Sector

The purpose of this article is to provide an overview of priorities for evaluation capacity building in the voluntary/nonprofit sector and to raise awareness among evaluation professionals of the key issues for nonprofits that may affect evaluations. Nonprofit organizations face various challenges in evaluating their programs, projects, and activities, including the availability of resources, evaluation skill levels, the design of evaluations, and the nature of nonprofit work. Among the priorities for evaluation capacity building in the nonprofit sector that emerge from these challenges are: fostering collaboration; addressing resource and skill needs; exploring methodological challenges; and building a feedback loop into evaluation. There is presently a large opportunity to open the dialogue process in evaluation, for nonprofits to work together, and for nonprofits to work with funders and evaluators to address evaluation challenges. Evaluators have a role to play in meeting evaluation challenges in nonprofit organizations by helping to find strategies for effecting change, exchanging information with the nonprofit sector community on advances being made in this area, and ensuring that efforts are sustainable.

Preparing Non-Profits For New Accountability Demands

While the role of non-profits in Canadian society has always been important, the sector now plays a greater role as more and more government services have been transferred to it as part of the move towards governance over government. Complementing this changing role is the need, within both government and the sector itself, to enhance accountability and transparency based on evidence. While program evaluation offers a viable tool to achieve these ends, a great deal of apprehension must be overcome, as must the lack of a sound infrastructure of technical leadership capacity within the non-profit sector. This article examines these challenges within the context of non-profits in the social/health or human services areas. It suggests building evaluation capacity through a particular approach to evaluation courses, and examines the role of non-profits, an approach to teaching, and the roles of funders and educational institutions in developing this capacity. The capacity to conduct evaluation has often been ignored by funders, who may mandate an evaluation with the unrealistic expectation that it will provide accountability.

Introducing Program Teams to Logic Models: Facilitating the Learning Process

Logic models are an important planning and evaluation tool in health and human services programs in the public and non-profit sectors. This Research and Practice Note provides the key content, step-by-step facilitation tips, and case study exercises for a half-day logic model workshop for managers, staff and volunteers. Included are definitions, explanations and examples of the logic model and its elements, and an articulation of the benefits of the logic model for various planning and evaluation purposes for different audiences. The aim of the Research and Practice Note is to provide a starting point for evaluators developing their own workshops to teach program teams about logic models. This approach has been evaluated with hundreds of participants in dozens of workshops.
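
The elements of a logic model described above can be represented as a simple data structure, which is also a useful workshop exercise. The element names follow the conventional logic model chain; the example entries are hypothetical, not from the Research and Practice Note.

```python
# Minimal sketch of a program logic model: each conventional element
# (inputs through impact) maps to a list of concrete items for a
# hypothetical community health program.

logic_model = {
    "inputs":     ["staff", "funding", "volunteers"],
    "activities": ["deliver workshops", "one-on-one counselling"],
    "outputs":    ["workshops held", "clients served"],
    "outcomes":   ["improved coping skills", "reduced distress"],
    "impact":     ["healthier community"],
}

# Print the chain in order, as a facilitator might on a flip chart.
for element, items in logic_model.items():
    print(f"{element}: {', '.join(items)}")
```

Walking a program team through filling in each list, left to right, mirrors the step-by-step facilitation the workshop describes: the discipline is that each element should plausibly lead to the next.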