The Lay of the Land: Evaluation Practice in Canada in 2009
A group of 12 evaluation practitioners and observers takes stock of the state of program evaluation in Canada. Each contributor provides a personal viewpoint based on his or her own experience in the field. The contributors constitute a purposive sample selected to provide depth of view and a variety of perspectives. Each presentation highlights one strength, one weakness, one threat, and one opportunity of program evaluation as practiced in Canada. It is concluded that Canadian evaluation has matured in many ways since 2003, when a first panel scan was conducted: professional designation is a reality; the infrastructure is stronger than ever; organizations are more focused on results. Still, evaluation is weakened by gaps in advanced education and professional development, limited resources, lack of independence, rigidity in evaluation approaches, and lack of self-assessment. While the demand for evaluation and evaluators appears to be on the rise, the supply of evaluators and the financial resources to conduct evaluations are not. The collective definition of the field of evaluation still lacks clarity. There is nevertheless reassurance in looking toward the future. With an increased appetite for evaluation, evaluators could make a real difference, especially if they adopt a more systemic view of program action to offer a global understanding of organizational effectiveness. The implementation of a Certified Evaluator designation by the CES is a major opportunity to position evaluation as a more credible discipline.
Learning Through Evaluation? Reflections on Two Federal Community-Building Initiatives
In recent years, the federal government has launched numerous pilot projects to tackle complex, localized policy problems through new modes of governance involving vertical engagement with community-based organizations and horizontal collaboration across departments. A key purpose of these time-limited projects is policy learning, with an emphasis on action research and stakeholder dialogue to inform future innovation. However, realizing the possibilities for learning through pilot projects requires evaluation frameworks sensitive to the particular challenges of collaborative and community-based policy making. Through comparative case study analysis of two recent federal pilot projects, we highlight tensions in prevailing approaches and explore strategies for better alignment of federal evaluation frameworks with the needs and capacities of local communities.
Decentralization and Strengthening the Monitoring and Evaluation Capacities of Local Governments: Experiences from West Africa
Since the mid-1980s, there has been growing interest in performance management and measurement tools for local governments. Today, the debate on such tools also extends to developing countries, including those of West Africa. While the challenges of designing and using appropriate performance measurement tools there differ somewhat from those in industrialized countries, they are nevertheless highly relevant to the international debate.
Evaluating Development Programs in Afghanistan. Case Study: A Participatory Evaluation of the National Solidarity Program
This case study focuses on the monitoring and evaluation process implemented for a development program in Afghanistan, the National Solidarity Program (NSP), and on a pilot methodology for participatory evaluation at the local level. Given the weaknesses of the current evaluation system, which can produce contradictory results, the proposed participatory evaluation methodology draws on anthropological theories of gift-giving and exchange and attempts to study program impacts through experience and analysis shared by the partners. This methodology raises many operational issues, as well as issues involving the link between evaluation and program processes. The tool presented may be considered a lever for social change through evolving perceptions among the various participants.
Designing and Applying Project Fidelity Assessment for a Teacher-Implemented Middle School Instructional Improvement Pilot Intervention
This article argues that evaluations of pilot interventions have paid insufficient attention to the measurement of project fidelity and the subsequent use of fidelity results for (a) interpreting variations in project outcomes and (b) understanding the rationale for teachers' deviations from implementation protocols. The authors report on the development and application of an evaluation methodology for measuring and analyzing implementation fidelity in a middle school instructional improvement pilot project. They found that the highest implementation fidelity scores were not correlated with the most desirable project outcomes; rather, lower fidelity scores, in the 70–79% range, produced the most favourable gains on pre-post student outcomes. Moreover, application of the fidelity evaluation methodology provided insight into teachers' deviations from implementation protocols; such deviations typically reflected meaningful professional classroom judgements.
Evaluating the Early Implementation of a Community Crisis Bed Program for People Experiencing a Psychiatric Emergency Using Key Component Scaling
This article describes an evaluation of the early implementation of a community crisis bed program for people experiencing a psychiatric emergency. A Key Component Scaling approach was used (a) to provide detailed and specific data on the implementation of particular program components, (b) to facilitate a comparison of perceptions of program implementation across program partners, and (c) to increase program understanding by these various partners. Three stakeholder groups (mobile crisis team staff, bed provider staff, and community partners) provided quantitative and qualitative data that helped to identify well-implemented program components, challenges in implementation, and suggestions for improvements.
BOOK REVIEW: Richard A. Krueger and Mary Anne Casey. (2009). Focus Groups: A Practical Guide for Applied Research
BOOK REVIEW: Valéry Ridde & Christian Dagenais (Eds.). (2009). Approches et pratiques en évaluation de programme