When a republic opens itself to the scrutiny of citizen-evaluators
In 1995, the Canton of Geneva inaugurated an External Public Policy Evaluation Commission, a body unique in Switzerland. Both powers, Parliament and Government, share this independent body, which evaluates the effectiveness and relevance of public intervention. The commission, composed of 16 citizen-evaluators, chooses its own subjects and focus and can contribute unexpected critical elements. In the 11 years since its establishment, it has published 19 evaluation reports, which describe effects on target groups and beneficiaries. The themes covered are important and diverse: housing, social security benefits, unemployment, tax systems, public transport, and the like. The Government has followed the recommendations fairly closely.
Developing a tool to measure knowledge exchange outcomes
This article describes the process of developing measures to assess knowledge exchange outcomes, using the dissemination of a best practices in type 2 diabetes document as a specific example. A best practices model consists of knowledge synthesis, knowledge exchange (dissemination/adoption), and evaluation stages. Best practices are required at each stage. An extensive literature review found no previous knowledge syntheses of concrete tools and models for evaluating dissemination or exchange strategies. This project developed a practical and usable tool to measure the reach and uptake of disseminated innovations. The instrument itself creates an opportunity for knowledge exchange between producers and adopters. At this point the tool has a strong theoretical basis. Initial pilot-testing has begun; however, the accumulation of evidence of validity and reliability is only in the planning stages. The instrument described here can be adapted to other areas of population health and evaluation research.
Conceptualizing research impact: the case of education research
This qualitative study aims at conceptualizing research impact generally by studying the specific case of research impact in the field of education. An analysis process akin to grounded theory was applied to the analysis of sections of reports provided by educational researchers. Literature on the subject of research impact was used to substantiate and complete the portrait of educational research impact that emerged from the data. The resulting conceptual framework proposes five interdependent stages, each one characteristic of certain categories of research impact that are typically interrelated in time and in terms of researcher control. It is hoped that this conceptual framework will help program evaluators and researchers tackle the larger task of uncovering and arguing the meaningfulness of alternative ways of measuring the impacts of research in the social sciences and humanities.
A participatory approach to the development of an evaluation framework: process, pitfalls, and payoffs
Much literature exists on participatory approaches to developing and implementing program evaluation. Little is documented, however, about participatory approaches to developing an evaluation framework. This article reports a case study of implementation of a participatory evaluation approach and examines the results in light of participatory evaluation theory. A participatory approach was used to develop a provincial evaluation framework for a unique, collaborative community/provincial/federal funding program for community-based HIV/AIDS service organizations in Alberta, Canada. The participatory process resulted in significant capacity building, mutual learning, and relationship development, as well as a comprehensive and user-friendly provincial evaluation framework. The purpose of this article is to share our process, the pitfalls, and the payoffs to our participatory approach in developing an evaluation framework.
Youth voices: evaluation of participatory action research
When conducted with sensitivity and reflexivity, participatory action research (PAR) can be an empowering process that is particularly relevant for engaging young people in reflection and dialogue for social change. As the theory and practice of PAR evolve, researchers have evaluated the experiences of community participants, using both qualitative and quantitative approaches. However, only a limited number of evaluations have focused on PAR processes undertaken with youth, and few published papers have reported on involving youth in the evaluation. This article addresses the process of enabling youth to participate to their fullest ability in an evaluation of a PAR project called Youth Voices. The analysis draws on feedback questionnaires from community evaluators, minutes and notes from team meetings, and the researchers' experiences and observations. The authors reflect on lessons learned that can be helpful to others considering participatory evaluation research with youth. The study revealed limitations in employing participatory evaluation with at-risk youth, including challenges posed by their stage of psychosocial development and by the difficulty of maintaining participants' engagement throughout the evaluation process. These lessons shed light on key tensions in using participatory evaluation and challenge the implicit assumption that a higher level of participation is necessarily better when working with youth. A central question is posed: What level of participation is optimal to ensure authentic community decision-making in a PAR project without overwhelming youth participants?
Theoretical and methodological considerations in the evaluation of crisis intervention programs
Intensive crisis programs have been the object of numerous evaluations and evaluative inquiries. Nevertheless, experts and researchers are unable to make clear statements about the effectiveness of these programs in helping families and children in crisis. Methodological bias and confusion in the definition and implementation of these programs may have contributed to the issue. This article identifies the main stakes in the evaluation of these programs and suggests solutions such as the early involvement of the evaluator in the program implementation process and new definitions of objectives and success criteria.
Navigating uncharted waters: project monitoring at CIDA
The Canadian International Development Agency (CIDA) has employed external monitors for many years to assist in measuring the performance of its projects. At first, this role was one of surveillance, with monitors expected to keep a distance from the implementing organizations. Today, in keeping with international trends in monitoring and evaluation, the monitoring role is, in theory, more participatory and improvement-oriented, requiring of monitors a different set of knowledge, skills, and attitudes. The role is nevertheless poorly defined, open to individual interpretation and made even more challenging when the monitoring involves measuring performance in relation to cross-cutting themes such as gender equality. This article presents many of the challenges inherent in monitoring and makes a case for a participatory approach aimed at learning and at program improvement. The authors call upon the evaluation community to undertake research and scholarly discourse in this area to guide successful practice.
What an eight-year-old can teach us about logic modelling and mainstreaming
This article presents a short case narrative, the purpose of which is to illustrate that complex evaluation methodologies such as logic modelling can be simplified to the point where a child can be guided through the process quickly. However, the case narrative also serves to highlight the potential consequences for program development and evaluation activities when the process is oversimplified. Like a double-edged sword, simplifying the process encourages more organizations to use a logic model to develop and evaluate programs, but, in hindsight, the simplicity may lead to program architectures that have little chance of demonstrating success or to evaluations that may be off the mark.
Value-for-money analysis of active labour market programs
Accountability requirements by central agencies in government have imposed expectations on management to show results for resources used — in other words, "value for money." While demonstrating value for money means showing that the program has relevance and a rationale and that the program logic and theory make sense, the core of value for money lies in showing that a program is cost-effective. Unfortunately, many public programs and policies do not provide quantifiable outcomes, and this limits conclusions on value for money. However, labour market training programs are amenable to cost-effectiveness analysis (CEA), provided that the evaluation methodology meets certain conditions. This article reviews CEA in the context of labour market training, especially programs directed to economically disadvantaged groups. After reviewing the data availability and the analytical methods commonly used to support value-for-money analysis of training programs, the authors present several practice improvements that would increase the "value" and validity of value-for-money analysis.