Guest Editor's Introduction (Thematic segment: Community programs and methodological innovations: participation, support, and empowerment)
The author draws connections across the three articles in this thematic segment and poses a series of questions on the future of the evaluation function. He concludes that an enduring role for evaluation within Canadian public administration requires the adoption of a culture change agenda within the evaluation community and, by extension, within the organizations in which we work. Central to such a change agenda is sustained movement toward increasing the professionalism of the function.
The authors contend that developments in the public and not-for-profit sectors over the last decade or so have profound implications for the profession of evaluation, implications that are not being adequately addressed. They argue that evaluation needs to transform itself if it wishes to play a significant role in the management of organizations in these sectors.
This article examines how an evaluation unit in a federal government department transformed itself from a traditional function focused on a few time-consuming and resource-intensive evaluation studies per year into a more service-oriented unit that has added results measurement to its core functions. It took this path to increase its impact on individuals, programs, and the department as a whole. Along the way it discovered a new breed of evaluator: outgoing people interested in teaching, coaching, and facilitating.
The main thesis of this article is that program evaluation and program evaluators have largely missed out on the movement, now into its second decade, to make performance measurement the centrepiece of public sector management and accountability. If these developments are not strategically faced by evaluators, program evaluation runs the risk of becoming less and less relevant to public sector and nonprofit organizations.
This article reports on the results of a national survey that describes the professional and practice profiles of program evaluators in Canada, their views of their working conditions, and their sense of belonging to the field of evaluation. The data were collected between May and July 2005 via a Web survey, and 1,005 respondents filled out questionnaires. Among them, 647 indicated that they were internal or external evaluation producers, the others being evaluation users, students, or researchers. The results raise several issues.
Typically, a good measurement strategy to support results-based management includes both ongoing performance measures and periodic evaluations. It is argued that this set of measurement tools is too limited, not infrequently resulting in ongoing performance measures that are costly and of limited use.
Simulating or imputing non-participant intervention durations using a flexible semi-parametric model
In the evaluation of labour market training programs using matching, evaluators must decide when to start comparing participant outcomes against non-participant outcomes. Measuring outcomes relative to an intervention period permits the separation of training opportunity costs from possible benefits, but an equivalent period must be defined for the comparison group. One method imputes the timing of the intervention for each comparison case from that of its matched participant. However, with propensity score matching, this may produce biased outcome estimates.
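The imputation method the abstract describes can be illustrated with a minimal sketch. This is an illustrative assumption, not the article's actual procedure: it assumes propensity scores are already estimated and uses one-nearest-neighbour matching (with replacement) on those scores, borrowing each matched participant's intervention start date for the non-participant.

```python
# Illustrative sketch (not the article's method): impute an intervention
# start date for each non-participant by borrowing the start date of its
# nearest participant match on the propensity score.

def impute_start_dates(participants, non_participant_scores):
    """participants: list of (propensity_score, start_date) tuples.
    non_participant_scores: list of propensity scores for comparison cases.
    Returns one imputed start date per non-participant, found by
    1-nearest-neighbour matching (with replacement) on the score."""
    imputed = []
    for score in non_participant_scores:
        # Nearest participant by absolute difference in propensity score
        _, start_date = min(participants, key=lambda p: abs(p[0] - score))
        imputed.append(start_date)
    return imputed

# Toy data: (propensity score, training start month) for participants
participants = [(0.8, 3), (0.6, 5), (0.3, 9)]
non_participants = [0.75, 0.35]
print(impute_start_dates(participants, non_participants))  # → [3, 9]
```

The abstract's caution applies here: because the match is driven only by the propensity score, the borrowed start date need not reflect when a comparable non-participant would plausibly have begun training, which is one route to the bias the authors discuss.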