Volume 32, 2017 - Fall

Editor's Remarks / Un mot du rédacteur

Pages: v-viii

It gives me great pleasure to introduce my last issue as Editor-in-Chief of CJPE. I feel that with this issue, I am going out with a nice splash.

Theory of Change Analysis: Building Robust Theories of Change

Pages: 155-173

Models for theories of change vary widely, as do the ways they are used. What constitutes a good or robust theory of change has received little discussion. This article sets out and discusses criteria for robust theories of change, and shows how these criteria can be used to undertake a rigorous assessment of a theory of change. A solid analysis of a theory of change can be extremely useful, both for designing an intervention or assessing its design and for designing monitoring regimes and evaluations. The article concludes with a discussion of how to carry out a theory of change analysis, along with an example.

Théories du changement : comment élaborer des modèles utiles [Theories of change: How to develop useful models]

Pages: 174-201

Although theories of change are frequently discussed in the evaluation literature and there is general agreement on what a theory of change is conceptually, there is actually little agreement beyond the big picture of just what a theory of change comprises, what it shows, how it can be represented, and how it can be used. This article outlines models for theories of change and their development that have proven quite useful for both straightforward and more complex interventions. The models are intuitive, flexible, and well-defined in terms of their components, and they link directly to rigorous models of causality. The models provide a structured framework for developing useful theories of change and analyzing the intervention they represent.

Advancing Patient Engagement in Health Service Improvement: What Can the Evaluation Community Offer?

Pages: 202-221

Despite efforts for greater patient engagement in health care quality improvement, evaluation practice in this context remains mostly conventional and noncollaborative. Following an explication of this problem, we discuss relevant theory and research on patient-centred care (PCC) and patient engagement, and then consider potential benefits of collaborative and participatory approaches to the evaluation of such initiatives. We argue that collaborative approaches to evaluation (CAE) are logically well suited to the evaluation of PCC initiatives and then suggest contributions that the evaluation community can offer to help advance patient engagement. Finally, we outline a research agenda that identifies important areas in need of further examination.

L'Ombudsman face aux défis de l'évaluation : est-il possible d'évaluer l'intangible? [The Ombudsman and the challenges of evaluation: Is it possible to evaluate the intangible?]

Pages: 222-243

Owing to its procedural mission and the specific characteristics of rights-protection organizations, evaluating an Ombudsman's performance is a complex task. This article presents a knowledge synthesis on performance measurement for Ombudsman's Offices and opens the way to new reflections that are both theoretical and practical. In addition to presenting the strengths and weaknesses of the various evaluation models identified in the literature, the study clarifies the purpose of the Ombudsman and the purposes for which it is evaluated. Finally, the article formulates proposals for an integrated assessment of the management of the Parliamentary Ombudsman.

Making Evaluation More Responsive to Policy Needs: The Case of the Labour Market Development Agreements

Pages: 244-253

This note describes how Employment and Social Development Canada evaluation staff transformed the Labour Market Development Agreement (LMDA) evaluation process to make it more timely, cost-effective, and relevant for policy development. The note provides background on the LMDAs and discusses key drivers for changing the evaluation approach. In particular, it describes the benefits of using small targeted studies, rich administrative panel data, and building in-house evaluation capacity. It concludes with some lessons learned for evaluation practice.

Using Rubrics for an Evaluation: A National Research Council Pilot

Pages: 254-265

Rubrics are commonly used in the education sector to assess performance, products, or processes of student learning. Rubrics are gaining importance in organizational performance and program evaluation practice. According to several evaluation practitioners, rubrics can make transparent how excellence and value are defined and applied to evaluation questions or indicators in a given context. This practice note summarizes a pilot project of the National Research Council Canada (NRC) using evaluative rubrics for characterizing relevance and generating conclusions in an evaluation.

Moving Beyond the Buzzword: A Framework for Teaching Culturally Responsive Approaches to Evaluation

Pages: 266-279

The terms cultural responsiveness and cultural competence have become ubiquitous in many fields of social inquiry, including evaluation. The discourse surrounding these issues in evaluation has also increased markedly in recent years, and the terms can now be found in many RFPs and government-based evaluation descriptions. We have found that novice evaluators are able to engage with culturally responsive approaches to evaluation at the conceptual level but are unable to translate theoretical constructs into practice. In this article we share a framework for teaching culturally responsive approaches to evaluation. The framework includes two domains, conceptual and methodological, each with two interconnected dimensions. The dimensions of the conceptual domain are locating self and understanding social inquiry as a cultural product. The dimensions of the methodological domain are formal and informal applications in evaluation practice. Each dimension is linked to multiple domains within the Competencies for Canadian Evaluation Practice. We discuss each dimension and suggest activities that align with it.

The Impact of Practice on Pedagogy: Reflections of Novice Evaluation Teachers

Pages: 280-287

In this practice note, two novice evaluation teachers share findings from research conducted with students enrolled in a theory and practicum course in evaluation. The study focused on understanding how, and in what ways, students navigate between the world of theory and the world of practice. The findings subsequently led to a re-envisioning of the course offerings to provide a more nuanced transition between two dichotomized conceptualizations of evaluation (theory and practice), revised syllabi, and the addition of a third course. The implications of this research (and the subsequent pedagogical revisions) raise important issues for evaluation teachers and practitioners as we continue to debate the relationship between theory and practice in evaluation.

Book Reviews / Comptes rendus de livres

Book Review: Chaplowe, S. G., & Cousins, J. B. (2016). Monitoring and evaluation training: A systematic approach. Thousand Oaks, CA: Sage. ISBN 978-1-4522-8891-8.

Pages: 288-292

Book Review: Donaldson, S. I., Christie, C. A., & Mark, M. M. (2015). Credible and actionable evidence: The foundations for rigorous and influential evaluations. Thousand Oaks, CA: Sage. ISBN 978-1-4833-0625-4.

Pages: 293-296