Isabelle Bourgeois

Spring

Editor’s Remarks

This issue of the Canadian Journal of Program Evaluation (CJPE) is one of our most comprehensive to date. Not only does it include five full articles, five practice notes, and two book reviews, but it also covers a wide range of evaluation-related topics, practices, and studies. I am pleased to note that our editorial team continues to receive high-quality submissions, and I encourage you to keep thinking of the CJPE as an outlet for your work.

The articles and practice notes included in this issue focus on four recurring themes that reflect current topics in our field. First, evaluative thinking and capacity building in non-governmental organizations are the subject of articles by Rogers, Kelly, and McCoy, as well as by Lu, Elliot, and Perlman. Both articles provide insights into the facilitators of, and barriers to, evaluation capacity building as well as the multiple roles played by evaluators in fostering evaluative thinking amongst organizational staff members. Second, process evaluation appears to be of interest to many evaluators and researchers: Leblanc, Gervais, Dubeau, and Delame focus on process evaluation for mental health initiatives, while Parrott and Carman provide an example of how process evaluation can contribute to program scaling-up efforts. Chechak, Dunlop, and Holosko also focus on process evaluation and its utility in evaluating youth drop-in programs. Teachers and students of evaluation may be interested in our third theme, which focuses on student contributions to evaluation, both through peer-mentoring—as described in the practice note written by LaChenaye, Boyce, Van Draanen, and Everett—and through the CES Student Evaluation Case Competition—described in a practice note written by Sheppard, Baker, Lolic, Soni, and Courtney. And fourth, we continue to advance our methodological approaches to evaluation, and this is reflected in an article on evaluation in Indigenous contexts by Chandna, Vine, Snelling, Harris, Smylie, and Manson, as well as in an article on the use of an outcome monitoring tool for performance measurement in a clinical psychology setting by Rosval, Yamin, Jamshidi, and Aubry. Czechowski, Sylvestre, and Moreau also feature methods in their practice note on secure data handling for evaluators, a key competency that continues to evolve as our data collection and storage mechanisms adapt to new technology.

In addition to these articles and practice notes, this issue also features two book reviews that are sure to interest our readers. First, Bhawra provides an account of Developing Monitoring and Evaluation Frameworks, by Anne Markiewicz and Ian Patrick (2016), and, second, Sellick reviews Collaborative, Participatory, and Empowerment Evaluation: Stakeholder Involvement Approaches, by David Fetterman, Liliana Rodriguez-Campos, Ann Zukoski, and other contributors (2018).

On behalf of the entire editorial team, I hope that these papers stimulate discussion and reflection and support the advancement of our collective knowledge and practice. As always, if you have feedback on this issue, please contact me. I would love to hear your thoughts!

Fall

Editor's Remarks / Un mot de la rédactrice

Pages: v-vii

In reviewing all of the papers included in this issue of the CJPE, I am struck by the fact that all of them, in their own way, focus on people, organizations, and groups. This is not surprising, given that our work as evaluators requires constant contact and communication with stakeholders, clients, managers, and beneficiaries. This link to others is often what defines our practice and sets us apart from other disciplines. To start us off, Carman and Fredericks co-author a paper on social network analysis, an approach that is gaining traction in our field. They not only provide a description of how, when, and under what conditions social network analysis can be applied in an evaluation context, but they also summarize useful, practice-based examples to illustrate its potential and its challenges.

I am also pleased to introduce a thematic segment that I co-edited with Marthe Hurteau on stakeholder involvement in evaluation, following a colloquium on this topic held in 2016. As evaluators, we are constantly learning how to best involve stakeholders in our work, and the four papers included in this thematic segment provide new insights from research and practice.

L’implication des parties prenantes dans la démarche évaluative : facteurs de succès et leçons à retenir

Pages: 236-246

Our cross-cutting overview of the three papers that make up this thematic segment shows that each paper addresses the issue of stakeholder involvement quite differently. We focus here on the key messages from each of these papers in order to highlight success factors and lessons learned for stakeholder participation in evaluation. Success factors related to collaborative approaches to evaluation are also presented throughout the analysis, based on the “Principles guiding collaborative approaches to evaluation” recently published by Shulha, Whitmore, Cousins, Gilbert, and Al Hudib (2016).

Special Section: Stakeholder Involvement in Evaluation / Implication des parties prenantes en évaluation

Pages: 188

Last May, the Association francophone pour le savoir (ACFAS) hosted a colloquium on the place and role of stakeholders in evaluation practice (Montréal, May 2016). Several theorists and practitioners answered the call, and their presentations offered an opportunity to learn about their latest research and reflections, as well as to engage in a rich dialogue. Three presenters decided to follow up on that day by submitting articles, which we have grouped together in this issue of the Journal: Marie-Pier Marchand, Diane Dubeau and collaborators, and Sylvain Houle and collaborators.

More specifically, Marie-Pier Marchand revisits the theme of stakeholder involvement through an overview of the literature to date. For their part, Diane Dubeau and her collaborators document the winning conditions that foster this participation by describing two different approaches: action research and sustained, effective support. Finally, Sylvain Houle and his collaborators address the relational dimension of the evaluation process by introducing the concept of practical wisdom.

We hope that this brief presentation of the three articles piques your curiosity.

Happy reading!

Spring

Editor's Remarks / Un mot de la rédactrice

Pages: v-vii

This issue of the Canadian Journal of Program Evaluation will interest evaluators from many different sectors and with many different interests. The articles and practice notes featured in these pages focus on innovative methodological approaches applied to various practice settings, such as health and education. First, the article by Rusticus, Eva, and Peterson argues for construct-aligned rating scales as one of the evaluator’s tools, specifically in the area of medical education. The article makes an important contribution by helping us conceptualize scale development so that data can be collected as efficiently as possible. Next, Rosella and her colleagues show that a team-based knowledge brokering strategy was effective in supporting the use of the Diabetes Population Risk Tool (DPoRT) in public health settings. The following article, presented by Chen and his co-authors, summarizes the findings of an empirical comparative study of evaluation models using a large-scale education initiative in Taiwan. This article focuses specifically on the usefulness of evaluation models for planning and development purposes. The article by Contandriopoulos, Larouche, and Duhoux will be of interest to evaluators who work closely with research granting institutions or universities. Using social network analysis methods, these authors found a positive correlation between collaboration and research productivity, and they pushed their investigation further to consider the role played by formal networks in academic collaborations. The following article, by Mediell and Dionne, presents an empirically developed and validated quality-control checklist for evaluation designs. The checklist will certainly be of interest to novice and experienced evaluators alike as they design and implement future evaluation studies.

Special Issue

Strategic Evaluation Utilization in the Canadian Federal Government

Pages: 327-346

Given the potential of the federal program evaluation function to inform decision-making at the highest levels of government, this project investigated the nature and extent of the use of program evaluation findings in spending reviews and other reallocation exercises in selected government organizations. The multiple case study design used in this investigation included a qualitative content analysis of evaluation reports published between 2010 and 2013, as well as a series of key informant interviews conducted with evaluation staff and program managers. The findings show very little evidence of strategic evaluation utilization by organizational leaders. This is thought to be due to a few key factors: (a) the requirements of the 2009 Policy on Evaluation, which was in effect at the time of the study; (b) the program-level focus of the evaluations; and (c) the public nature of the evaluation reports.

Editor's Remarks / Un mot du rédacteur

Pages: v-vii

I am honoured to address you for the first time as Editor of the Canadian Journal of Program Evaluation (CJPE). As a former Book Review Editor and Associate Editor of the journal, I am tackling my new responsibilities with a solid sense of all that has been accomplished thus far and great enthusiasm for what comes next. I want to thank Robert Schwartz for his leadership during his seven-year tenure; his influence and impact on the Journal’s quality and reach will be felt for years to come. I have the privilege of being accompanied on this journey by a skilled and dedicated editorial team made up of Emily Taylor, Astrid Brousselle, Jill Chouinard, and Jane Whynot, four women with considerable combined experience in evaluation, academia, government, and consulting. Two student volunteers have recently joined our team: Hélène Lévesque and Michelle Naimi. Their contributions are already much appreciated. We are grateful for the support and advice of our continuing and new Editorial Board members, who have all made a three-year commitment to the CJPE starting this year. Thank you!

Fall

Measuring Evaluation Capacity in Ontario Public Health Units

Pages: 165-183

This article presents a study of organizational capacity to do and use evaluation, conducted in 32 public health units in the province of Ontario. Methods included an organizational self-assessment using an instrument developed by Bourgeois, Toews, Whynot, and Lamarche (2013), as well as key informant interviews. Overall, our findings indicate that evaluation capacity is still developing in Ontario public health units; factors that support evaluation capacity in these organizations include the presence of an organization-wide evaluation policy, the availability of full-time evaluation staff in a supporting role, greater staff involvement in evaluation, and a standardized evaluation process. These findings highlight the importance of organizational structures and systems to evaluation utilization and suggest potential areas of improvement for organizations wishing to strengthen their evaluation capacity.

Fall

Measuring Organizational Evaluation Capacity in the Canadian Federal Government

Pages: 1-19

The development of organizational evaluation capacity has emerged in recent years as one mechanism through which evaluators can extend their influence and foster evaluation utilization. However, organizational evaluation capacity is not always easy to define, and internal evaluators sometimes struggle to identify concrete activities that might increase their organization’s evaluation capacity. This article describes an organizational self-assessment instrument developed for Canadian federal government organizations. The instrument is presented and described, along with further details regarding its use and next steps for this area of evaluation research.

Neutral Assessment of the National Research Council Canada Evaluation Function

Pages: 85-96

Federal government departments and agencies are required to conduct a neutral assessment of their evaluation function once every five years under the Treasury Board Secretariat's Policy on Evaluation (2009). This article describes the National Research Council's experience conducting the first neutral assessment of its evaluation function. Drawing on learning from this first assessment, the article discusses best practices that NRC intends to replicate, as well as lessons learned for future assessments. It may be of interest to both federal and non-federal organizations seeking to conduct a neutral assessment in an effort to improve their evaluation services and products.