Instructions to Authors

SUBMISSION GUIDELINES AND EVALUATION CRITERIA

Articles: Submissions of up to 7,000 words on evaluation theory and practice, including innovative methodological approaches, standards of practice, and strategies to enhance the implementation, reporting, and use of evaluations. Articles reporting original empirical research on evaluation are of particular interest. Submitted manuscripts will be evaluated through double-blind peer review in relation to:

Editor’s Remarks

The entire CJPE editorial team is pleased to note the quality and calibre of the submissions that we continue to receive from the evaluation community, and the papers included in this issue illustrate this quite well. The first four articles published in this issue focus on current topics that will be of interest to both researchers and practitioners, such as evaluation capacity building (LaMarre, D’Avernas, Riley, Raffoul, & Jain), modeling program outcomes (LaVelle & Dighe), program complexity and theory of change (Douthwaite, Ahmad, & Shah), and trends in the CES credentialing program (Lawson, Hunter, & McDavid). A fifth article, written by Dussault and Duquet, focuses on the lessons learned through an evaluation conducted in Quebec schools. Our four research and practice notes are also sure to generate thoughtful discussion in our community: Roy and Searle move our thinking about developmental evaluation further by discussing scope creep; McDavid, Shepherd, and Morin share their experience with inter-university collaboration in the area of evaluation education; Lahey and his collaborators present us with a description of how evaluation is conducted in provincial and territorial governments across Canada; and Gauthier discusses the professionalization of evaluation practice and its dimensions. Finally, we are pleased to present two book reviews, one by Gómez-Ramírez and one by Sellick, that provide us with a good sense of what new resources are available to us and how they might inform our practice and thinking.
One quick correction, my previous Editor’s Remarks misspelled the name of one of the special issue editors. My thanks to Katherine Graham and Rob Shepherd for their important contribution to the CJPE, and apologies to Katherine for the spelling oversight.
As always, please share your thoughts with us, and continue to submit your work to the CJPE. We have made great strides in reducing our publication wait times, thanks to the support of CES and the Social Sciences and Humanities Research Council. We look forward to reading your papers soon! 
Isabelle Bourgeois

The Canadian Journal of Program Evaluation 35.1 Spring 2020


Putting Theory of Change into Use in Complex Settings

This paper argues that theory of change can be used to help stakeholders
in agricultural research for development projects collectively agree on problems and
visions of success. This helps them feel greater ownership for their project, motivation
to achieve outcomes, and understanding of how to do so. However, the dynamic is
damaged if projects are pushed to be too specific too early about the outcomes for
which they are to be held accountable. This is most likely to happen when system
response to project intervention is uncertain, as opposed to projects that work with
existing pathways and partnerships where the role of research is well established.


Évaluation des effets dans le domaine de l’éducation à la sexualité au primaire : Exemple tiré d’une évaluation d’un programme de prévention de la sexualisation précoce

In Quebec, sexual education content was made mandatory for students
from preschool to Secondary V in 2018. Sexual education programs whose efficiency
has been evaluated should be implemented in the context of informed health promotion
practices. However, few of these programs have actually been evaluated, with
published results, particularly within an elementary school framework. This article
reflects on the evaluation of elementary sexual education programs, based on a case
relating to the evaluation of an early sexualization prevention program.



Une analyse engagée de la professionnalisation des pratiques d’évaluation

Despite a great deal of discussion about the notion of professionalizing
evaluation practice around the world, many associated concepts are not clearly
defined. This practice note provides operational definitions of the concepts of profession,
professionalism, professional, and professionalization, and presents a methodology
designed to reflect on the ins and outs of a national process of professionalization.
In conclusion, the author analyzes the prospects for the professionalization of the
practice of evaluation at an international level.



Evaluation in the Provinces and Territories: A Cross-Canada Snapshot and Call to Action

Evidence-based decision-making and managing for results are terms often heard from politicians and senior government officials at both federal and provincial levels of government in Canada. But, while there is some level of understanding at the federal level in terms of the role and use of evaluation in measuring results, there is significantly less information readily available about the extent to which evaluation is being used at other levels of government. This paper provides a cross-Canada synopsis of the capacity and use of systematic evaluation at the provincial and territorial levels of government. Authors from nine provinces and two territories provide a succinct analysis of the extent to which evaluation is being used in their provincial/territorial government, as well as a description of the challenges and opportunities that lie ahead for evaluation. There is a paucity of published information on this subject, but the paper uses research conducted in 2001 as a benchmark to compare the state of affairs for evaluation within provincial/territorial governments. With limited progress over the past two decades, the paper offers an overview of findings and some proposed actions for the way ahead.


A Rapid Review of Evaluation Capacity-Building Strategies for Chronic Disease Prevention


To date, there appears to be no review of the literature specifically exploring evaluation capacity building (ECB) for chronic disease prevention (CDP). To guide efforts to build evaluation capacity for CDP, a rapid review of the literature was undertaken using systematic methods. A search was conducted of the grey and academic literature to explore ECB strategies in CDP, and 14 articles were retained. CDP ECB strategies were similar to general public health ECB efforts (multi-strategy, context-specific, experiential). Articles included a focus on how to maintain ECB over long periods and in light of staff turnover, both of which were described as being prevalent in the CDP context. Evaluating influence at multiple levels (individual, organizational, system) is also important. There is room for more clarity about the “how” of ECB strategies, and about specificity to CDP.

Keywords: chronic disease prevention, evaluation capacity, evaluation capacity building.



A Transdisciplinary Model of Program Outcomes for Enhanced Evaluation Practice

Evaluation is a transdiscipline with a focus on asking and answering
important questions about programs, policies, and interventions. A unified taxonomy
of evaluation-specific program outcomes would be a helpful tool to evaluators,
program designers, implementers, and policymakers, but one has not yet been
proposed and validated in an evaluation context. This study builds from a grounded
analysis of 125 programs and over 850 individual program outcomes, from which
was developed a taxonomy of nine outcomes specific to evaluating programs and
interventions: attitude, affect, behavior, cognition, status, relationship, biological,
environmental, and economic. The article defines key terms, directionality, and flexible
timeframes for measurement, and suggests specific fields that provide a helpful
lens for understanding programs and improving evaluation practice.



Predicting Credentialed Evaluator Status: Characteristics, Comparisons, and Implications for the CE Program

The CES membership database was analyzed in order to determine the
socio-demographic profile that best defines Credentialed Evaluators (CEs). In general,
those currently holding CE status tend to be employed in the private sector, have a PhD-level education, have long-term experience in evaluation, have a major focus
on evaluation within their professional activities, and be located within the Atlantic region, Western Canada, or the National Capital Chapter region. Implications of
these trends for the sustainability of the CE designation program and options for
broadening the scope of uptake within the CES membership are discussed.



Reflections on Inter-University Collaboration to Deliver a Graduate Certificate in Evaluation to Government of the Northwest Territories Employees

This practice note describes and reflects on an inter-university collaboration
to deliver a graduate credential in evaluation to Government of the Northwest
Territories (GNWT) staff. The collaboration involved faculty members from two
member institutions of the Consortium of Universities for Evaluation Education
(CUEE). We describe the process of negotiating a contractual agreement, developing
and delivering the program, and offer our reflections and lessons learned. Overall,
this collaboration was worthwhile, but it is challenging to carry out in an environment where
universities are focused on their own academic missions and programs.



Scope Creep and Purposeful Pivots in Developmental Evaluation

This practice note illustrates a situation where, as program evaluators,
we crept beyond the provisional boundaries set by our Developmental Evaluation
(DE) goals to facilitate learning. Our DE was initially focused on one program which
was being designed for online delivery in higher education. During the DE of this
program, questions and themes arose which had larger organizational applicability;
we were asked to help design a strategic learning session that addressed a large group
of stakeholders within which existed the tiny subset of stakeholders engaged in the
original DE. This practice note describes how we negotiated the emergent purposes
of the DE with the need for intentional pivots within the strategic learning session
to serve our intended subset of stakeholders with their project as well as stimulate
evaluative thinking within the larger stakeholder group.


Book Review: Designing Quality Survey Questions.

In this textbook, Robinson and Leonard take the reader on a deep dive into
current research and knowledge about best practice, as well as Likert’s foundational
work on scales, to correct common misconceptions about survey design. The
book is organized into three sections comprising eight chapters and supported by
a glossary, an appendix, and a comprehensive list of references.



Book Review: Evaluation Failures: 22 Tales of Mistakes Made and Lessons Learned.

Evaluation Failures is an edited volume detailing the tribulations and hard-earned
lessons of 23 evaluation professionals. Kylie Hutchinson has compiled a compelling
page-turner whose self-reflective spirit can be summarized as an antidote to
full-blown failure.
Divided into eight parts, the book takes the reader along a typical evaluation
cycle, from managing the evaluation and defining stakeholders to reporting useful