This issue of the Canadian Journal of Program Evaluation (CJPE) is one of our most comprehensive to date. Not only does it include five full articles, five practice notes, and two book reviews, but it also covers a wide range of evaluation-related topics, practices, and studies. I am pleased to note that our editorial team continues to receive high-quality submissions, and I encourage you to keep thinking of the CJPE as an outlet for your work.
The articles and practice notes included in this issue focus on four recurring themes that reflect current topics in our field. First, evaluative thinking and capacity building in non-governmental organizations is the subject of articles by Rogers, Kelly, and McCoy, as well as by Lu, Elliot, and Perlman. Both articles provide insights into the facilitators of, and barriers to, evaluation capacity building, as well as the multiple roles played by evaluators in fostering evaluative thinking amongst organizational staff members.

Second, process evaluation appears to be of interest to many evaluators and researchers: Leblanc, Gervais, Dubeau, and Delame focus on process evaluation for mental health initiatives, while Parrott and Carman provide an example of how process evaluation can contribute to program scaling-up efforts. Chechak, Dunlop, and Holosko also focus on process evaluation and its utility in evaluating youth drop-in programs.

Teachers and students of evaluation may be interested in our third theme, which focuses on student contributions to evaluation, both through peer mentoring—as described in the practice note written by LaChenaye, Boyce, Van Draanen, and Everett—and through the CES Student Evaluation Case Competition—described in a practice note written by Sheppard, Baker, Lolic, Soni, and Courtney.

And fourth, we continue to advance our methodological approaches to evaluation, and this is reflected in an article on evaluation in Indigenous contexts by Chandna, Vine, Snelling, Harris, Smylie, and Manson, as well as in an article on the use of an outcome monitoring tool for performance measurement in a clinical psychology setting by Rosval, Yamin, Jamshidi, and Aubry. Czechowski, Sylvestre, and Moreau also feature methods in their practice note on secure data handling for evaluators, a key competency that continues to evolve as our data collection and storage mechanisms adapt to new technology.
In addition to these articles and practice notes, this issue also features two book reviews that are sure to interest our readers. First, Bhawra provides an account of Developing Monitoring and Evaluation Frameworks, by Anne Markiewicz and Ian Patrick (2016), and, second, Sellick reviews Collaborative, Participatory, and Empowerment Evaluation: Stakeholder Involvement Approaches, by David Fetterman, Liliana Rodríguez-Campos, Ann Zukoski, and other contributors (2018).
On behalf of the entire editorial team, I hope that these papers stimulate discussion and reflection and support the advancement of our collective knowledge and practice. As always, if you have feedback on this issue, please contact me. I would love to hear your thoughts!
Evaluation Literacy: Perspectives of Internal Evaluators in Non-Government Organizations
While there is an abundance of literature on evaluation use, there has been little discussion regarding internal evaluators’ role in promoting evaluation use. Evaluation can be undervalued if context is not taken into consideration. Evaluation literacy is needed to make evaluation more appropriate, understandable, and accessible, particularly in non-government organizations (NGOs) where there is a growing focus on demonstrable outcomes. Evaluation literacy refers to an individual’s understanding and knowledge of evaluation and is an essential component of embedding evaluation into organizational culture. In recognition of the value of the internal perspective, a small exploratory exercise was undertaken to reveal internal evaluator roles and ways of engaging with colleagues around evaluation. The exercise examined a key question: What is the role of evaluation literacy in internal evaluation in the non-government sector? Three Australian auto-narrative examples from internal evaluators highlight evaluation literacy and locate it among the multiplicity of roles required for optimal evaluation uptake. Analysis of the narratives revealed the underlying issues affecting evaluation use in NGOs and the skills needed to motivate and enable others to access, understand, and use evaluation information. Responding to the call for expanded research into internal evaluation from a practice perspective, the authors hope that the findings will stimulate a wider conversation and further advance understanding of evaluation literacy.
Principles, Approaches, and Methods for Evaluation in Indigenous Contexts: A Grey Literature Scoping Review
This article describes findings from a scoping review of the grey literature to identify principles, approaches, methods, tools, and frameworks for conducting program evaluation in Indigenous contexts, reported from 2000 to 2015 in Canada, the United States, New Zealand, and Australia. It includes consultation with key informants to validate and enrich the interpretation of findings. The fifteen guiding principles and the approaches, methods, tools, and frameworks identified through this review may be used as a starting point for evaluators and communities to initiate discussion about how to conduct evaluation in their communities and about which approaches, methods, tools, or frameworks would be contextually appropriate.
Evaluation of the Implementation Process of the Strengths-Based Approach Based on the Theory of Diffusion of Innovation
Most policies in the area of mental health recommend the promotion of evidence-based practices. The strengths-based approach is recognized as a practice that can contribute to the recovery of people with severe mental disorders. However, several studies have raised numerous implementation challenges that limit its scope. An evaluation of the implementation process was conducted with stakeholders who received training on the strengths-based approach. The results indicate that training alone is not sufficient and that institutional commitment is required to ensure the quality of the implementation.
Perceived Facilitators and Barriers to Evaluative Thinking in a Small Development NGO
The Global Goals come with challenging implications for non-governmental organizations (NGOs) in international development and their capacity for high-quality evaluation practice and evaluative thinking. NGOs are pressured to work efficiently, be accountable to donors and beneficiaries, and demonstrate impact. They must also critically examine the underlying assumptions behind their work, or else the sustainability of their work becomes jeopardized. Using previously collected evaluation data from a small NGO in water-based development, this paper highlights perceived facilitators and barriers to evaluative thinking and where they might occur in the evaluation process for an NGO constrained by time and resources.
Perceptions of the Use of an Outcome Monitoring Tool in a Clinical Psychology Training Centre: Lessons Learned for Performance Measurement
The purpose of this study was to examine perceptions of the Outcome Questionnaire (OQ) following its implementation in a university-based psychological services training centre. Participants were doctoral-level student clinicians (n = 49), clinical supervisors (n = 17), and clients (n = 24). Data were collected through surveys, semi-structured interviews, and focus groups. Findings indicated that the majority of clinicians used the OQ to monitor outcomes and the majority of stakeholders perceived it as useful. However, the extent to which the information provided by the OQ was being used was variable. Lessons learned for the implementation of performance measurement systems within mental health services are discussed.
Community, Theory, and Guidance: Benefits and Lessons Learned in Evaluation Peer Mentoring
The majority of evaluation practitioners begin their careers in allied fields and stumble into evaluation. Consequently, university offerings and evaluation professional development sessions have become increasingly popular. As the field continues to professionalize and new mentoring programs emerge, empirical work examining teaching and training in evaluation has gained traction. However, little is known about the role that opportunities such as mentoring play in evaluation training. The purpose of this article is to explore the expected and unexpected benefits of our experiences as participants in an evaluation mentoring program, lessons learned, and the logistical and structural promoters of success in peer mentoring.
20 Years Later: Reflections on the CES Student Evaluation Case Competition
Since 1996, the Canadian Evaluation Society (CES) has held an annual case competition for college and university students. By 2016, a total of 1,132 students had participated. An online questionnaire was sent to 768 participants with available email addresses; eight additional participants entered the study after viewing an online posting. The questionnaire was completed by 112 former participants (response rate: 14%). Findings suggest that participating in the case competition was a positive experience that led to an appreciation of evaluation, increased teamwork skills, and stronger résumés. Some indicated that participating influenced their choice of evaluation as a focus for their career.
Scaling up Programs: Reflections on the Importance of Process Evaluation
For more than a decade, policy makers and funding agencies have been focused on identifying innovative and successful programs and bringing them to scale. Evaluators play an important role in these scaling efforts by helping to document what works and by monitoring program implementation. They can also monitor the replication of innovative programs as they are taken to scale. In this research and practice note, we reflect on our evaluation experiences with a public-private partnership designed to scale up a health and wellness program at ten elementary schools within a large, urban school district. In doing so, we highlight the importance of conducting a process evaluation at the beginning of the program to ensure that the program is being implemented as intended. We also describe how these early evaluation findings helped to improve the program during its second year.
Secure Data Handling: An Essential Competence for Evaluators
Since it is paramount that the rights and welfare of evaluation participants and stakeholders be respected, we argue that the abilities and knowledge necessary to appropriately safeguard data ought to be considered an essential competence for evaluators. Building from past contributions, and in consultation with research ethics and data security experts from our home institution, recommended practices in the collection, handling, and storage of evaluation data were identified. A three-dimensional framework for secure data handling was developed, considering the type of information handled, the harm posed by a potential confidentiality breach, and the corresponding steps to securing confidential information.
Evaluating Youth Drop-In Programs: The Utility of Process Evaluation Methods
In North America, neighbourhood youth centres typically offer essential community-based programs to disadvantaged and marginalized populations. In addition to providing pro-social and supportive environments, they provide a host of educational and skill-development opportunities and interventions that build self-esteem, increase positive life relationships and experiences, and address social determinants of health. However, evaluators of such centres often have to work with shifting temporal components (i.e., service users, services, programs, and outcomes) that are idiosyncratic to the mandate of the centre. Although there is an abundance of research on youth programs in general, there is a void in the literature on drop-in programs specifically, which this study aims to address. The lack of empirical research in this area inhibits knowledge about the processes of these centres. For this reason, the article concludes that process evaluation methods may be effectively used to substantiate the practice skills, knowledge, and managerial competencies of those responsible for program implementation.
Book Review: Anne Markiewicz and Ian Patrick. (2016). Developing Monitoring and Evaluation Frameworks. Thousand Oaks, CA: SAGE
Monitoring and evaluation have distinct yet complementary roles in program planning and development. As Markiewicz and Patrick state, “Monitoring generates questions to be answered in evaluation, and evaluation studies identify areas that require future monitoring” (p. 13). While there is overlap in information sources, organization, and methodology, there are key differences between monitoring and evaluation (M&E) with respect to the main stakeholders involved, purpose, timing, and scope. The complementarity between these two processes often leads to confusion about their roles: M&E are sometimes conflated, or core steps are missed entirely. When organizations undertake M&E thoughtfully, programs are able to obtain a complete picture of their performance and impact.
Book Review: David M. Fetterman, Liliana Rodríguez-Campos, Ann P. Zukoski, et al. (2018). Collaborative, Participatory, and Empowerment Evaluation: Stakeholder Involvement Approaches. New York, NY: Guilford Press.
Stakeholder involvement approaches in evaluation constitute an evolving field. In 1991, Scriven’s Evaluation Thesaurus offered definitions for “stakeholders” and “evaluation” but none for “approaches,” “stakeholder involvement,” or “collaborative, participatory, or empowerment evaluation.” In this new book, Fetterman, Rodríguez-Campos, Zukoski, et al. identify that 20% of American Evaluation Association (AEA) members belonged to the AEA’s Collaborative, Participatory, and Empowerment Topical Interest Group (CPE-TIG), from which this book emerged (pp. vii, 9). With a population of 7,300 members in over 80 countries (American Evaluation Association [AEA], 2018), this is significant.
The underlying research for the book is excellent, with 180 sources: 23% published prior to 2000 and 40% published since 2010. Curiously, only one Canadian Journal of Program Evaluation (CJPE) article reference appears. Does this mean that stakeholder participation approaches have been under-addressed in articles published in this journal? Certainly, Canadian theory and practice are well represented in the body of work by Canadian evaluators such as Chouinard, Cousins, Love, and Shulha. Recent evidence of the importance of stakeholder involvement approaches to Canadian evaluators was reflected in the “co-creation” theme of the 2018 Canadian Evaluation Society (CES) conference; 40 presentation titles in the conference program (Canadian Evaluation Society, 2018) reference stakeholder participation, collaboration, and/or empowerment.