Volume 33, 2018 - Spring

Editor's Remarks / Un mot de la rédactrice

Pages: v-vii

This issue of The Canadian Journal of Program Evaluation will interest evaluators from many different sectors and with many different interests. The articles and practice notes featured in these pages focus on innovative methodological approaches applied to various practice settings, such as health and education. First, the article by Rusticus, Eva, and Peterson argues for construct-aligned rating scales as one of the evaluator’s tools, specifically in the area of medical education. The article makes an important contribution by helping us to conceptualize scale development so as to collect data most efficiently. Next, Rosella and her colleagues show that a team-based knowledge brokering strategy was effective in supporting the use of the Diabetes Population Risk Tool (DPoRT) in public health settings. The following article, presented by Chen and his co-authors, summarizes the findings of an empirical comparative study of evaluation models using a large-scale education initiative in Taiwan; it focuses specifically on the usefulness of evaluation models for planning and development purposes. The article by Contandriopoulos, Larouche, and Duhoux will be of interest to evaluators who work closely with research granting institutions or universities. Using social network analysis methods, these authors found a positive correlation between collaborations and research productivity, and they pushed their investigation further to consider the role played by formal networks in academic collaborations. The following article, by Mediell and Dionne, presents an evaluation design quality control checklist, developed and validated empirically. The checklist will certainly be of interest to novice and experienced evaluators alike as they design and implement future evaluation studies.

Construct-Aligned Rating Scales Improve the Reliability of Program Evaluation Data

Pages: 1-20

In workplace-based assessment, research has suggested that aligning rating scales with how clinical supervisors naturally conceptualize trainee performance improves reliability and makes assessment more efficient. This study examined the generalizability of those findings for program evaluation by determining whether construct alignment improves the reliability with which competencies are ranked as having been achieved in a medical education program. The results extend previous research into the benefits of construct-aligned scales by suggesting that aggregating students’ judgments of their abilities can be used to evaluate the relative successes of a program more efficiently when the scales are aligned with the constructs of independence and sophistication rather than phrased in terms of students’ performance expectations.

Evaluating the Process and Outcomes of a Knowledge Translation Approach to Supporting Use of the Diabetes Population Risk Tool (DPoRT) in Public Health Practice

Pages: 21-48

To support the use of the Diabetes Population Risk Tool (DPoRT) in public health settings, a knowledge brokering (KB) team used and evaluated the Population Health Planning Knowledge-to-Action model. Participants (n = 24) were from four health-related organizations. Data sources included document reviews, surveys, focus groups, interviews, and observational notes. Site-specific data were analyzed and then triangulated across sites using an evaluation matrix. The KB team facilitated DPoRT use through planned and iterative strategies. Outcomes included changes in skill, knowledge, and organizational practices. The Population Health Planning Knowledge-to-Action model and team-based KB strategy supported DPoRT use in public health settings.

Using Logic Models and the Action Model / Change Model Schema in Planning the Learning Community Program: A Comparative Case Study

Pages: 49-68

The evaluation community has shown growing interest in expanding its focus from program implementation and outcomes to program design and planning. One important step in this direction is to examine existing evaluation models and to assess their relative strengths and weaknesses for planning purposes. This article presents a comparative case study of applying logic models and the action model/change model schema for planning the Learning Community Program in Taiwan. Lessons learned from these applications indicate that logic models are relatively easy to learn and effective for identifying major program components and indicators, but not sufficient for articulating the theoretical significance of the program. The action model/change model schema, on the other hand, requires more time to learn and practise, but it offers clear advantages over logic models in providing theoretical insight into the program's contextual factors and causal mechanisms. This comparison can serve as a guide for evaluation practitioners when selecting evaluation tools to apply in planning and/or evaluating their programs.

Evaluating Academic Research Networks

Pages: 69-89

Funding agencies and universities are increasingly searching for effective ways to support and strengthen a dynamic and competitive scientific research capacity. Many of their funding policies are based on the hypothesis that increased collaboration and networking between researchers and between institutions lead to improved scientific productivity. Although many studies have found positive correlations between academic collaborations and research performance, it is less clear how formal institutional networks contribute to this effect. Using social network analysis (SNA) methods, we highlight the distinction between what we define as “formal” institutional research networks and “organic” researcher networks. We also analyze the association between researchers’ actual structural position in such networks and their scientific performance. The data used come from curriculum vitae information of 125 researchers in two provincially funded research networks in Quebec, Canada. Our findings confirm a positive correlation between collaborations and research productivity. We also demonstrate that collaborations within the formal networks in our study constitute a relatively small component of the underlying organic network of collaborations. These findings contribute to the literature on evaluating policies and programs that pertain to institutional research networks and should stimulate research on the capacity of such networks to foster research productivity.
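As a purely illustrative sketch of the kind of analysis this abstract describes (not the authors’ data or code), a researcher's structural position in a collaboration network can be quantified with standard social network analysis tooling and then correlated with a productivity measure. The networkx and scipy calls, the toy co-authorship edge list, and the publication counts below are all assumptions made for the example.

# Hypothetical sketch: correlate researchers' network centrality with productivity.
# The edge list and publication counts are toy data, not the study's data.
import networkx as nx
from scipy.stats import pearsonr

# Toy collaboration network: an edge means two researchers have collaborated.
collaborations = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
publications = {"A": 12, "B": 9, "C": 15, "D": 6, "E": 3}  # productivity proxy

G = nx.Graph(collaborations)

# Structural position measured here as degree centrality
# (share of other researchers a given researcher is connected to).
centrality = nx.degree_centrality(G)

# Association between structural position and productivity.
researchers = sorted(G.nodes())
r, p = pearsonr([centrality[x] for x in researchers],
                [publications[x] for x in researchers])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

In the study itself, the collaboration data come from the curriculum vitae of 125 researchers in two provincially funded networks rather than from a toy edge list, but the logic, deriving a positional measure per researcher and relating it to a performance measure, is of this form.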

Un outil à visée pédagogique pour discuter de méthodologie

Pages: 90-113

In this article, we discuss the importance of communicating the evaluation approach (process, methodology, results, and limits) to promote the use of results and the implementation of recommendations. We present an education-focused, meta-evaluative training tool based on the methodological aspects of the evaluation process and designed to support evaluators, particularly novice evaluators, in the rigorous planning, implementation, and communication of methodology. We focus on communicating the program evaluation approach through evaluation reports (technical and final), which are usually the means offering the most information on both the results and the evaluation process. We recognize that there are other means of communication (e.g., journal articles), but their format does not always allow all the information relevant to the results and to the approach chosen by the evaluators to be conveyed, since such publications usually present only highlights rather than methodological details. Because we focus on methodological considerations, it seemed most relevant to draw on the information included in reports. This is not a trivial choice, as an evaluation's quality depends, in part, on a recognized methodology able to provide the solid evidence needed to exercise judgment. We also discuss the role of meta-evaluation (MEV) in enriching evaluation practice and promoting the quality of program evaluation. Finally, we present a meta-evaluative tool we designed to encourage the operationalization of quality evaluative methodologies and the implementation of effective evaluative practices.

Expanding the Role of Digital Photographs in Evaluation Practice: Documenting, Sense-Making, and Imagining

Pages: 114-134

Program stakeholders and evaluators routinely generate and share digital photographs. Three frameworks for using photographs in evaluation practice are discussed: documenting social change, facilitating sense-making, and inspiring and imagining social change. These are rooted in scholarship from arts-informed inquiry and from visual sociology and anthropology. Using these frameworks, a review of the existing literature demonstrates an extensive use of photographs for documentation and a growing use of photographs for sense-making and for inspiring and imagining social change in evaluation practice. The paper concludes with a case example of how an evaluation team used digital photographs in an evaluation of a teacher professional development program.

NorthBEAT’s Capacity-to-Consent Protocol for Obtaining Informed Consent from Youth Evaluation Participants: An Alternative to Parental Consent

Pages: 135-153

Ethical practice compels evaluators to obtain informed consent from evaluation participants. When those participants are minors, parental consent is routinely sought. However, seeking parental consent may not be appropriate in all evaluation contexts. This practice note presents one context (mental health services research in rural Canada) where seeking parental consent for youths’ participation in research was considered unethical and unfeasible. We present a two-step “capacity-to-consent” protocol that we developed to obtain consent from youth participants. This protocol offers an ethical and feasible alternative to seeking parental consent for youth. The implications for evaluation practice are discussed.

Building Community Capacity: Self-Assessment Performance Metrics for Canadian Microcredit Programs

Pages: 154-167

Microcredit programs operating in high-income countries with well-developed banking systems present unique challenges for performance assessment that are addressed by neither professional microfinance institution evaluation systems nor social performance indicators. The potential contributions of microcredit programs designed to supply small loans for business incubation and development in Canada’s inner cities may extend beyond supplementing individual income to include community capacity-building outputs and impacts such as social capital development, business skills development, and the promotion of financial inclusion. This article shares our recommendation for a set of performance metrics that accounts for these additional contributions. Developed in partnership with a small inner-city program, these performance metrics are suitable for use by small community-based microcredit programs staffed largely by volunteers.

Book Review: Colin Robson. (2017). Small-Scale Evaluation: Principles and Practice. 2nd ed. London, UK: SAGE.

Pages: 168-170

Book Reviews / Comptes rendus de livres
Colin Robson. (2017). Small-Scale Evaluation: Principles and Practice. 2nd ed. London, UK: SAGE. 168 pages. (ISBN: 978-0-7619-5510-8)

Book Review: Hallie Preskill and Darlene Russ-Eft. (2016). Building Evaluation Capacity: Activities for Teaching and Training. 2nd ed. Thousand Oaks, CA: SAGE.

Pages: 171-173

Book Reviews / Comptes rendus de livres
Hallie Preskill and Darlene Russ-Eft. (2016). Building Evaluation Capacity: Activities for Teaching and Training. 2nd ed. Thousand Oaks, CA: SAGE. 418 pages. (ISBN: 978-1-4833-3432-5)