Measuring Organizational Evaluation Capacity in the Canadian Federal Government
The development of organizational evaluation capacity has emerged in recent years as one mechanism through which evaluators can extend their influence and foster evaluation utilization. However, organizational evaluation capacity is not always easy to define, and internal evaluators sometimes struggle with the identification of concrete activities that might increase their organization's evaluation capacity. This article describes an organizational self-assessment instrument developed for Canadian federal government organizations. The instrument is presented and described, and further details regarding its use and next steps for this area of evaluation research are also provided.
To Case Study or Not to Case Study: Our Experience with the Canadian Government's Evaluation Practices and the Use of Case Studies as an Evaluation Methodology for First Nations Programs
Canadian policy decision-making has utilized case studies extensively in recent years. Johnston Research Inc. (JRI) has completed more evaluation-related case studies over the past 4 years than in the previous 15 years of our evaluation work. To understand the growing application of case studies, we interviewed clients and contacts from First Nations that had been case study sites for our government clients, to learn what aspects of case study evaluation research had helped them share their opinions and improve their programs, and what aspects had not. We then interviewed our government clients, asking how well case studies served their evaluation purposes and their programs or policy development efforts. JRI conducted and financed this study to help us improve our own approaches for conducting case studies in Aboriginal populations and to share these findings with others. This article presents our interview findings on the value of case studies for Aboriginal evaluation projects and shares some best practices for conducting case studies within, and with, First Nations. Finally, we explore the impact case studies have had on Canadian policy.
Outsource Versus In-House? An Identification of Organizational Conditions Influencing the Choice for Internal or External Evaluators
An evaluation can be conducted in-house or can be outsourced to an external party. Yet organizations do not always have full discretion to decide on the locus for evaluation implementation. Certain attributes often push the organization in one direction or another. Via a systematic pairwise comparison of attributes of 18 organizations in the Flemish (Belgian) public sector, we were able to identify the conditions that matter most in determining the locus of policy evaluation implementation. Our findings can thus enrich existing guidelines on the advantages and disadvantages of internal and external evaluations.
Informed Consent of 16- to 18-Year-Old Participants in Evaluations
Ethics policies require parental consent for "children" less than 18 years old. This article examines whether parental consent should be required for youth aged 16 to 18 years. It examines the current position of youth vis-à-vis services, informed-consent requirements, the quality of parental consent, and youths' legal and developmental capacity to consent. It concludes that youth have the capacity to consent. Recommendations are made to revise the parental-consent policy and to address the impacts of stress, emotion, and inexperience on youths' decision making at all stages of the evaluation process.
Neutral Assessment of the National Research Council Canada Evaluation Function
Federal government departments and agencies are required to conduct a neutral assessment of their evaluation function once every five years under the Treasury Board Secretariat's Policy on Evaluation (2009). This article describes the National Research Council's experience conducting the first neutral assessment of its evaluation function. Based on learning from this first assessment, best practices that NRC intends to replicate, as well as lessons learned for future assessments, are discussed. This article may be of interest to both federal and non-federal organizations seeking to conduct a neutral assessment in an effort to improve their evaluation services and products.
Exemple d'application de l'évaluation formative centrée sur l'utilisation des résultats
The use of results produced through an evaluation process is a real concern for evaluators, especially when the process has a formative purpose. Utilization-focused evaluation holds that several conditions must be met to optimize the use of knowledge generated during the evaluation process. This article describes this approach as well as the conditions of knowledge transfer that optimize the use of evaluation results by users. It provides a concrete example of an applied utilization-focused evaluation and highlights the success factors.
The Five Cs for Innovating in Evaluation Capacity Building: Lessons from the Field
Innovation is essential in addressing complex evaluation capacity building (ECB) efforts that include a host of interacting, nonlinear, adaptive, and dynamic individual and organizational level factors. This article highlights five key ingredients in fostering innovation in ECB, based on evaluation capacity building efforts of the Ontario Centre of Excellence for Child and Youth Mental Health. For the past 5 years, 87 organizations have participated in an integrated ECB program combining funding, training, and coaching support. The five key ingredients to fostering innovation in ECB are curiosity, courage, communication, commitment, and connection.
BOOK REVIEWS: Barbier, J. C., & Hawkins, P. (Eds.). (2012). Evaluation Cultures: Sense-making in Complex Times. New Brunswick, NJ: Transaction Publishers. 256 pages. Available in hardcover (ISBN 978-1-4128-4942-5) and e-book.
BOOK REVIEWS: Aldrick, J. O., & Rodriguez, H. M. (2013). Building SPSS Graphs to Understand Data. Thousand Oaks: Sage. 371 pages. Available in paperback (ISBN 978-1-4522-1684-3).
BOOK REVIEWS: King, J. A., & Stevahn, L. (2013). Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation. Los Angeles: Sage. 431 pages. Available in paperback (ISBN 978-0-7619-2673-3).