Addressing the challenges encountered during a developmental evaluation: Implications for evaluation practice

This article describes three challenges encountered during a developmental evaluation and explains how these were addressed from the evaluators' perspective. The evaluation was conducted to support the implementation of a three-year educational technology leadership project funded by the Alberta provincial government. The developmental evaluation responded to two purposes identified by the evaluation client: informing ongoing programmatic decisions and measuring change in practice. The implications for evaluation practice related to the challenges of introducing a new evaluation approach, defining the boundaries of evaluator roles, and integrating technological resources are discussed and related to the Canadian Evaluator Competencies (Canadian Evaluation Society, 2010) and the Program Evaluation Standards (Yarbrough, Shulha, Hopson, & Caruthers, 2011).

Development of a Classification System for Patients Referred to a Rehabilitation Program for Visual Impairment: A Method for Analysis and Budgetary Control

Program evaluation assesses programs from various perspectives. In health care, evaluation generally focuses on the relationship between the care process and clinical results. Our study adds a cost dimension to that relationship, introducing a method for evaluating visual impairment rehabilitation programs that integrates full operating costs. Starting from patients' functional level on admission to a clinical program, an experimental approach was used to divide them into five homogeneous groups according to their consumption of financial resources during the care process. This method could be used to measure and evaluate the financial performance of any rehabilitation program and help improve budgetary control. With the added dimension of costs per client profile, it could provide a framework for other areas requiring program evaluation.

Evaluation-capacity building: The three sides of the coin

I share my experience as a Canadian evaluator who is starting out in the field and new to the process of evaluation capacity building. I draw on three metaphors to describe what I believe are abilities of an effective evaluator. The first is an ability to be like playdough—to mould to external requirements. The second is an ability to be like a spider—to build webs or networks based on an understanding of the global context of the intervention. The third is an ability to be like Buddha—to cultivate a Zen-like attitude during stormy times. I illustrate each metaphor with some of my own evaluation experiences. I also discuss the possible disadvantages of each of these abilities.

BOOK REVIEWS: J. Fitzpatrick, C. Christie, & M. M. Mark. (2009). Evaluation in Action: Interviews with Expert Evaluators. Thousand Oaks, CA: Sage. xiv + 456 pages.


BOOK REVIEWS: Stake, R. E. (2010). Qualitative Research: Studying How Things Work. New York, NY: Guilford Press. 244 pages.


Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice

Good theory development is grounded in empirical inquiry. In the context of educational evaluation, the development of empirically grounded theory has important benefits for the field and the practitioner. In particular, a shift to empirically derived theory will assist in advancing more systematic and contextually relevant evaluation practice, as well as lead to the development of contingency theories that specify the conditions under which particular evaluation practices are optimal. In contrast to the historical outside-in stance, empirical research on the field of evaluation must acknowledge practitioners as "knowers," allowing for unique insights into the intersection of theory and practice.

Obstacles to the implementation of a controlled drinking program in the organizational context of addiction services

Implementation is a primary step in disseminating innovation because, without successful implementation, even the most effective program cannot show results. Many barriers were reported in the implementation evaluation of Alcochoix+, a Quebec program for managing alcohol consumption intended for non-dependent excessive drinkers. In addition to barriers already identified in the literature, such as lack of advertising, staff turnover, and financial resource problems, the results of this study point out a more specific barrier: a lack of understanding of the program's underlying approach. Moreover, the relative importance attributed to the barriers seems to change over time. These results illustrate the importance of conducting implementation studies for new programs.