BOOK REVIEWS: D. Russ-Eft & H. Preskill. (2009). Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance and Change (2nd ed.). New York, NY: Basic Books. 552 pages.


Gauging Alignments: An Ethnographically Informed Method for Process Evaluation in a Community-based Intervention

Community-based projects feature multidimensional interventions and interactions within unpredictable contexts. Process evaluations can shed light on variability in outcomes across sites and the reasons why some project outcomes fall short of expectations. The authors present an ethnographically informed study of the interactive project components in a pilot community-based falls prevention project implemented in four communities across Canada. Ethnographic descriptions and analyses of alignments between multilevelled project components allowed the researchers to better understand the mechanisms of project evolution at each site and variations in project momentum, mobilization, and sustainability across sites. Primary data sources consisted of project teleconference transcripts triangulated with log notes, field notes, and interviews. Descriptions and analyses of alignments may be instrumental to process evaluation: project adjustments can then be made to propel progress toward program objectives, inform program decisions, and make sense of variability in program outcomes. Further exploration and operationalization of the alignment concept is recommended to advance knowledge about how to conduct process evaluations of complex interventions.

Contribution Analysis Applied: Reflections on Scope and Methodology

This article investigates contribution analysis, an analytical tool developed by John Mayne, from both a theoretical and an operational perspective. Despite the substantial attention that contribution analysis has received, few studies appear to have applied it in practice. The article discusses the broadened scope of contribution analysis by analyzing its theoretical and methodological tenets, and examines its practical applicability in relation to two evaluations. The authors find that contribution analysis has much to offer the current theory-based evaluation landscape, but that further elaboration of its theoretical and methodological framework is also needed.

Pre-Measurement Triangulation: Considerations for Program Evaluation in Human Service Enterprises

The concept of incorporating multiple perspectives in measurement is a foundation of program evaluation in human service enterprises, but it can pose significant challenges to the feasibility and interpretation of projects. This article reviews triangulation methodologies and proposes a new approach to triangulation. It argues that, to address some of the burdens associated with triangulation in human services program evaluation, the simple notion of pre-measurement triangulation through the use of communimetric measurement theory may offer an effective option. An example of such a tool is provided, along with a discussion of the utility and limitations of these strategies.

Jugement crédible en évaluation de programme: définition et conditions requises [Credible Judgment in Program Evaluation: Definition and Required Conditions]

Program evaluation aims to establish the quality of a program in order to generate a judgment that is valid and credible to the stakeholders. The present research aimed to gain a better understanding of the term "credible judgment" and to identify the conditions required to generate it. Semi-structured interviews were conducted with six experienced evaluators and clients. The results highlight the conditions required to ensure credible judgment: rigorous methodology, quality information, representation of various viewpoints, and coherence in developing argumentation. The contribution of this study lies in validating the thinking of particular theorists while pointing out the pitfalls that arise in the transition from theory to professional practice.

Managing Evaluation: Responding to Common Problems with a 10-Step Process

There is now a clear choice between two frameworks for managing program evaluation: managing one or more studies, or managing an evaluation capacity building structure and process. This is a distinction with a difference, and this article conceptualizes that difference and shows how the two frameworks understand three problems common to program evaluation: (a) lack of systematic integration within a larger program improvement process, (b) difficulty in finding an appropriate evaluator, and (c) lack of appropriate conceptualization prior to the inception of the evaluation study. Two practice-based approaches to these problems are presented and interpreted using the two frameworks, revealing clear distinctions between the two managerial approaches. These are practice-tested approaches developed over 30 years of doing and managing evaluations in an evaluation unit in the United States, a context that appears to differ from Canada's in at least the public sector and in practices around stakeholder participation and evaluation use. Our experience shows that program managers and managers of program evaluation services have clear choices in how they manage program evaluation in the public and nonprofit sectors across public health and other human services, and these choices have implications for organizational development, managing an evaluation unit, and interorganizational relations.

BOOK REVIEWS: J. A. Morell. (2010). Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable. New York, NY: Guilford. 303 pages.


BOOK REVIEWS: S. Donaldson, C. A. Christie, & M. M. Mark. (2009). What Counts as Credible Evidence in Applied Research and Evaluation Practice? Thousand Oaks, CA: Sage. 265 pages.