BOOK REVIEW: Hedley Dimock. (2004). Outcome-based Program Development and Evaluation


BOOK REVIEW: Donna M. Mertens. (2009). Transformative Research and Evaluation


BOOK REVIEW: R. Pawson. (2006). Evidence-based Policy: A Realist Perspective


Aboriginal Ways of Knowing: Aboriginal-led Evaluation (Guest Editor’s Introduction) / Modes de connaissance autochtones: l’évaluation menée par des Autochtones (Introduction de la rédactrice invitée)


Reframing Evaluation: Defining an Indigenous Evaluation Framework

The American Indian Higher Education Consortium (AIHEC), comprising 34 American Indian tribally controlled colleges and universities, has undertaken a comprehensive effort to develop an "Indigenous Framework for Evaluation" that synthesizes Indigenous ways of knowing and Western evaluation practice. To ground the framework, AIHEC engaged in an extensive consultation process, including a number of focus groups in major regions of the United States. Cultural experts, Indian educators, and evaluators shared their concerns regarding evaluation and described how evaluation fits within a cultural framework. This article summarizes the focus group discussions and describes how the framework was developed using the key principles of Indigenous ways of knowing and four core values common to tribal communities.

Challenges in Applying Indigenous Evaluation Practices in Mainstream Grant Programs to Indigenous Communities

How can indigenous evaluators implement culturally competent models in First Nations communities while ensuring that government grant evaluation requirements are met? Through describing the challenges in one tribal community in the United States, this article discusses how American Indian/Alaska Native substance abuse prevention programs are evaluating the implementation and outcomes of Strategic Prevention Framework grants from the federal government's Center for Substance Abuse Prevention. Requirements for implementing evidence-based programs normed on other populations and for evaluating data using quantitative methods add to the challenge. Throughout the process, much is being learned that, it is hoped, will strengthen indigenous grantees and increase the cultural competence of government evaluation requirements.

Using Technology to Enhance Aboriginal Evaluations

With a focus on the use of technology when evaluating programs for Aboriginal people, this article explores the possibility of using visual and oral computer technology to enhance the incorporation of Aboriginal worldviews in program evaluation. The author situates Aboriginal worldviews, including methods of communication and transmission of knowledge, within a unique evaluation framework that also considers Western methods of data collection. Examples of the author's framework are offered in the context of evaluations of Aboriginal programs. Based on her experiences, the author concludes that it is possible to join the traditional knowledge of Aboriginal people with digital technology in program evaluation.

Drawing on Indigenous Ways of Knowing: Reflections from a Community Evaluator

The clash between Western and Indigenous ways of knowing has been epitomized by the "parachuting model" of the Western researcher who drops onto the reservation, collects data, and leaves, never to be heard from again. The strengths of indigenous science, for example observation and attention to contextual factors, are either ignored or appropriated. These past (and sometimes present) wrongs committed by academic researchers continue to be a contentious issue in Native communities, where, despite the research dollars flowing into the community to "solve" health problems, disparities between Native health status and that of the general population persist. This article shares reflections from a community-based evaluator who, along with a Lakota health educator, served as a "cultural translator" in a community participatory process led by a community agency. We recognized the need to work with/in two cultures, both the academic research world and the Native community, and drew on collaborative evaluation principles and indigenous ways of knowing to conduct formative evaluation research on smoking cessation issues for pregnant Native women.

Bureaucratic Competence as an Essential Factor in Cross-Cultural/Multicultural Program Evaluations

While we may all agree in principle that both implementers and evaluators should be culturally sensitive and ethical as well as instrumentally effective in their work practices, we often ignore the extent to which these practice goals may conflict with one another in achieving bureaucratic competence, particularly in a multicultural society. Reconciling them requires us to acknowledge the indispensable role of responsible program evaluation in this effort, one that addresses: both the employees and the recipients of programs; the need for evaluators to be open to both theoretical and operational contributions to the field; the signal role of bureaucratic, as well as electoral, modes of representation; the indispensable function of affirmative action and pay equity programs in reconciling diversity and fairness; and the principle that subjects in evaluation and implementation processes should play a more significant role than the passive status assigned them by traditional bureaucracy and applied social science.

Moments of Truth: An Unexplored Dimension to Communicate Effectiveness

The settings for this article are rural and remote communities in the province of Ontario, Canada, where the advent of high-speed Internet has brought about new opportunities for the provision of public health and information services. This article proposes that public funding agencies and service providers will gain planning and evaluation insight from the notion of "moments of truth" as an additional dimension to capture and communicate program effectiveness. This is an evaluation dimension that is seldom appreciated as valid in the public sector, and yet it is at the heart of private sector behaviour.

An Analysis of the Evaluation Process for the Regional Health Policy in Nord Pas-de-Calais (France) / Une analyse du processus d'évaluation de la politique régionale de santé dans le Nord Pas-de-Calais (France)

In France, health policy evaluation is a recent development. The evaluation conducted in the Nord Pas-de-Calais region is of particular interest in that it deals with both a policy and its programs. The article presents an analysis of the various questions tackled in this evaluation and compares them to the questions in a program evaluation scale based on a sequential progression. Evaluation culture in France appears to be characterized by an approach that is both pluralist and criteria-based, which reflects the normative, or judgment-focused, dimension of the activity. The comparison examines the relationship between the questions in the scale and the criteria approach taken in France to identify similarities between the two approaches and advance thinking on evaluation in France as well as in Canada.

A Journey Through Five Evaluation Projects with the Same Analysis Framework

Evaluation is increasingly called upon to act in support of management and accountability processes, regardless of the field of activity. Often conducted in a complex and changing environment, it must make allowances for different interests and challenges that may arise at a number of levels. In this context, the analytical framework developed by the author may prove to be a useful tool for organizing the planning of the evaluation process and for interpreting its results. In addition to a description of the five dimensions of this framework, a number of examples of its use are presented. A critical analysis of the strengths and weaknesses of the framework, along with its principal contributions to the evaluation processes in which it is used, is conducted based on feedback provided to date by some of its users.

Reviewing the Quality of Federal Evaluations: A Successful Meta-evaluation? / L'examen de la qualité des évaluations fédérales: une méta-évaluation réussie?

Evaluation quality is a fundamental issue within the evidence-based management and policy movement. The Treasury Board Secretariat of Canada (TBS) conducted a wide-ranging review of the quality of federal evaluations in 2004. In terms of its relevance and methodology and the credibility of its conclusions, is the meta-evaluation a success? This article answers the question by presenting an evaluation of the TBS quality review. Despite serious shortcomings with respect to the quality theory and criteria, design, coding process, and data analysis, the meta-evaluation conclusions are relevant and credible overall. Lessons learned from this quality review are proposed as recommendations for evaluators and officials responsible for evaluation units who wish to conduct a similar review that does not suffer from the same shortcomings.

Latent Profiles of Evaluators' Self-Reported Practices

Presented are the results of a study using latent profile analysis to describe the self-reported practices of 138 evaluators. Four classes emerged and were labeled (a) indistinct pattern of practice, (b) method-focused, (c) user-focused, and (d) robust pattern of practice. Evaluators in the "indistinct pattern of practice" class had mean item responses closest to zero relative to the other three classes, suggesting relatively weak associations with the practices described by the study instrument. The "method-focused" class had strong and distinct preferences for using particular methods. The "user-focused" class placed high importance on the role of the evaluator as facilitator and was concerned with attaining a high level of stakeholder/participant involvement. Evaluators in the "robust pattern of practice" class showed a high degree of rigor in their reported patterns of practice, specifically as prescribed by evaluation theorists. The four profiles were distinguished by years of evaluation experience, degree attainment, and internal/external evaluator status.

BOOK REVIEW: M.J. Bamberger, J. Rugh, and L. Mabry. (2006). RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints


BOOK REVIEW: Carl F. Brun. (2005). A Practical Guide to Social Service Evaluation