BOOK REVIEWS: Chilisa, B. (2012). Indigenous Research Methodologies. Thousand Oaks, CA: Sage. 343 pages. Available in paperback (ISBN 978-1-4129-5882-0).


BOOK REVIEWS: Guest, G., MacQueen, K. M., & Namey, E. E. (2012). Applied Thematic Analysis. Thousand Oaks, CA: Sage. 365 pages. Available in hardcover (ISBN 978-1-4129-7167-6).


Evaluating the Identity of Program Recipients Using an Identity Exploration Instrument

This article argues that the self and identity of program recipients should be an important variable in program evaluations. In a smoking cessation program, for example, the aim should be to achieve a change in the way recipients view themselves and their identity as smokers or nonsmokers. The article identifies a gap in published studies that consider self and identity as an outcome or process measure in program evaluation. The potential of such an approach to give added depth to program evaluation is considered. Three studies in this area are identified and summarized: the identity of parents after child death; the professional identity of students after a program of interprofessional education; and the characteristics of male identity in Germany. To identify possible approaches, conceptualizations of self and identity and methods of exploring and measuring them are considered, culminating in the identification and description of a synthetic theory of identity, Identity Structure Analysis (ISA), and its associated measuring tool, Ipseus. The choice of this method as part of a program evaluation is justified. The use of ISA/Ipseus in three program evaluations—decision-making in community safety; student constructions of theory and practice in nurse education; and demands and tensions in nursing lecturing—is described. The strengths and weaknesses of this approach are considered.

Supporting Knowledge Translation Through Evaluation: Evaluator as Knowledge Broker

The evaluation literature has focused on the evaluation of knowledge translation (KT) activities, but to date there is little, if any, record of attempts to use evaluation in support of knowledge translation. This study sought to answer the question: How can an evaluation be designed to facilitate knowledge translation? A single prospective case study design was employed. An evaluation of a memory clinic within a primary care setting in Ontario, Canada, served as the case. Three data sources were used: an evaluation log, interviews, and weekly e-newsletters. Three broad themes emerged around the importance of context, efforts supporting knowledge translation, and the building of KT capacity.

Assessing the Quality of Aboriginal Program Evaluations

Evaluation has gained in popularity in Canada since the 1990s, but statistical data indicate that the resources allocated to this management tool have not increased in step with the growing demand. During the same period, despite significant efforts to optimize governance, the Canadian federal government's management of issues related to Aboriginal peoples has presented some weaknesses. Because evaluation may directly affect the administration of public programs, this study proposes a meta-evaluation of First Nations program evaluations. To do so, we replicate a methodology previously used by the Treasury Board Secretariat (TBS) in 2004 to complete a vast study assessing the quality of evaluation in Canada. This article, based on the systematic analysis of a nonprobability sample of more than 20 program evaluation reports, applies the TBS's meta-evaluation techniques to the Aboriginal context. The results show that the evaluation of Aboriginal programs is of good, and even excellent, quality and suggest that the TBS's evaluation policy has had a definite impact on evaluation quality.

Toward a Definition of Evaluation Within the Canadian Context: Who Knew This Would Be So Difficult?

This article describes the systematic examination and membership consultation process undertaken to define evaluation within the Canadian context. To that end, the article (a) presents the findings from a literature scan and analysis of social media postings, (b) considers the outcomes of the audience discussion during the presentation at the 2013 Canadian Evaluation Society conference, and (c) offers ideas for next steps. Together, the literature scan results, social media analysis, and membership discussion reveal that no single definition currently exists. Further, there are indications that a shared definition would be difficult to achieve within the Canadian evaluation community. Among the potential implications discussed is that a single definition might restrict or oversimplify the current scope of practice, given the wide range of contexts and purposes for evaluation in Canada.

Contributing Factors to the Continued Blurring of Evaluation and Research: Strategies for Moving Forward

Despite many studies devoted to the different purposes of evaluation and research, purpose-method incongruence persists. Experimental research designs continue to be used inappropriately to evaluate programs for which sufficient research evidence has already accumulated. Using a case example, the article highlights several factors contributing to purpose-method incongruence, including the control of the federal-level evaluation agenda by researchers, confusion in terminology, and the credible evidence debate. Strategies for addressing these challenges are discussed. Keywords: barriers, credibility, discipline, evaluation, research.

Evaluating Humanitarian Action in Real Time: Recent Practices, Challenges, and Innovations

Real-time evaluations (RTEs) are formative, utilization-focused evaluations that provide immediate feedback. Within the humanitarian system, RTEs are an innovation for improved learning and accountability. In fluid and fast-changing environments they bridge the gap between conventional monitoring and evaluation, and influence policy and operational decision-making in a timely fashion. RTEs identify and propose solutions to operational and organizational problems in the midst of major humanitarian responses. The article (a) defines RTEs and situates them in the wider evaluation landscape; (b) examines RTEs' use and users; (c) focuses on current methodological approaches; (d) looks into the challenges, opportunities, and limitations that condition their uptake; and (e) draws lessons and recommendations.

BOOK REVIEWS: Edmonds, W. A., & Kennedy, T. D. (2012). An Applied Reference Guide to Research Designs: Quantitative, Qualitative, and Mixed Methods. Thousand Oaks, CA: Sage. 213 pages. Available in paperback (ISBN 978-1-4522-0509-0).