J. Bradley Cousins

Fall

Advancing Patient Engagement in Health Service Improvement: What Can the Evaluation Community Offer?

Pages: 202-221

Despite efforts for greater patient engagement in health care quality improvement, evaluation practice in this context remains mostly conventional and noncollaborative. Following an explication of this problem, we discuss relevant theory and research on patient-centred care (PCC) and patient engagement, and then consider potential benefits of collaborative and participatory approaches to evaluation of such initiatives. We argue that collaborative approaches to evaluation (CAE) are logically well-suited to the evaluation of PCC initiatives and then suggest contributions that the evaluation community can offer to help advance patient engagement. Finally, we outline a research agenda that identifies important areas in need of further examination.

Special Issue

Reflections on the Meaning of Success in Collaborative Approaches to Evaluation: Results of an Empirical Study

Pages: 328-349

How do evaluators using collaborative approaches to evaluation (CAE) define success? This is the core question asked in a further analysis of data from our previous work (Cousins, Whitmore, & Shulha, 2013; Shulha et al., 2016) that developed a set of evidence-based principles to guide collaborative evaluation practice. Probing data from 320 responses to our 2012 survey, we examined what respondents considered "highly successful" and "less successful than hoped" in their collaborative evaluation projects. The results revealed that evaluation use, relationships, and information needs are key factors. We propose a conceptual framework as an aid to thinking about success in CAE.

Special Issue

A Cross-cultural Evaluation Conversation in India: Benefits, Challenges, and Lessons Learned

Pages: 329-343

Through a guided discussion, this article explores a five-year cross-cultural evaluation relationship comprising multiple projects involving an evaluator from Canada and a group of Indian colleagues working on educational reform in India. The initiative was funded through a multilateral consortium of donors and involved Western evaluation specialists working in collaboration with Indian colleagues to (a) develop evaluation capacity within the country and (b) produce evaluative knowledge about education quality initiatives associated with large-scale educational reform. This article is based on a conversation between the principal investigator from Canada and three Indian colleagues who had been involved in all phases of the work. It focuses on their respective perspectives and experiences, including the benefits obtained and the challenges encountered in the process of bridging Western and Indian knowledge systems. The article begins with background about the initiative and continues with a conversation among the participants about their cross-cultural evaluation experience. It concludes with an analysis of the issues that emerged and the generation of lessons learned for evaluators interested in cross-cultural evaluation.

Fall

Meeting at the Crossroads: Interactivity, Technology, and Evaluation Utilization

Pages: 143-159

This article is a review and integration of the evaluation utilization literature, with a new focus on the use of technology to increase evaluation utility. Scholarship on evaluation utilization embodies one of the major and ongoing quandaries in the evaluation profession: What constitutes usefulness and relevance to stakeholders? We think that a constructivist lens is helpful in making sense of the trajectory this literature has taken, where what is "useful" and what culminates in "use" have become much more flexible notions that are in a constant state of negotiation between evaluators and evaluation stakeholders. We posit that it may be important for evaluators who are closely engaged with stakeholders to pay greater attention to this interactivity to build a common vision of what is "useful" at that moment in time. While this is no small task, we suggest that evaluators may have something to gain by exploring the wealth of digital technologies and social media tools that are available. The use of these tools in local-level, participatory-oriented contexts may be valuable for encouraging interactivity and, in turn, learning, creativity, and ownership. This article aims to stress that integrating technology into everyday evaluation practice, where possible, may ultimately enhance evaluation usefulness and relevance.

Spring

Editor's Remarks / Un mot du rédacteur en chef

Pages: v-viii

Reconnecting knowledge utilization and evaluation utilization domains of inquiry

Pages: 81-85

This article provides commentary for the thematic segment titled "Applying a variety of methods to the evaluation of various efforts aimed at transferring knowledge generated from research." The authors revisit arguments supporting inquiry that takes up the challenge of connecting the cognate field of evaluation utilization with the broader domain of knowledge utilization. The central contribution of each of the foregoing articles is identified and situated within the context of ongoing inquiry in this domain.

Special Issue

Organizational Capacity to Do and Use Evaluation: Results of a Pan-Canadian Survey of Evaluators

Pages: 1-35

Despite increasing interest in the integration of evaluative inquiry into organizational functions and culture, the availability of empirical research addressing organizational capacity building to do and use evaluation is limited. This exploratory descriptive survey of internal evaluators in Canada asked about evaluation capacity building in the context of organizational characteristics (learning, support structures), evaluative activity and use, and variables that mediate use. We received a total of 340 usable responses to an online survey. This article provides a descriptive account of the findings, with a cursory look at differences across respondent role, organization type, and self-reported level of evaluation knowledge. Results showed a pattern of moderately high ratings for organizational learning and support functions, for the extent to which evaluation is being conducted and used, and for stakeholder involvement in evaluation. Some differences across respondent role, organization type, and evaluation knowledge were observed. Results are discussed in terms of an agenda for future inquiry.

Informing Evaluation Capacity Building Through Profiling Organizational Capacity for Evaluation: An Empirical Examination of Four Canadian Federal Government Organizations

Pages: 127-146

According to the literature published on the topic, the development of an organization's capacity to do and use evaluation typically follows four stages: traditional evaluation, characterized by externally mandated evaluation activities; awareness and experimentation, during which organizational members learn about evaluation and its benefits by participating in a number of evaluation-related activities; evaluation implementation, the stage at which the role of evaluation is more clearly defined in the organization; and evaluation adoption, which occurs when evaluative inquiry becomes a regular and ongoing activity within the organization through the allocation of continued financial and human resources. In this article we argue that this perspective is oversimplified and that it is essential to understand the complexity of an organization's evaluation capacity in order to better understand how it might proceed with evaluation capacity building (ECB). We present an analysis of four Canadian federal government organizations' self-assessment of their organizational evaluation capacity using a profile conceptual framework developed as part of our larger study. We then integrate the resulting multidimensional profiles of observed levels of organizational evaluation capacity with the aforementioned stages of ECB to provide added value in thinking about the direction of organizational ECB.

Understanding Organization Capacity for Evaluation: Synthesis and Integration

Pages: 225-237

This special issue is devoted to the examination of organizational capacity for evaluation and evaluation capacity building (ECB) through empirical inquiry. The compilation consists of two quantitative surveys of evaluators and seven single or multiple case studies across a broad array of organizations in diverse contexts (e.g., east-central Ontario, California, Hawaii, Minnesota, and Israel). In this final article, the authors look across the collection of studies to identify emerging themes and trends with implications for ECB. The emergent themes are defining ECB; conceptualizing ECB outcomes; organizational context; ECB implementation issues; and enabling factors and barriers to organizational evaluation capacity development.

Fall

Integrating evaluative inquiry into the organizational culture: a review and synthesis of the knowledge base

Pages: 99-141

The purpose of this article is to explore, through an extensive review and integration of recent scholarly literature, the conceptual interconnections and linkages among developments in the domains of evaluation utilization, evaluation capacity building, and organizational learning. Our goal is to describe and critique the current state of the knowledge base concerning the general problem of integrating evaluation into the organizational culture. We located and reviewed 36 recent empirical studies and used them to elaborate a conceptual framework that was partially based on prior work. Methodologically, our results show that research in this area is underdeveloped. Substantively, they show that organizational readiness for evaluation may be favourably influenced through direct evaluation capacity building (ECB) initiatives and indirectly through doing and using evaluation. We discuss these results in terms of an agenda for ongoing research and implications for practice.