Volume 19, 2004 - Fall

Le benchmarking et l'amélioration continue [Benchmarking and continuous improvement]

Pages: 57-74

Benchmarking is driven by the trends toward continuous improvement and quality control, and it facilitates the identification of best practices that organizations wishing to improve their performance can import and adapt. This article describes the benchmarking process, illustrated by a case study. Its main contribution lies in its attention to the approach, in particular the effort to clarify the objective of the study and to identify relevant performance indicators. Finally, it offers an opportunity to reflect on a current preoccupation in the evaluation literature: the impact of organizational context on the evaluation process.

L'évaluation des technologies de la santé: comment l'introduire dans les hôpitaux universitaires du Québec? [Health technology assessment: how can it be introduced in Quebec's university hospitals?]

Pages: 75-98

This article describes the level of implementation of health technology assessment in Quebec university teaching health centres and examines structures that may facilitate its development. The data are from a mail survey sent to respondents in all university teaching health centres (response rate of 83/149 = 56%). Interviews with key actors (n = 4) were also conducted to document the broader context in which health technology assessment is implemented. The main obstacles identified are the lack of human, material, and financial resources. Respondents prefer structures where the health technology assessment unit is under either the CEO or the Chief of Medical Staff, both closely linked to the research centre. The results reveal three key challenges: the need to clarify expectations about the role of health technology assessment, the need to seek consensus around health technology assessment, and the need for broad commitment to the selected structure.

Integrating evaluative inquiry into the organizational culture: a review and synthesis of the knowledge base

Pages: 99-141

The purpose of this article is to explore, through an extensive review and integration of recent scholarly literature, the conceptual interconnections and linkages among developments in the domains of evaluation utilization, evaluation capacity building, and organizational learning. Our goal is to describe and critique the current state of the knowledge base concerning the general problem of integrating evaluation into the organizational culture. We located and reviewed 36 recent empirical studies and used them to elaborate a conceptual framework that was partially based on prior work. Methodologically, our results show that research in this area is underdeveloped. Substantively, they show that organizational readiness for evaluation may be favourably influenced through direct evaluation capacity building (ECB) initiatives and indirectly through doing and using evaluation. We discuss these results in terms of an agenda for ongoing research and implications for practice.

The analysis of focus groups in published research articles

Pages: 143-164

This article examines 72 published research articles from the fields of health, sociology, and education that utilize a focus group methodology. The articles are assessed in terms of what type of focus group analysis is conducted on the transcripts, how the methodology is specified, and whether the coding schemes used were emergent or pre-ordinate. Fewer than half of the articles use a coding scheme to analyze the transcripts, while more than half simply use interesting quotations from the focus groups to represent the discussion or to corroborate other quantitative findings. Fourteen percent of the articles apply some form of quality check, such as interrater reliability, to ensure accuracy in the focus group data analysis. Most of the articles that apply a quality check are from the health field. Results are discussed in terms of implications for evaluation practice and ongoing research.

Pre- and post-scenarios: assessing learning and behaviour outcomes in training settings

Pages: 165-176

This article discusses the use of pre- and post-scenarios to document training outcomes in learning and behaviour, using Guskey's five critical levels of professional development evaluation and Kirkpatrick's four-level model for training evaluation as the backdrop for discussion. Two case examples are presented: clinical scenarios that were administered in an Alzheimer's disease training workshop and analyzed using statistical tests, and situational scenarios that were administered in an intergenerational early childcare training program and analyzed using qualitative methods. The author discusses creating and administering scenarios, preventing measurement bias, analyzing scenario data, and using the results of scenario analysis as the basis for designing a long-term evaluation of behavioural change as well as of organizational change and support.

The role of the evaluator in a political world

Pages: 1-16

Foreword by Alan G. Ryan, University of Saskatchewan

Throughout his long and distinguished career, Ernest House has continuously stressed the moral responsibility of evaluators. His social activist perspective has time and again alerted us to the dangers of being seduced by the agendas of those in power. (It was this stance that made him a particularly appropriate keynote speaker for Saskatchewan's first CES annual conference; he is well versed in Saskatchewan's history of co-operatives and social initiatives.) In his keynote address, he points out that the current political climate in the United States presents a threat to the independence and utility of evaluation, that is, the threat of becoming a servant of the power elite. Using Janice Gross Stein's analysis of the cult of efficiency, he shows how political fundamentalism and methodological fundamentalism are intimately linked. As he wrote over 25 years ago in his monograph The Logic of Evaluative Argument (1977): "There are those who try to force simplicity atop the complexities of life and thereby eradicate ambiguity... Often in positions of power, they impose arbitrary definitions of reality for the sake of action" (p. 47).

But House has never been one to wring his hands at the hopelessness of a situation. His prescription for evaluators in the face of such forces is for us to confront the political exigencies head on. We need to challenge simplistic assumptions that impose artificial limits on evaluation questions and methodological approaches. In the extended example from his own work, he takes us through an alternative role for the evaluator, a sort of Transactional Evaluation approach (Rippey, 1973), but one that is updated through a sophisticated perception of the political forces at work in the study being undertaken. In so doing, he makes clear that the responsibilities of the profession of evaluation go far beyond any accountability model.

Using multi-site core evaluation to provide "scientific" evidence

Pages: 17-36

Funders of educational and other social service programs are requiring more experimental and performance-oriented designs in evaluations of program effectiveness. Concomitantly, funders are exhibiting less interest in evaluations that serve other purposes, such as assessing implementation fidelity. However, to fully understand the effectiveness of most complex social and educational programs, an evaluation must provide diverse information. This article uses the Core Evaluation of the Collaboratives for Excellence in Teacher Preparation Program as a case example of how evaluations might meet the requirements of objective scientific evaluation while at the same time valuing and incorporating other evaluation purposes. The successes and limitations of the case example in achieving this blending are discussed.

Stakeholder involvement in a government-funded outcome evaluation: lessons from the front line

Pages: 37-56

This article describes the process behind a government-initiated outcome evaluation that involved stakeholders from funded projects in planning and decision making. It reviews the strengths, challenges, and limitations of the evaluation. Follow-up consultations with project staff involved in data collection for the evaluation identified strengths of the process as the opportunity for staff to get to know individual program participants through interviews and the good will established between staff, funders, and evaluators. Weaknesses included concerns about some of the questions asked of participants, the complexity of the evaluation design, and a failure to identify the implications of staff turnover and workload when deciding to employ front-line staff to collect evaluation data.