A theoretical assessment of evaluation: a discipline trailing behind a never-ending scientific revolution
This article examines the present state of the theoretical bases of program evaluation by applying key analytical criteria proposed by Kuhn and by Lakatos for the study of scientific disciplines. The study identifies the main theoretical and empirical components of the discipline, clarifies their interactions, and discusses their influence on the development of the discipline. Applying the analytical model leads to the conclusion that evaluation currently stands at the third stage in the development of disciplines, that of scientific revolution, which implies an impending change in the discipline's dominant foundations. However, the structuring of knowledge, the relationship between theory and empirical research, and the epistemological schism prevailing in the social sciences combine to keep evaluation in a never-ending state of scientific revolution.
Ensuring quality for evaluation: lessons from auditors
This article addresses ways to enhance the quality of evaluations with weak designs through a variety of quality assurance practices. Many types of evaluations are restricted in the designs they can use. Evaluations of development programs with widely dispersed projects in different countries are often a case in point: the design relies on visits to a number of dispersed sites, interviews with staff and stakeholders, and reviews of documentation to draw conclusions. These interview-based evaluations are quite similar in methodological approach to many performance audits. National audit offices devote considerable resources to their quality assurance practices, and, for the most part, the quality of their performance audits is not questioned. It is argued that evaluations, and not only interview-based ones, could usefully adopt many of the quality assurance practices used by national audit offices to ensure the quality of their products.
Exploration of the validity and usefulness of an integrated performance indicator for postgraduate scholarship programs
Performance indicators used to evaluate postgraduate scholarship programs in the natural sciences and engineering are usually long-term in nature and focus on such measures as time to completion, employment status, employment sector, and annual income. These measures are time-consuming and costly to collect and analyze; the information they provide should therefore be genuinely useful to both program staff and the senior managers responsible for administering these programs. Building on an earlier study by two Canadian researchers, this article presents the results of a study on the validity of an integrated indicator for postgraduate scholarship programs. The findings reveal that the integrated indicator offers no added value compared to traditional survey analysis. The article concludes with some suggestions for future performance measurement studies.
The changing role of the evaluator in the process of organizational learning
In this article we examine the role of the evaluator in the process of organizational learning, and discuss the conditions necessary to facilitate the productive execution of such a role and the consequent ramifications for evaluation. First, we describe the process of organizational learning as presented in the organizational learning literature. Second, we examine the demands that process places on evaluators. Third, we discuss organizational learning within the context of participatory evaluation, and then explore the role of the external learning agent. Finally, we present some major changes in the role of the evaluator, changes that stem from the very nature of the organizational learning process. The focus on organizational learning transforms the role of the evaluator into that of a knowledgeable facilitator who returns responsibility for the operation, development, and evaluation to the project, program, or organization. We conclude by acknowledging the difficulties involved in changing the traditional role of the evaluator, particularly in giving up control of the evaluation to the stakeholders and letting the organization become the "owner" of the evaluation process and knowledge, leaving the evaluator the important role of facilitator. The evaluator is responsible for the procedures of learning — providing tools and monitoring the learning that goes on. The learning content is the responsibility of the organization, not of the evaluator. While we do not preclude the traditional role of the evaluator, we do suggest a significant change in the procedures involved in evaluation, in the skills required to conduct effective evaluations within the organizational context, and in the ownership of the knowledge that emerges from such evaluation.
The Delphi technique as a method for increasing inclusion in the evaluation process
The question of how best to integrate the views of underrepresented and marginalized groups in the evaluation process is of critical importance to many evaluation theorists and practitioners. In this article the Delphi technique, a method used to achieve consensus on a set of issues with the participation of all interested parties, without incident or confrontation that could compromise the validity of collected data, is offered as a procedure for enhancing marginalized group participation in the evaluation process. As demonstrated by a case example, the Delphi technique is used to help ensure that all relevant stakeholders have a voice and that sometimes-silenced voices have equal influence. Accordingly, it is suggested that this technique lends itself to implementation with social justice evaluation models. The benefits of, and lessons learned from, using the Delphi technique to promote marginalized group participation and representation in evaluations are discussed.
Participatory evaluation in the context of CBPE: theory and practice in international development
This article reviews current trends in community-based participatory evaluation (CBPE) and presents an overview of related evaluation tools. These approaches have been widely implemented internationally, in Canada and the United States as well as in the developing world. The theoretical approach guiding this article stems from current trends in international development thinking. The author argues that participatory evaluation is the most effective means of assessing community-based development initiatives. A comparative examination of three evaluation methodologies, however, reveals that not all those claiming to support the central tenets of CBPE actually promote democratic participation. This article reflects a growing international interest in CBPE and Canada's participation in development efforts of this nature, both locally and in the global South.
The sustainable economic product: a way to measure and compare national sustainable development
Recently, evaluators and policy makers have turned their attention to societal indicators. In Canada the federal government's annual report, Canada's Performance, measures societal outcomes in the environment, the economy, and other areas. Other jurisdictions produce similar reports, and many international bodies collect supporting data. One area of continued attention is the measurement of sustainable development — the integration of economic, social, and environmental factors. Although considerable international effort has been devoted to the measurement of sustainable development, little consensus has emerged. This article advances a new approach to the measurement of sustainable development and calls for an informed debate.