Certification, credentialing, licensure, competencies, and the like: issues confronting the field of evaluation
After a brief review of the recent history framing the debate about certification, the discussion begins by defining key terms (certification, credentialing, licensure, accreditation, and professional development). It then briefly summarizes the three main articles on the topic in this journal issue, analyzing their similarities and differences. Finally, it discusses implications for the future of certifying or credentialing evaluators.
How can information about the competencies required for evaluation be useful?
This article explores the main themes and implications arising from the three articles in this thematic segment regarding evaluation competencies. It identifies five potential uses for competency information: (a) basic education and training, (b) self-evaluation, (c) professional development, (d) information and advocacy about the skills required for competent evaluations, and (e) assisting evaluation commissioners in choosing and managing evaluators. The article identifies the need for more attention to the competencies required by evaluation commissioners as well as those needed by evaluation practitioners. It also urges caution in moving toward accreditation or certification, suggesting that there are less drastic but still effective alternatives.
Capacity development for program evaluators: solving the wrong problem?
Although the three articles in this segment propose well-founded systems for capacity development, they are fairly silent on the fundamental issue of how the development of the competencies of individual evaluators will improve the positioning of evaluation in public management and expand evaluation's contribution to societal change. Without disagreeing with the move toward capacity development of evaluators, this article notes that the evaluation function within public systems remains underdeveloped and underfunded and recommends parallel progress on other fronts toward more adequate resources and structures in evaluation.
Evaluability assessment of a survivors of torture program: lessons learned
An evaluability assessment (EA) framework was used to assess a survivors of torture program for which one of the authors had been coordinator. Staff and other stakeholders were interviewed and documents reviewed. Program logic models were developed and discussed. The results of the EA and the process are discussed in terms of the barriers to EA identified by Smith (2005). The article suggests that an EA can be done with limited resources and still be valuable in developing real knowledge of the program, ownership, management for success, and pathways to accountability.
A theory-driven approach to evaluating a communication intervention
Evaluating interventions in the practice setting calls for an alternative approach to the traditional randomized controlled trial (RCT), as the feasibility and generalizability of the RCT, and the clinical utility of its results, are being questioned. The theory-driven approach (TDA) to evaluation, as an alternative to the RCT, attempts to account for the realities of practice. The TDA specifies the causal processes underlying the intervention effects, and identifies the expected outcomes as well as factors that affect treatment processes, such as patient, intervener, and setting characteristics. In this article, the TDA to intervention evaluation is presented as a means of designing and conducting evaluation. The TDA is discussed at the conceptual level and illustrated with examples from a pilot study that examined the effectiveness of a communication enhancement intervention designed to improve the communication skills of nursing staff in a complex continuing care (CCC) facility.
Reflections on a typology of institutional evaluation arrangements
Recent developments in research on the institutionalization of policy evaluation offer new perspectives for understanding evaluation capacity development. This theoretical article focuses on mechanisms that contribute to ensuring the development and stability of evaluative practice and suggests a typology of institutional evaluation arrangements. By combining reflections borrowed from the comparative approach in the social sciences and new institutionalism theories, it opens new avenues for research in this field of analysis. It also identifies the major characteristics of an evaluation arrangement, focusing on two dimensions (openness and results) to outline an ideal-typical model.
Thematic segment: Evaluator Competencies / Editor's introduction
Evaluator competencies and performance development
Supporting professional development in evaluators can be challenging because evaluators come from varied backgrounds and conduct many different types of evaluations. Evaluation competencies are a means of determining the developmental needs of individual evaluators, and can be used as the foundation for a comprehensive performance development system within organizations that do evaluations. This article defines professional development in the context of strategic human resource development and outlines the elements of a human resource development system, showing how evaluation competencies can be used as a basis for the system. The article gives an example of the system's application, provides samples of tools that can be used for self-reflection and assessment, and outlines the benefits of a human resource development system.
Evaluator competencies in university-based evaluation training programs
In this article we revisit the comprehensive taxonomy of Essential Competencies for Program Evaluators and explore its utility in university-based evaluation training programs. We begin by briefly summarizing the development of the taxonomy, then elaborate on how it can be used to enhance evaluation training through six decision points in graduate degree or certificate programs. We then discuss the challenges of credentialing/licensure and accreditation within the field of program evaluation and end with a proposal for the development of standards for program evaluation training programs.
Preparing school evaluators: Hiroshima pilot test of the Japan Evaluation Society’s accreditation project
This article reports on the efforts of the Japan Evaluation Society (JES), in collaboration with the Canadian Evaluation Society, to develop and pilot test an accreditation and certification scheme for school evaluators. The purpose of the JES accreditation model is to support evaluation capacity building and promote high quality evaluation by developing functional evaluation competencies. The article describes the theory and practice of the JES approach to evaluation training and accreditation, including its overall rationale, the influence of Japan's socio-political context, the content of the school evaluator training program, and the findings of the initial "test of concept" pilot test in Hiroshima. Based on a six-month follow-up evaluation, the article also provides an assessment of the acceptance, early results, and potential sustainability of the evaluator training program. These findings have encouraged the JES to establish the accreditation scheme for school evaluation, followed by a similar system for the evaluation of international development assistance programs and government policy evaluation. The development of the JES accreditation scheme should be of interest to other evaluation societies and also to public/nonprofit organizations that must use brief training courses or evaluation "toolkits" for building evaluation competencies quickly among staff.