
Volume 22, 2007 - Fall

Methodological and conceptual challenges in studying evaluation process use

Authors:
Pages: 1-19

This article discusses methodological and conceptual challenges in empirically studying process use. The main difficulty lies in disentangling cause (here, the evaluation process) from effect (here, indicators of process use). The evaluation researcher not only needs to take into account all relevant factors regarding the evaluator, the participants, the evaluation context, and the evaluation approach and implementation, but also needs to base the research on a valid operationalization of process use. The article was inspired by the author's experiences in conducting an exploratory study of process use in the context of two expert-facilitated self-evaluation projects involving five program staff. Before larger-scale studies can establish more generalizable knowledge on process use, evaluation researchers should engage in high-quality, real-time, in-depth qualitative studies to better understand the complex interactions at play and to help build a solid operationalization of this construct.

Collaborative evaluation in a community change initiative: dilemmas of control over technical decision making

Authors:
Pages: 21-39

Collaborative evaluation brings community stakeholders and evaluators together and offers them the opportunity to give significant meaning to their work. In a community change initiative, collaborative evaluation also affords an opportunity to build community capacity. This article provides a framework for how collaborative evaluation can help achieve the goals of a community change initiative. The article then explores one process dimension of collaborative evaluation, control over technical decision making, and explicates five dilemmas encountered in navigating it. A case study illustrates the five dilemmas and offers strategies for overcoming the pitfalls involved in negotiating control over technical decision making. The article concludes with two primary lessons for evaluators trying to navigate these dilemmas in collaborative evaluation.

Increasing research skills in rural health boards: an evaluation of a training program from western Newfoundland

Authors:
Pages: 41-56

Rural health boards face barriers to increasing evaluation skills, such as fewer opportunities for continuing education and limited access to training resources. In this project, we set out to develop and evaluate a research skills training program suitable for a rural health board. Participants attended five one-day workshops in their home region and completed a research project with ongoing mentoring from their instructor. Post-workshop surveys showed that the workshops were highly rated (mean 4.5 out of 5). In surveys administered before and after the training, we found no increase in participants' knowledge scores but significant increases in self-rated ability to carry out research tasks. Qualitative data suggest that, while most staff were familiar with research terminology, application-focused training was beneficial.

Using public databases to study relative program impact

Authors:
Pages: 57-68

Evaluators are under increasing pressure to answer the “compared to what” question when examining the impact of the programs they study. Program contexts and other constraints often make it impossible to study impact using some of our more rigorous methods, such as randomized controlled trials. Alternative methods for studying impact under extreme contextual constraints should be explored and shared for use by others. This article presents a method for studying program impact that uses existing public datasets to derive comparison groups and assess relative impact.

Thickening the plot: combining objectives- and methods-oriented approaches in the evaluation of a provincial superintendents’ qualification program

Authors:
Pages: 69-92

This article discusses the evaluation framework for an ongoing provincial superintendents' qualification program. Combining the standards-based Discrepancy Evaluation Model (DEM) with a methodologically intensive case study method, the evaluation assessed the program across three broadly defined focus areas: program alignment, skills commensurability, and comparative training. These areas were grounded in formal legislative and regulatory standards, in keeping with the procedures of the DEM, but the evaluation findings for each area were strengthened by the addition of the case study method. Theoretical, methodological, and utilization merits of this framework are discussed, and lessons for the evaluation of similar programs are also considered.

Developing a special education accountability framework using program evaluation

Authors:
Pages: 93-125

Accountability has been at the forefront of standards-based educational reform. Yet, as this new era of accountability arrives, driven by policy decisions and delivered through large-scale assessment, research indicates that these accountability systems contain organizational and technical barriers to educational improvement. This article presents a local-level accountability framework focused on special education programs in the context of a large, predominantly urban school district in Southern Ontario. Drawing on principles of participatory program evaluation, the framework seeks to build organizational capacity for bureaucratic and professional accountability in an effort to overcome barriers to educational improvement through accountability.

Participatory impact pathways analysis: a practical application of program theory in research-for-development

Authors:
Pages: 127-159

The Challenge Program on Water and Food pursues food security and poverty alleviation through the efforts of some 50 research-for-development projects. These involve almost 200 organizations working in nine river basins around the world. An approach was developed to enhance the developmental impact of the program through better impact assessment, to provide a framework for monitoring and evaluation, to permit stakeholders to derive strategic and programmatic lessons for future initiatives, and to provide information that can be used to inform public awareness efforts. The approach makes explicit a project's program theory by describing its impact pathways in terms of a logic model and network maps. A narrative combines the logic model and the network maps into a single explanatory account and adds to overall plausibility by explaining the steps in the logic model and the key risks and assumptions. Participatory Impact Pathways Analysis is based on concepts related to program theory drawn from the fields of evaluation, organizational learning, and social network analysis.