Participatory impact pathways analysis: a practical application of program theory in research-for-development
The Challenge Program on Water and Food pursues food security and poverty alleviation through the efforts of some 50 research-for-development projects. These involve almost 200 organizations working in nine river basins around the world. An approach was developed to enhance the developmental impact of the program through better impact assessment, to provide a framework for monitoring and evaluation, to permit stakeholders to derive strategic and programmatic lessons for future initiatives, and to provide information that can be used to inform public awareness efforts.
Accountability has been at the forefront of standards-based educational reform. Yet, as a new era of accountability arrives, driven by policy decisions and carried out through large-scale assessment, research indicates that these accountability systems contain organizational and technical barriers to educational improvement. This article presents a local-level accountability framework focused on special education programs in the context of a large, predominantly urban school district in Southern Ontario.
Thickening the plot: combining objectives- and methods-oriented approaches in the evaluation of a provincial superintendents’ qualification program
This article discusses the evaluation framework for an ongoing provincial superintendents' qualification program. Combining the standards-based Discrepancy Evaluation Model (DEM) with a methodologically intensive case study method, this evaluation assessed the program across three generally defined focus areas: program alignment, skills commensurability, and comparative training.
Evaluators are under increasing pressure to answer the “compared to what” question when examining the impact of the programs they study. Program contexts and other constraints often make it impossible to study impact using some of our more rigorous methods, such as randomized controlled trials. Alternative methods for studying impact under extreme contextual constraints should be explored and shared for use by others. This article presents a method for studying program impact using existing public datasets as a means of deriving comparison groups and assessing relative impact.
Increasing research skills in rural health boards: an evaluation of a training program from western Newfoundland
Rural health boards face barriers to increasing evaluation skills, such as fewer opportunities for continuing education and limited access to training resources. In this project, we set out to develop and evaluate a research skills training program suitable for a rural health board. Participants attended five one-day workshops in their home region and completed a research project with ongoing mentoring from their instructor. We surveyed participants before and after the training; post-workshop surveys showed that the workshops were highly rated (mean: 4.5 out of 5).
Collaborative Evaluation in a Community Change Initiative: Dilemmas of Control over Technical Decision Making
Collaborative evaluation brings community stakeholders and evaluators together, offering the opportunity to lend significant meaning to their work. In a community change initiative, collaborative evaluation also affords the opportunity to build community capacity. This article provides a framework for how collaborative evaluation can help achieve the goals of a community change initiative. It then explores one process dimension of collaborative evaluation — control over technical decision making — and explicates five dilemmas in its navigation.
This article discusses methodological and conceptual challenges in empirically studying process use. The main difficulty lies in disentangling cause (here the evaluation process) and effects (here indicators of process use). The evaluation researcher not only needs to take into account all relevant factors regarding the evaluator, the participants, the evaluation context, and the evaluation approach and implementation, but also needs to base the research on a valid operationalization of process use.
This article presents a short case narrative, the purpose of which is to illustrate that complex evaluation methodologies such as logic modelling can be simplified to the point where a child can be guided through the process quickly. However, the case narrative also serves to highlight the potential consequences to program development and evaluation activities when the process is oversimplified.
The Canadian International Development Agency (CIDA) has employed external monitors for many years to assist in measuring the performance of its projects. At first, this role was one of surveillance, with monitors expected to keep a distance from the implementing organizations. Today, in keeping with international trends in monitoring and evaluation, the monitoring role is, in theory, more participatory and improvement-oriented, requiring of monitors a different set of knowledge, skills, and attitudes.
Theoretical and methodological considerations in the evaluation of crisis intervention programs
Intensive crisis programs have been the object of numerous evaluations and evaluative inquiries. Nevertheless, experts and researchers are unable to make clear statements about the effectiveness of these programs in helping families and children in crisis. Methodological bias and confusion in the definition and implementation of these programs may have contributed to the issue.