Program evaluation aims to establish the quality of a program in order to generate a judgment that is valid and credible to stakeholders. The present research aimed to gain a better understanding of the term "credible judgment" and to identify the conditions required to generate one. Semi-structured interviews were conducted with six experienced evaluators and clients. The results highlight the conditions required to ensure a credible judgment: rigorous methodology, quality information, representation of diverse viewpoints, and coherence in developing the argumentation.
The concept of incorporating multiple perspectives in measurement is a foundation of program evaluation in human service enterprises, but it can pose significant challenges for the feasibility and interpretation of projects. This article reviews triangulation methodologies and proposes a new approach to triangulation. It argues that, to ease some of the burdens associated with triangulation in human services program evaluation, the simple notion of pre-measurement triangulation through the use of communimetric measurement theory may offer an effective option.
This article investigates contribution analysis, an analytical tool invented by John Mayne, from both a theoretical and an operational perspective. Despite the substantial attention that contribution analysis has received, few studies appear to have applied it in practice. The article discusses the broadened scope of contribution analysis by analyzing its theoretical and methodological tenets, and examines its practical applicability in relation to two evaluations.
Gauging Alignments: An Ethnographically Informed Method for Process Evaluation in a Community-based Intervention
Community-based projects feature multidimensional interventions and interactions within unpredictable contexts. Process evaluations can shed light on variability in outcomes across sites and the reasons why some project outcomes fall short of expectations. The authors present an ethnographically informed study of the interactive project components in a pilot community-based falls prevention project implemented in four communities across Canada.
BOOK REVIEW: D.J. Treiman. (2009). Quantitative Data Analysis: Doing Social Research to Test Ideas. San Francisco, CA: Jossey-Bass.
BOOK REVIEW: R. Bickel. (2007). Multilevel Analysis for Applied Research: It's Just Regression! New York, NY: Guilford, 355 pages.
BOOK REVIEW: Nick L. Smith & Paul R. Brandon (Eds.). (2008). Fundamental Issues in Evaluation. New York, NY: Guilford, 266 pages.
External information is commonly collected for, and provided to, evaluation stakeholders without giving due consideration to their precise needs. As a result, evaluation resources are often inefficiently consumed and the impact of the evaluation process is diminished. The External Information Search and Formatting (EISF) process is a new approach that seeks to avoid such an outcome.
Using Web-Based Technologies to Increase Evaluation Capacity in Organizations Providing Child and Youth Mental Health Services
Given today's climate of economic uncertainty and fiscal restraint, organizations providing child and youth mental health services are required to do so with limited resources. Within this context, service providers face added pressure to deliver evidence-based programs and demonstrate program effectiveness. The Ontario Centre of Excellence for Child and Youth Mental Health works with organizations to meet these demands by building capacity in program evaluation.
De l'usage des indicateurs qualitatifs en évaluation et en suivi de gestion dans l'administration publique [On the Use of Qualitative Indicators in Evaluation and Management Monitoring in Public Administration]
With the advent of results-based management in public administration, the demand for indicators has increased. The notion of an indicator has traditionally been quantitative; however, since the preference for mixed methods emerged in evaluation, indicators identified as qualitative have been developed. Given the ensuing confusion over the definition and use of qualitative indicators, what is the current situation?