Volume 25, 2010 - Spring

Putting God in the Logic Model: Developing a National Framework for the Evaluation of Faith-Based Organizations

Authors:
Pages: 1-26

Relative to the contribution that faith-based organizations make to Canadian society, evaluations of them are rare. The challenge for evaluators is to develop evaluation processes that withstand the scrutiny of social science yet respect the centrality of faith inherent in their interventions. The challenge is compounded when evaluating faith-based innovation. This article reviews the current status of evaluation in faith-based settings, highlighting its limitations. It then features an innovative national faith-based evaluation framework that attempted to address these limitations. The article ends with critical reflections on the lessons of this case example for conducting evaluations of faith-based organizations and on implications for other sectors.

Evaluability Assessment of a National Driver Retraining Program: Are We Evaluating in the Right Lane?

Authors:
Pages: 27-50

An evaluability assessment (EA) of the 55 Alive program, a national refresher course aimed at improving the driving skills of older drivers, was conducted. This EA adds to the evaluation literature because previous outcome evaluations neglected to explore whether the program was ready for such assessments. A mixed-method protocol was executed across three community sites. Based on the results of this EA, several suggestions for future evaluations are discussed: (a) sampling all stakeholder levels for a more holistic snapshot of the program, (b) using EA to facilitate stakeholders' engagement in the evaluation, and (c) involving scientists who specialize in the area in the evaluation.

Constructing and Verifying Program Theory Using Source Documentation

Authors:
Pages: 51-67

Making the program theory explicit is an essential first step in Theory-Driven Evaluation (TDE). Once the theory is explicit, the program logic can be established, making the necessary links between the program theory, activities, and outcomes. Despite its importance, evaluators often encounter situations where the program theory is not explicitly stated. Under such circumstances, evaluators need alternatives for generating a program theory with limited time and resources. Using source documentation (e.g., lesson plans, mission statements) to develop program theory is discussed in the evaluation literature as a viable alternative when time and resources do not permit a priori program theory development. Unfortunately, the literature offers no methodology illustrating how to translate source documentation into an explicitly stated program theory. This article describes the steps involved in using source documentation to develop and verify a program theory and illustrates their application. It concludes with a discussion of the feasibility and limitations of this methodology.

On the Use of Qualitative Indicators in Evaluation and Management Monitoring in Public Administration

Authors:
Pages: 69-89

With the advent of results-based management in public administration, the demand for indicators has increased. The notion of an indicator has traditionally been quantitative; however, since the predilection for mixed methods emerged in evaluation, indicators described as qualitative have been developed. Given the ensuing confusion over the definition and use of qualitative indicators, what is the current situation? To clarify their use, this article proposes a classification of the different uses of qualitative indicators and a method for integrating this type of indicator into the evaluation process.

Using Web-Based Technologies to Increase Evaluation Capacity in Organizations Providing Child and Youth Mental Health Services

Authors:
Pages: 91-112

Given today's climate of economic uncertainty and fiscal restraint, organizations providing child and youth mental health services are required to do so with limited resources. Within this context, service providers face added pressure to deliver evidence-based programs and to demonstrate program effectiveness. The Ontario Centre of Excellence for Child and Youth Mental Health works with organizations to meet these demands by building capacity in program evaluation. While personal instruction and mentoring are important ways of providing support, face-to-face consultations are not always cost-effective. In this article we describe the use of interactive technology and computer-based learning as an alternative or complement to face-to-face delivery of evaluation information and training. We discuss the process of developing these tools and share findings from our preliminary evaluation of their effectiveness in enhancing the evaluation-related supports we offer to providers of child and youth mental health services.

An Alternative to the Traditional Literature Review

Authors:
Pages: 113-119

External information is commonly collected for, and provided to, evaluation stakeholders without due consideration of their precise needs. As a result, evaluation resources are often consumed inefficiently and the impact of the evaluation process is diminished. The External Information Search and Formatting (EISF) process is a new approach that seeks to avoid such an outcome. It requires the information searcher to work collaboratively with stakeholders to ensure that their information needs are well met and that the information is presented in a manner that is useful to them.

BOOK REVIEW: Nick L. Smith & Paul R. Brandon (Eds.). (2008). Fundamental Issues in Evaluation. New York, NY: Guilford, 266 pages.

BOOK REVIEW: R. Bickel. (2007). Multilevel Analysis for Applied Research: It's Just Regression! New York, NY: Guilford, 355 pages.

Authors:
Pages: 128-130

BOOK REVIEW: D.J. Treiman. (2009). Quantitative Data Analysis: Doing Social Research to Test Ideas. San Francisco, CA: Jossey-Bass.

Authors:
Pages: 131-133