Volume 21, 2006 - Spring

Discours qui résistent à l'objectivation: que peut-on en tirer pour l'évaluation? [Discourses that resist objectification: What can evaluation draw from them?]

Authors:
Pages: 83-106

When using interviews as a data-collection method, the evaluator must pay attention to the distortions the interview situation may create. Evaluation is a political exercise, and a political context may favour the emergence of statements that resist the objectification of their meaning. Drawing on interviews conducted during the evaluation of the implementation of the UNAIDS Drug Access Initiative in Chile, the article illustrates tactics participants use that may win the evaluator's sympathy or block access to information, and it presents a method of analysis the evaluator can use to give meaning to such statements.

Humanitarian education project in Sierra Leone

Authors:
Pages: 107-129

Although participatory evaluation (PE) is now widely acknowledged as a potentially useful way to assess international development assistance programs, there is little documented evidence of participatory approaches being incorporated into evaluations of humanitarian aid for populations living in emergency situations. Two principal streams, practical participatory evaluation (P-PE) and transformative participatory evaluation (T-PE), are pertinent to international aid programs. Yet because these two approaches to PE are rooted in different goals and procedures, it is unclear how pragmatic and transformative dynamics can be integral to participatory evaluations of humanitarian aid. The example of an evaluation of a humanitarian education project for displaced children in war-torn Sierra Leone reveals the practical benefits that can accrue from even limited stakeholder participation in the inquiry process. In addition, while the evaluation of rapid education was not a transformative intervention, it nonetheless generated insights into the challenges of fostering incremental social transformation in a post-war context.

Participatory needs assessment

Authors:
Pages: 131-154

Needs assessments are typically conducted exclusively by practitioners at the cost of quality or entirely by external evaluators at the cost of relevance. This article makes a case for participatory needs assessment, which we define as a systematic approach to setting organizational priorities in which trained evaluators and program stakeholders share responsibility for all substantive and procedural decisions. We outline potential advantages and three critical challenges: enlisting genuine participation by program staff, reducing time demands on stakeholders, and maintaining evaluation quality. We conducted a case study in which 81 stakeholders worked with an external evaluator to identify and prioritize needs in one school district. The district developed nine strategies for dealing with the challenges of participatory needs assessment. The result was a needs assessment that reached relatively high levels of utilization (support for discrete decisions, conceptual use, and process use) and moderately high levels of quality, particularly with regard to credibility with users. We argue that participatory needs assessment is an appropriate extension of participatory approaches to program evaluation.

Impacts du PIRS en milieu scolaire [Impacts of the SAIP in the school system]

Authors:
Pages: 155-174

This article examines the impact of the School Achievement Indicators Program (SAIP) on the educational system. SAIP is a Canada-wide program involving the large-scale assessment of student achievement in mathematics, science, reading, and writing. Data were collected through semi-structured interviews with the SAIP's jurisdictional coordinators (n = 20) and surveys sent to participating school boards across Canada (n = 147). SAIP's impact is described along two dimensions: intrinsic versus extrinsic and positive versus negative. Results revealed that poor communication between the coordinators and other educational stakeholders limits the program's impact. Recommendations are made for large-scale learning assessment programs.

Editor's Remarks / Un mot du rédacteur

The role of the Office of the Auditor General in Canada and the concept of independence

Authors:
Pages: 1-10

Audit and evaluation in public management: challenges, reforms, and different roles

Authors:
Pages: 11-45

Audit and evaluation play important roles in public management. There can be confusion and debate, however, over what each of these functions covers and what roles each should play. This article reviews and compares the two functions in relation to public management and the key challenges each faces in today's public sector. Audit and evaluation should play different roles in public management and provide different information on the performance of public sector organizations, each playing to its own strengths.

Making cost-benefit analysis a practical tool for evaluation

Authors:
Pages: 47-62

A cost-benefit evaluation requires precise data on program outcomes. Such data, however, are unavailable when the analysis is prospective and are expensive and time-consuming to collect when it is retrospective. This problem of uncertain data is partly addressed by the revised version of the Treasury Board Benefit-Cost Analysis Guide (Watson & Mallory, 1997), which allows probabilistic estimates of program results to be used in the analysis. There are not yet many examples of this technique in practice. One is Transport Canada's evaluation of alternative requirements for small commercial vessels to carry emergency signalling equipment. This article describes that evaluation and assesses how well the methodology worked.

Evaluability assessment as a tool for research network development: experiences of the Complementary and Alternative Medicine Education and Research Network of Alberta, Canada

Authors:
Pages: 63-82

Many research networks have emerged as a means to increase research involvement, build research capacity, and develop a research culture, but little is known about their effectiveness. Evaluation requires that a network have a clearly specified program theory and clearly specified objectives; many networks have neither. This article describes the experience of the Complementary and Alternative Medicine Education and Research Network of Alberta, which undertook a modified evaluability assessment to guide its own development and to plan a meaningful evaluation. Lessons learned may help other research networks think strategically and plan for effective evaluations.