Volume 14, 1999 - Fall

Findings from the Ontario regional evaluation of the Community Action Program for Children

Authors:
Pages: 29-56

The Community Action Program for Children (CAPC) is a federal government initiative that funds community groups to develop local projects aimed at supporting children's development and strengthening families. This article describes the first round of a regional, provincial-level evaluation of 30 CAPC projects in Ontario. A participatory action research approach was used to describe the local projects and their achievements to date and to develop a common evaluation framework for later rounds of evaluation. Through a content analysis of local evaluation reports, the evaluation identified seven categories describing the activities and programs of the Ontario projects and ten categories of short-term outcomes. The evaluation found that most projects have adopted flexible, ecological, and comprehensive project models centred on family support to create lasting change in the CAPC priority outcome areas.

Perspectives épistémologiques et cadre conceptuel pour l'évaluation de l'implantation d'une action concertée

Authors:
Pages: 57-83

The current enthusiasm within the health and social services sector for complex, multi-partner, collaborative interventions has created a need for systematic evaluation. However, given the complexity of both the interventions to be implemented and the collaborative process itself, the evaluation must be adapted in light of these factors. From an epistemological point of view, the main consideration is to monitor the intervention-building process with the concerned parties. From a conceptual point of view, four types of issues emerge: the nature of the collaborative action, its developmental process, its characteristics, and the changes it brings about. Finally, it is argued that a participative approach is the most appropriate for this type of study.

Promoting Accountability and Continual Improvement: A Review of the Respective Roles of Performance Measurement, Auditing, Evaluation, and Reporting

Authors:
Pages: 85-104

In response to growing demands for accountability and the benefits associated with continual improvement, private and public sector organizations are increasingly applying aspects of performance management. This article provides a synthesis of the literature on the principles and practices underlying performance measurement, auditing, evaluation, and reporting. It is argued that by bringing these elements together into a performance management system, an organization can demonstrate accountability and facilitate continual improvement. The article concludes with a discussion of how this can be achieved, namely through the refinement of strategic direction, reporting on key measures, and systematic periodic reviews of performance.

Comparaison entre le questionnaire auto-administré et l'entrevue téléphonique pour l'évaluation de la satisfaction

Authors:
Pages: 105-118

Data gathered from 212 clients in a substance abuse program showed no statistically significant difference in measures of satisfaction and perceived changes whether data were collected by telephone survey or by a self-administered questionnaire completed at the agency. However, on-site respondents who completed the self-administered questionnaire were more prone to ascribe the perceived changes to the treatment they received. Once this difference is taken into account, the choice between the two modes can be based on their respective advantages and limitations, depending on the survey context and its objectives.
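
As a hedged illustration only (not the authors' analysis), the sketch below shows one way such a mode comparison could be tested, here with Welch's t-test on simulated satisfaction scores; the group sizes, rating scale, and score distributions are assumptions made for the example.

# Illustrative sketch only: simulated data, not the study's dataset or instrument.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical satisfaction scores on an assumed 1-5 scale for each collection mode.
telephone = rng.normal(loc=4.1, scale=0.6, size=100).clip(1, 5)
on_site = rng.normal(loc=4.0, scale=0.6, size=112).clip(1, 5)

# Welch's t-test (no equal-variance assumption) comparing the two modes.
t_stat, p_value = stats.ttest_ind(telephone, on_site, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")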

Distilling Stakeholder Input for Program Evaluation Priority Setting

Authors:
Pages: 119-134

As few organizations have the resources to pursue all evaluation questions of interest, setting priorities for evaluative work is a critical corporate function. Ideally, evaluation resources should be focused on studies that will most effectively advance the organization by pointing to potential improvements in strategy or programming. Identifying what should be studied, and when, requires that the organization have clear feedback from stakeholders on program issues. However, ongoing input from stakeholders may be so filtered that it cannot effectively inform the setting of evaluation priorities. An expert panel that has high credibility both inside and outside the organization, yet remains at arm's length from both the organization and external stakeholders, can bypass many communication filters and thus provide the organization with a distilled but clear reading of stakeholder concerns.

A Connoisseurship Evaluation of the Computer Curriculum, Grade 0-7 at Sacred Heart College Primary School, Johannesburg, South Africa: Present Practices and Future Direction

Authors:
Pages: 135-144

Following three years' development of the computer curriculum at Sacred Heart College Primary School, the findings of a descriptive evaluation suggest, unexpectedly, an approach to curriculum resembling a limited form of social reconstructionism, presenting opportunities to bridge divides that have historically separated pupils in South Africa. The evidence suggests that "ideal" use in this school is linked to creative use of computers as "tools" rather than to "adjunct" use or computer-assisted instruction. In particular, it demonstrates how the development of a formal curriculum may build bridges between pupils of different cultures, languages, races, and genders.

A Multi-Dimensional Program Evaluation Model: Considerations of Cost-Effectiveness, Equity, Quality, and Sustainability

Authors:
Pages: 145-160

Program evaluation has become increasingly multi-dimensional, encompassing considerations of cost-effectiveness, equity, quality, satisfaction, and sustainability. These dimensions are interrelated but not necessarily mutually compatible. For example, services rendered cost-effectively to an easy-to-reach urban population are not likely to be distributed equitably. The resulting trade-offs cannot be assessed objectively unless measures of equity, quality, and sustainability are made as explicit as indicators of cost and coverage. This article presents an algorithm for integrating these evaluative considerations and offers suggestions for refining the system of measurement and analysis.
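
The article's algorithm is not reproduced here; purely as a hedged illustration of how such trade-offs can be made explicit, the sketch below combines assumed 0-1 scores for each dimension into a weighted composite. The dimension names, scores, and weights are all assumptions chosen for the example, not values from the article.

# Generic illustration only; not the algorithm presented in the article.
from typing import Dict

def composite_score(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted average of dimension scores, each assumed already scaled to 0-1."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in weights) / total_weight

# Hypothetical program profile: cost-effective urban delivery, weaker equity of reach.
program = {
    "cost_effectiveness": 0.85,
    "equity": 0.40,
    "quality": 0.70,
    "sustainability": 0.60,
}
weights = {"cost_effectiveness": 0.3, "equity": 0.3, "quality": 0.2, "sustainability": 0.2}

print(f"Composite score: {composite_score(program, weights):.2f}")

Stating the weights explicitly, rather than leaving them implicit in a single cost or coverage figure, is what allows a trade-off such as cost-effectiveness versus equity to be examined directly.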

Évaluer l'efficacité d'un programme : une question de référents?

Authors:
Pages: 1-28

The main objectives of this study, carried out with 73 people from two organizations in the health field in the Quebec City region, were: 1) to verify whether several groups of actors associated with the same program (decision makers, participants, users) hold identical conceptions of program effectiveness; and 2) to verify to what extent the concept of program effectiveness is influenced by temporal, organizational, and contextual variables. A non-traditional approach was used: participants in the study assessed the effectiveness of 20 fictional programs on a four-point scale. Analyses of variance were the main form of analysis. The elements of convergence and divergence in the evaluations produced by the different groups of participants are presented, and explanatory factors are explored.
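
As a hedged sketch only (simulated ratings, not the study's data), the example below shows the kind of analysis of variance described: a one-way ANOVA comparing mean effectiveness ratings on a four-point scale across the three groups of actors. Group sizes and rating distributions are assumptions for illustration.

# Illustrative sketch: simulated four-point ratings, not the study's dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical effectiveness ratings (1 = not effective, 4 = very effective)
# given by each group of actors to a set of fictional programs.
decision_makers = rng.integers(1, 5, size=20)
participants = rng.integers(1, 5, size=20)
users = rng.integers(1, 5, size=20)

# One-way ANOVA testing whether mean ratings differ across the three groups.
f_stat, p_value = stats.f_oneway(decision_makers, participants, users)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")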