
Volume 12, Spring 1997

The Effectiveness of Community Employment Programs for Social Assistance Recipients: An Evaluation of the City of Winnipeg's Community Services Programs

Authors:
Pages: 71-86

The City of Winnipeg's community services programs (CSPs) are examples of the "work experience" approach to helping social assistance recipients find employment. Like other evaluations of work experience programs, this assessment finds the CSPs effective and cost-beneficial, in the short run, in reducing dependency on social assistance. Unlike most program assessments, however, this evaluation measures participants' length of involvement in the program and finds there is an optimal duration of involvement. A quasi-experimental design is used to assess the program's net impact on participants' subsequent length of stay on assistance. Self-selection bias is assessed, and multiple-regression analysis is used to control for differences between program and comparison group members and to determine the unique impact of program participation and length of participation.
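
A rough sketch of the kind of regression the abstract describes (not the authors' actual model): subsequent months on assistance regressed on a participation indicator, length of participation, and a few controls, with a quadratic duration term as one way such a model can pick up an optimal length of involvement. All variable names and data below are hypothetical.

```python
# Hypothetical illustration only; variables, covariates, and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "months_on_assistance": rng.poisson(12, n),   # outcome: subsequent stay on assistance
    "participated": rng.integers(0, 2, n),        # 1 = CSP participant, 0 = comparison group
    "months_in_program": rng.integers(1, 13, n),  # length of program involvement
    "age": rng.integers(18, 60, n),               # example covariates used as controls
    "prior_months": rng.poisson(18, n),
})
df.loc[df["participated"] == 0, "months_in_program"] = 0  # comparison group has no program time

# The quadratic duration term lets the fitted curve turn, which is one way
# a model can suggest an "optimal" duration of involvement.
model = smf.ols(
    "months_on_assistance ~ participated + months_in_program"
    " + I(months_in_program ** 2) + age + prior_months",
    data=df,
).fit()
print(model.summary())
```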

Through the Looking Glass: What Happens when an Evaluator's Program is Evaluated

Authors:
Pages: 87-116

This paper reports on an occasion when the author, an experienced evaluator, became the client of an evaluation. This reversal led the author to reflect on the role of the client in an evaluation. The article records the thoughts and impressions of a client as his program was being evaluated. The discussion section focuses on the respective roles and obligations of the client and the evaluator.

The Front-End Challenge: Five Steps to Effective Evaluation of Community-Based Programs

Authors:
Pages: 117-132

Evaluation research requires ample front-end work. Preparatory work is essential to establish an environment conducive to assessment and is likely to contribute to its success. This article discusses the front-end challenge of developing an evaluation for a community-based program. Highlights of an evaluation protocol from one community support agency illustrate the significance of front-end work. Five front-end challenges are considered: understanding the reasons for undertaking the evaluation, securing resources, establishing credibility and creating enthusiasm, developing consensus about goals and objectives, and observing and fine-tuning the program.

Developing First Nations Child Welfare Standards: Using Evaluation Research within a Participatory Framework

Authors:
Pages: 133-148

Program evaluation in Aboriginal communities requires a participatory approach that recognizes the importance of culture and promotes mutual learning. Despite the articulation of principles and models that support this approach, the implementation of evaluation studies often reflects a more conventional model stressing the role of the expert and a deductive approach to knowledge development. The evaluation research project summarized in this article was designed and implemented to develop culturally appropriate child and family service standards in First Nations communities using a community-based participatory research model. Process and outcome benefits were achieved by using multiple focus group interviews in the first phase and adapting these strategies in a combined feedback and consultation phase following preliminary data analysis.

Organizational constraints on the introduction of program evaluation: the "self-evaluating" organization reconsidered

Authors:
Pages: 149-166

This case study suggests that organizational constraints on the introduction of program evaluation often have more to do with problems of organizational learning than with the "political" problems dealt with by Wildavsky (1979) and other students of the evaluation/organization interface. Faced with expanding needs and declining resources, the Montreal Jewish Family Services Social Service Centre attempted to introduce program evaluation as a tool for ensuring more efficient resource allocation. Instead of reflecting the needs of resource allocation, however, the evaluation framework the agency developed was based on compliance-oriented concepts of quality assurance. As was discovered during the first attempt to apply the model, the framework had to be substantially modified in order to serve as a useful tool of program evaluation. The case study thus suggests that the introduction of program evaluation is analogous to the introduction of other kinds of technological change: such changes must be carefully tailored to the needs and constraints of the organizational settings where they are being introduced.

Evaluating evaluation in the European Commission

Authors:
Pages: 1-18

There is an increasing demand for the evaluation of expenditure and regulatory measures undertaken by the European Union in order to improve accountability and achieve "value for money" objectives. At the most general level, the task of organizing evaluation systems for these programs falls to the European Commission. Historically, the commission has focused on developing policies rather than monitoring or delivering them. With the maturing of certain policy areas, the commission's role is shifting in the direction of review and evaluation. From a systemic point of view, the management of EU policy presents particularly severe challenges in the area of evaluation. There are multiple actors located at local, national, and supranational levels; divergent administrative cultures and practices; variable quality of information, records, and capabilities; an attenuated system of reporting; and unclear lines of accountability. Joint funding of some programs creates additional problems by entangling program impacts and the audit purposes and management of national and EU institutions, respectively. The commission has yet to come to a view on the parameters defining commonality and diversity in evaluation. In order to improve the situation, the commission has taken a number of initiatives, among them the setting up of an expert working group on the evaluation process. This article reports, in general terms, the findings of that group for 1994–95.

Evaluation of the Hartmobile Health Promotion Program

Authors:
Pages: 19-29


Using self-report measures to lower the cost of population heart health assessment in New Brunswick

Authors:
Pages: 31-46

This paper describes the development and feasibility testing of a multivariate equation that uses self-report information rather than physiological measures to estimate coronary heart disease (CHD) risk in a population sample of New Brunswick adults with no reported history of heart disease. The multivariate Framingham risk prediction model, which uses a variety of self-report and physiological measures to estimate CHD risk, was first used to calculate CHD risk in the population sample. Regression analysis was then employed to identify a linear combination of "self-reportable" variables capable of closely approximating the population risk indices derived using the Framingham model. To test its utility, the self-report equation derived from the regression analysis was applied to a small telephone survey data set drawn from a second random sample of adult New Brunswickers with no reported history of heart disease. When applied to the telephone survey data, the self-report equation yielded CHD risk estimates consistent with those from the first population sample. We concluded that the development of a self-report-based methodology for assessing the CHD risk or heart health of target populations is highly feasible. Owing to the use of self-report information, as opposed to the physiological measures employed in conventional CHD risk prediction models, a self-report-based model could significantly reduce the cost of assessing CHD risk in the target populations of community-based heart health programs. Although further research will be necessary to develop a complete self-report-based CHD risk prediction model, the results of the present study clearly indicate that this line of research has significant potential to enhance the evaluation of heart health promotion programs.
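
A minimal sketch of the two-step approach the abstract describes, under assumed variable names: risk scores from the full model are first attached to the sample, and ordinary least squares then finds a linear combination of self-reportable items that approximates those scores. The predictor set, coefficients, and data below are invented for illustration and are not the Framingham or New Brunswick specification.

```python
# Hypothetical data; the real study used Framingham-derived risk scores
# and New Brunswick survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
sample = pd.DataFrame({
    "age": rng.integers(25, 75, n),              # self-reportable predictors (assumed set)
    "smoker": rng.integers(0, 2, n),
    "diabetic": rng.integers(0, 2, n),
    "treated_hypertension": rng.integers(0, 2, n),
})
# Stand-in for the CHD risk each respondent would receive from the full
# risk model (which also uses physiological measures).
sample["full_model_risk"] = (
    0.002 * sample["age"]
    + 0.05 * sample["smoker"]
    + 0.04 * sample["diabetic"]
    + 0.03 * sample["treated_hypertension"]
    + rng.normal(0, 0.01, n)
).clip(0, 1)

# Fit the self-report-only approximation and check how closely it tracks
# the full-model risk scores.
approx = smf.ols(
    "full_model_risk ~ age + smoker + diabetic + treated_hypertension",
    data=sample,
).fit()
print(approx.rsquared)

# The fitted equation could then be applied to telephone-survey data that
# contains only these self-reported items.
```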

A Historical Perspective on Federal Program Evaluation in Canada

Authors:
Pages: 47-70

Federal program evaluation has been a central force in defining the history of evaluation activities in Canada. This article reviews the key policy issues that underlie federal program evaluation by analyzing the historical context within which the federal evaluation function evolved. Because federal evaluation has been linked to a perceived need for accountability, and because evaluation has been expected to play an important role in the policy making, planning, and budgeting of the government (McQueen, 1992), a key issue in the history of program evaluation in Canada is the tension between promised outcomes and actual performance. The authors identify and discuss factors that shaped the practice of federal program evaluation, such as federal policy issues, political actors, organizational structures, and political and governmental changes, in order to generate a more complete understanding of the current state of program evaluation and its future within the federal government.