Assessing educational programming from a social marketing perspective: an illustration
This article considers the value of a social marketing perspective as a coherent and unifying framework where a large number of separate initiatives must be reviewed within the context of a single evaluation project. It presents a social marketing perspective as applied to evaluation and briefly illustrates the application of social marketing principles as program assessment standards in a review of public education programming in the Canadian Cancer Society.
Evaluation within the French regions and the European Commission: its role in the context of shared responsibilities
With the example of the French government before them, European and regional administrations are establishing policy evaluation procedures, sometimes jointly and sometimes competitively. The interest these administrations have in evaluation is related to their relative youth. They are facing a double necessity: developing a formula for action appropriate to the new public entity whose interests they have to protect, and defending their own point of view vis-à-vis the machinery of the national administration, which remains powerful and takes a dim view of the emergence of competing definitions of the public interest. There are frequent negotiations between the three levels of authority in the context of arrangements referred to as partnership policies, which are increasing. Few projects undertaken by the regions and the European Community are not jointly financed by the French government. The contractual approach to defining common goals and resources is often accompanied by an obligation to conduct an evaluation.
Evaluating a three-day exercise to obtain convergence of opinion
A department of the Government of Canada wished to identify the most serious difficulties associated with partnerships between the department and other agencies and to develop ways of improving future partnerships. To accomplish this task, a workshop was organized for a group of senior staff from the department who were familiar with these partnerships. This study reports the process employed in the very limited time available and evaluates its effectiveness.
Use of a stakeholder advisory group to facilitate the utilization of evaluation results
This article describes the steps taken by the authors to establish a stakeholder advisory group for a major evaluation project. The authors argue that use of the advisory group enhanced utilization of the evaluation project's findings.
A level-three evaluation of five marketing classes
This study reflects the movement in American business toward measurement of education's impact on business practice. The evaluation measured the extent to which concepts taught in five marketing classes were put into practice, and determined concept implementation factors. Both quantitative and qualitative data were gathered. Using a t-test to compare pre- and post-class scores, the author found that 16% of the behaviors had a significantly changed score. Though modest, the findings suggest a real transfer of learning to the workplace after only limited exposure to concepts. Overall, graduates were more likely to clearly identify research questions, determine the potential buy-in, and select more appropriate methodologies. Factors that affected implementation include resources, management support, clear objectives, and openness to new ideas. Prescriptive implications include reinforcing concepts and, more importantly, educating management about the importance and impact of using appropriate marketing techniques.
Evaluating an Indian and Métis education staff development program
The purpose of this study was to provide formative information to the Indian and Métis Education Branch of Saskatchewan Education, Training and Employment on its Staff Development Program. The evaluation was designed in cooperation with the Indian and Métis Education Branch and with input from the provincial Indian and Métis Education Advisory Committee. Data for the study were collected by means of questionnaires, telephone surveys, group interviews, on-site visits, and document analyses. The evaluation provided information on the extent to which the program has met its objectives.
Qualitative evaluation as symbolic policy making: evaluating work training programs for Quebec welfare recipients
Qualitative program evaluation usually insists on the need for "impartial" representation of stakeholder views, whereas "symbolic policy making" favors program evaluation conclusions that support existing programs. These contradictory pressures can lead to evaluations that favor some stakeholder views at the expense of others. An example of a qualitative evaluation where this occurred is the Quebec government program Mesures de relance, aimed at helping welfare recipients re-enter the labour force. Several stakeholder groups were involved in the program, but only one group, program participants, was thoroughly consulted. It can be argued that employers, whose cooperation in providing work assignments was vital to the program's success, could have provided useful input into the evaluation. However, their inclusion might have impaired the evaluation's ability to arrive at the desired symbolic conclusions: that if certain improvements were made, the program could be successful. This example suggests that qualitative evaluation is only useful to the extent that it actually does provide impartial representation of stakeholder groups, and especially of those that are important to the success of the program. The article is followed by a brief critique by Dale H. Poel, who argues that evaluators, rather than the qualitative method itself, are the source of political bias. The author appends a brief rejoinder.
Individual choice and sources of error: idiographic evaluation of psychosocial rehabilitation
This article briefly examines experimental approaches and sources of error in the evaluation of psychiatric or psychosocial rehabilitation programs: pre-post quasi-experimental designs, the selection of groups that are extreme on criterion measures, the problem of self-selection and differential attrition, the absence of no-treatment controls and nonblind assessment, and individual variation in response to poorly defined treatments. These factors become more problematic in evaluating ecologically based treatment that involves measuring individual progress toward self-identified goals in individually selected environments. The psychosocial rehabilitation focus on individuals functioning in their natural environments has not yet met with an equally ecologically valid research methodology. This article describes an idiographic evaluation approach that characterizes individual responses to treatment and that combines small-group methodology, using the group as its own control, to provide a practically attainable assessment of treatment effectiveness in applied rehabilitation settings.
An idiographic evaluation of an integrated, team case management program: hospitalization, client problems and stages of change
The authors describe the evaluation of an integrated case management and addictions treatment program for individuals who face the dual challenge of coping with both mental illness and substance abuse problems: the dually diagnosed. This program, modeled after the New Hampshire Specialized Services Program, combined assertive case management and a four-stage addictions treatment model in which individuals are conceived as moving through the processes of engagement, persuasion, active group treatment, and relapse prevention. The method consisted of a quasi-experimental A-B design, with an idiographic analysis of patterns of hospitalization across small groups of individuals. Replication of the effect across these groups was accomplished as a natural byproduct of continuous enrollment into the program. Two new measurement devices were developed to assess functioning and to determine the level of change accomplished by the participants: the Client Problems Checklist and the Stages of Change rating scale, respectively. Reliability data are reported for each device. Results indicate that during its first year the program was successful in reducing hospitalization, increasing commitment to substance abuse treatment, and improving functioning in several life areas. Program participants expressed high levels of satisfaction. Further component analysis research is needed to better define the effects and appropriate targets for treatment.
Program design can make outcome evaluation impossible: a review of four studies of community economic development programs
Between 1981 and 1990, Employment and Immigration Canada evaluated three community economic development programs: the Community Employment Strategy, the Local Employment Assistance and Development program, and the Community Futures program. In retrospect, one can see that these evaluations were hindered by two problems of program design: there was no replicable treatment, and the broad, shallow interventions were unlikely to have measurable effects in an environment "noisy" with uncontrolled factors. This article reviews the evaluation studies of these three programs, the lessons learned from each study, how subsequent programs reflected those lessons, and how the methodological limitations of the studies constrained what was learned.
Factors influencing the utilization of results: a case study of an evaluation of an adult day care program
With increasing emphasis on outcome research in health care, there is concern about the poor utilization of evaluative research findings at the organizational level. This article examines the factors that influence utilization of results at a local agency and program level. The selected findings of an evaluation of one small adult day-care program are used as a case study to illustrate the issues associated with these utilization factors. Three broad categories are addressed in the discussion: structural factors, factors associated with the research process itself, and factors related to the organizational climate of the agency.
Evaluability assessment in health care: an example of the patient care and outcome process
Evaluability assessment is a critical first step toward successful program evaluation. Programs aimed at making difficult and significant changes to important health care services must be open to both the encouragement and the critical review that follow systematic evaluation efforts. The article introduces the principles and values of evaluability assessment; provides an example of the application of this evaluation tool within a dynamic, rapidly changing health care environment; and identifies some lessons learned as a result of conducting the evaluability assessment.
Evaluability assessment of a community-based program
Evaluability assessments have increased the usefulness and meaningfulness of evaluation studies through in-depth program analyses to determine which program elements are amenable to further evaluation and which are not. This is especially important for community-based programs, whose structure, activities, and goals are often too broadly defined to allow one to accurately measure their effects. An example of such a program is the Rural Quality of Life (RQL) program, designed to address the human consequences of the present rural crisis in Saskatchewan. The evaluability assessment found that the program's structure and the description of its activities and goals required additional clarification before further evaluation research could be considered. The recommendations were directed toward further program development and planning for future evaluations. Overall, the process of the evaluability assessment was helpful to both the program staff and the evaluators involved in determining the present nature and functioning of the RQL program in a practical way.