Tracing the Pathways of Evaluation's Influence on Health Technology Utilization
In Quebec, over the past ten years, many medical technology assessment reports have been submitted to decision-makers in government agencies, hospitals, and professional associations. This article traces the way evaluation may influence decision-making. Five topics are discussed: 1) the imperfect, reciprocal relationship between evaluation and technology utilization, 2) the difficulty of moving theoretically and methodologically between central and local perspectives, 3) linkages between evaluation and health policies, 4) linkages between health policies and technology utilization, and finally, 5) follow-up strategies for assessing the influence of evaluation on technology utilization.
Toward a Model for Evaluating Technological Change in Organizations
The purpose of the following paper is to define the basis for an evaluation approach viewed as a platform for technological change management strategy in organizations. The success of technological implementation largely depends on the capacity of the organization to integrate the various aspects of change as they evolve. It is important to develop information collection and analysis tools in the process, in order to make the necessary technical and operational adaptations and strategic choices for the long-term management of change. It is from this perspective that the author suggests an overall evaluation approach to technological change.
The Formative Evaluation of Years 1 and 2 of a Pilot Multicultural/Anti-Racist Educational-Leadership Program
This paper describes the evaluation approach, techniques, and instruments adopted during the first two years of a three-year multicultural/antiracist educational leadership program carried out in six different school boards in four Canadian provinces involving approximately 200 secondary students. The purpose of the evaluation during these first two years was primarily to assist the program coordinators, teachers, and students to plan effectively, monitor progress, and fine-tune their training programs in order to more successfully achieve their objectives.
Facilitating Instrumental Utilization for Policy Development in a Multi-Site, Inter-Ministerially Sponsored Human Service Program
This article focuses on the instrumental utilization of evaluation findings to assist in the development of program policy, including a brief review of some of the literature in this area. The context from which examples are drawn is an innovative human service program/policy that involves multiple funding ministries and participating agencies. A number of instances of the instrumental utilization of findings from an evaluation of multi-site policy and program implementation are discussed. In addition, evaluation practices that served to enhance utilization are presented, including pre-evaluation activities, activities that occurred during the course of the evaluation, and activities performed after delivery of the final report.
Quality and Evaluation in a Comprehensive Health Organization
An innovative approach to delivering health care is being developed in several Ontario communities. The Ontario Ministry of Health has been guiding and assisting a number of communities as they pursue development of the comprehensive health organization (CHO) concept. The CHO initiative has been evolving over the past five to six years and is driven primarily by enthusiasm and work at the grassroots community level. This short report describes the initial framework for quality and evaluation in a CHO. Given that there are no CHOs operational as yet (the first two are scheduled to open in northern Ontario in 1995), there is tremendous opportunity to develop a comprehensive approach to quality and evaluation that uses a wide range of tools and methodologies.
Sensitivity Analysis in Outcome Evaluations: A Research and Practice Note
Every evaluation study uses data that are uncertain to some degree. Therefore, the analyst and the decision-maker need to know how much the outcome of the evaluation varies given the plausible variation in uncertain data inputs. That is, how sensitive is the outcome of the analysis to a particular input variable? This note discusses what characteristics make for sensitivity, what techniques to use to clarify sensitivity (especially graphic techniques), and how to interpret the results.
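The note's core question can be illustrated with a minimal one-at-a-time sensitivity sketch (the program, inputs, and ranges below are hypothetical, not drawn from the article): vary each uncertain input across its plausible range while holding the others at baseline, and record the resulting swing in the evaluation outcome.

```python
def outcome(cost_per_client, success_rate, clients_served):
    """Hypothetical outcome measure: net benefit of a program."""
    benefit_per_success = 5000.0  # assumed fixed dollar benefit per success
    return clients_served * (success_rate * benefit_per_success - cost_per_client)

baseline = {"cost_per_client": 1200.0, "success_rate": 0.35, "clients_served": 400}
plausible = {  # low and high plausible values for each uncertain input
    "cost_per_client": (900.0, 1500.0),
    "success_rate": (0.25, 0.45),
    "clients_served": (300, 500),
}

swings = {}
for name, (low, high) in plausible.items():
    results = []
    for value in (low, high):
        inputs = dict(baseline, **{name: value})  # perturb one input at a time
        results.append(outcome(**inputs))
    swings[name] = max(results) - min(results)

# Inputs with the largest swing dominate the uncertainty in the outcome.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: outcome swing = {swing:,.0f}")
```

Sorting the swings this way is the tabular equivalent of the "tornado diagram" often used to present such results graphically.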
Application of Program Logic Model to Agricultural Technology Transfer Programs
Program logic models have provided a method of schematically presenting program objectives and the underlying cause-effect relationships between activities and outcomes. This article presents a model that explicitly recognizes the ultimate societal-level benefits, and also accommodates identification of outputs, performance indicators, and targets for all levels of objectives. The model is illustrated with a hypothetical agricultural technology transfer program, and some anecdotal observations are presented from the author's work with public-sector program managers and staff.
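The structure the abstract describes can be sketched as a simple data layout (the levels, objectives, and targets below are invented for illustration, not taken from the article): each level of objectives carries its own outputs, performance indicators, and targets, chaining activities up to ultimate societal-level benefits.

```python
# Hypothetical logic model for an agricultural technology transfer program.
logic_model = [
    {
        "level": "activities",
        "objective": "Deliver on-farm demonstrations of a new irrigation method",
        "outputs": ["demonstrations held"],
        "indicators": ["number of demonstrations"],
        "targets": ["40 demonstrations per year"],
    },
    {
        "level": "immediate outcomes",
        "objective": "Farmers adopt the irrigation method",
        "outputs": ["adopting farms"],
        "indicators": ["adoption rate among attendees"],
        "targets": ["25% adoption within two years"],
    },
    {
        "level": "societal benefits",
        "objective": "Reduced agricultural water consumption",
        "outputs": ["water savings"],
        "indicators": ["litres saved per hectare"],
        "targets": ["10% regional reduction"],
    },
]

# The cause-effect chain reads bottom-up: each level's objective is the
# means to the next level's objective.
for lower, higher in zip(logic_model, logic_model[1:]):
    print(f"{lower['level']} -> {higher['level']}")
```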
Impact Assessment of the Employment Action Pilot Project Offered to Immigrants on Welfare
The study measured the impact of the Employment Action Pilot Project for employable immigrants on welfare in Edmonton, Alberta. Participants were expected to achieve greater and lasting independence from welfare, as the project was intended to reduce barriers to employment in the areas of language skills, computer literacy, life skills, and work experience. The authors contend that the reduction in personal barriers to employment as achieved through employability programs is outweighed by economic incentives and disincentives. While a person is on welfare, obtainable wage levels in comparison to welfare benefit levels strongly influence the kind of opportunities taken and whether the person leaves welfare. In contrast, when one is accessing welfare, immediate financial need is seen as the most important factor, regardless of wage levels in the marketplace.
On the Difference Between Reliability of Measurement and Precision of Survey Instruments
Despite the importance of assessing the reliability of evaluative measures, there is a confusing array of conceptual schemes and coefficients for calculating "the reliability" of an item or scale. In particular, reliability may be conceived of and estimated from a true-score model or a sampling-precision perspective. The former model is associated with such estimates as parallel or alternate forms reliability, split-half reliability, and coefficient alpha; the latter with standard error, coefficient of variation, and confidence intervals for observed scores. This review clarifies the distinction between the two models. The basic theoretical models for each approach are developed and illustrated using data from the author's work on measuring organizational climate. As a result, evaluators should be better able to judge the meaning of the reliability information provided in reports, and to calculate reliability in situations requiring some assessment of the quality of their data.
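The contrast the review draws can be made concrete with a small sketch (the scale and respondent data below are invented, not the author's organizational-climate data): a true-score estimate, coefficient alpha, sits alongside sampling-precision statistics, the standard error and a confidence interval for observed scores.

```python
import statistics

# Each row is one respondent's answers to a 4-item climate scale (1-5).
items = [
    [4, 4, 5, 4],
    [2, 3, 2, 3],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 2, 1],
]

k = len(items[0])
totals = [sum(row) for row in items]
item_vars = [statistics.variance([row[j] for row in items]) for j in range(k)]
total_var = statistics.variance(totals)

# True-score model: Cronbach's coefficient alpha (internal consistency).
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Sampling-precision model: standard error of the mean total score and a
# rough 95% confidence interval (normal approximation).
n = len(totals)
mean_total = statistics.mean(totals)
se = statistics.stdev(totals) / n ** 0.5
ci = (mean_total - 1.96 * se, mean_total + 1.96 * se)
print(f"alpha = {alpha:.2f}, mean = {mean_total:.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Note that the two numbers answer different questions: alpha describes how consistently the items measure one construct, while the standard error describes how precisely the observed scores estimate the group mean.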