Incorporating Stakeholders in Standard Setting: What's at Stake?
In this article, we discuss the roles played by participants and the processes involved in a provincial standard-setting exercise for a large-scale assessment of student skills. The perspectives offered are those of a policy maker outside the exercise and of an insider participant who describes his experience of the standard-setting exercise from within. A number of issues involved in selecting delegates, in empanelling stakeholder representatives, and in designing standard-setting exercises are considered. Panelists are characterized as anonymous jurors rather than as subject-matter experts, and the exercise itself as one that must reconcile competing organizational and social values in defining standards as points for educational decision making. We conclude by describing alternative notions of representativeness, stakes, and significance.
Critical Perspectives on Educational Evaluation
This article examines what a critically informed orientation to evaluation in formal and non-formal education settings entails. A substantial critique of the technical rationality pervading conventional approaches to evaluation in education is illuminated by comparing functionalist models (exemplified in competency-based education and behaviourist variations on that theme) with participatory approaches, and through a corollary account of the way system imperatives intrude on lifeworld values. However, the author argues that a commitment to critical evaluation in education calls for systematic engagement with conventional, mainstream initiatives as well as with alternative approaches. Some guiding principles, consistent with the emancipatory aims of a critical orientation, are identified in making the case for the feasibility and necessity of critical evaluation in education. Questions are raised about the efficacy of the continuing debate on quantitative versus qualitative methods and about the effects of postmodern discourse on critical evaluation.
Stakeholder Involvement in Educational Evaluation: Quebec's Commission d'évaluation de l'enseignement collégial
The introduction of program evaluation in Quebec's junior colleges - known as CEGEPs or Collèges d'enseignement général et professionnel - has followed two general principles. Those called to participate in evaluation are asked to specify the objectives of the programs being evaluated. The Quebec college teaching evaluation board, the Commission d'évaluation de l'enseignement collégial (CEEC), proposes that all who have active roles in the management or teaching of CEGEP programs should participate in evaluation. The evaluation process should be both results-oriented and participative. Those who should be most involved in the evaluation exercise - the CEGEP teachers - have for the most part refused to participate. This is explained, according to the CEEC, by the "new and ill-defined nature of the task." However, the CEGEP teachers are also perceived as being resistant to evaluation, which the CEEC evaluators hope will change over time. In the evaluation of the Day Care Education Program, for example, the evaluators focused primarily on the objectives of the full-time diploma (DEC) programs rather than on those of the part-time (AEC) programs. As a result, the evaluation failed to identify some of the real problems of the AEC programs. It is argued that a theory-driven approach to evaluation would be more successful in dealing with the AEC programs' problems. A bottom-up approach would have mobilized more stakeholder involvement than the top-down approach actually employed.
A Performance Measurement and Evaluation Framework for Continuing Education
This article reviews the literature and sets out a framework for performance measurement (PM) and evaluation within post-secondary continuing education. In the framework, PM provides ongoing data collection and decision support, placing outputs, outcomes, and ratios as performance indicators in a matrix, while evaluation serves to periodically assess the PM system. The result combines the continuing benefits of PM with the capacity-building benefits of evaluation. An example is given, and the framework is judged against the evaluation standards of accuracy, feasibility, utility, and propriety.
Evaluating the Learning Opportunities Program: Some Challenges and Lessons Learned
Recent research indicates that, with appropriate instruction, accommodation, and support, students with learning disabilities can successfully meet the academic and social challenges of university. In 1998, the Learning Opportunities Program was launched at the University of Guelph to promote the academic and workplace success and social-emotional adjustment of students with learning disabilities. This multi-faceted program, funded by the Learning Opportunities Task Force of the Ontario Ministry of Training, Colleges and Universities, includes a comprehensive and thorough evaluation component. This article describes the program and presents the details of the planned evaluation. Also included is a discussion of some of the challenges faced by the evaluation team during the initial phase of the program.
Prenatal Education Class Attendance with Additional Insights Provided by a Geographic Information System
In reaction to an unexplained 25% drop in prenatal class attendance, the Regina Health District (RHD) set out to determine the proportion of women not attending, where these women live, and the reasons for not participating. The study included a cross-sectional survey of 1157 new mothers and the use of MapInfo software. The results showed that many of those who attend RHD programs live in low-income areas, a targeted population. Yet there was evidence of a significant group of mothers in these same areas who were still not attending. Strategies for better serving first-time mothers were created by examining characteristics of the clusters of non-attenders and potential barriers, such as time of day and ease of geographic access. Methods for promoting the prenatal classes were also updated based on survey feedback.
Reflections on Program Evaluation, 35 Years On
Editor's Introduction: Educational Evaluation at the Turn of the Millennium
System-Wide Program Assessment with Performance Indicators: Alberta's Performance Funding System
This article outlines the development and implementation of performance indicators (PIs) and performance funding in Alberta's higher education system. Subsequently, the model of organizational functioning that underlies performance funding is made explicit. Finally, this article explores the effectiveness of performance funding at increasing goal attainment based upon the literature and our emerging experience.
Predictors of Educators' Valuing of Systematic Inquiry in Schools
This exploratory survey study of 310 educators was conducted to investigate what variables best predict educators' attitudes toward systematic inquiry in schools. Eight variables were selected as potential predictors of educators' self-reported views about applied research utility and relevance, their personal ability to do research, the need for teacher involvement in systematic inquiry, and teacher training in research methods. Significant proportions of the variance in the dependent variables were explained by prior participation in research and personal teacher efficacy. Years of experience teaching, perceived organizational learning capacity of respondents' schools, and the panel in which respondents taught had modest explanatory value. Results are discussed in terms of our knowledge and understanding of teacher receptiveness to systematic inquiry in schools and implications for research and practice.
Evaluability Assessment of Staff Training in Special Care Units for Persons with Dementia: Strategic Issues
Ontario's Strategy for Alzheimer's Disease and Related Dementia (1999), like many similar initiatives, calls for increased staff training for those who work with persons suffering from dementia. Evaluators prefer to have evaluation practices prepared at the design and curriculum development phase of educational programs, but to what extent can the evaluability of staff training be assessed? An evaluability assessment of staff training in special care units for persons with dementia was completed, applying a four-step model of training evaluation. The data showed that there were barriers to best practices. The discussion reflects on eliminating these barriers through strategic management.
Priorities and Values in Accountability Programs
The authors examine relationships among three principal elements of educational accountability - assessment of achievement, collection of educational indicators, and setting of educational policy - using a Venn diagram to portray interactions among the three. Their main argument rests on the importance of understanding value perspectives in accountability. Because different value positions go largely unexamined, important data remain uncollected, and much of what is collected remains underanalyzed. As a consequence, important issues remain unaddressed, and accountability fails to serve the needs of education.