Guest Editor's Comment: Achievement Assessment Programs in Reading and Writing: What Lessons Can Be Learned?
Sample Comparability in the Surveys of the International Association for the Evaluation of Educational Achievement
This research and practice note offers a perspective on interpreting student results from the International Association for the Evaluation of Educational Achievement's TIMSS 2003 survey, focussing specifically on the sampling plan for participating schools and students. The observations can also be extended to other surveys of this kind, including those whose focus is not on mathematics.
School Determinants of Achievement in Writing: Implications for School Management in Minority Settings
This study identified school factors that determined writing achievement for 13- and 16-year-old Francophone students in minority settings (Manitoba, Ontario, New Brunswick, and Nova Scotia) and a majority setting (Quebec) (N = 5,700). Factor analysis retained three factors, which were then subjected to binary logistic regression against students' academic performance: Human and Material Resources and School-Community-Family Relations; Principal's Vision and Beliefs; and Rules and Procedures.
Large-Scale Writing Assessments: The Link Between the Holistic Score and the Components of Writing
Holistic scores generated by large-scale writing assessments should serve as indicators for decision makers. But to what extent do holistic scores represent the different components of writing? Based on a sample of essays written by 3,107 13- and 16-year-old Canadian students, analyses show that six writing components are related to the holistic score and that these relationships do not vary by language (essays written in English versus essays written in French). The results allow for a discussion of how holistic scores should be interpreted in the context of writing assessments.
Understanding the challenges associated with conducting secondary analysis of large-scale assessment data is important for identifying the strengths and weaknesses of various statistical models, and it can lead to the improvement of this type of research. The challenges encountered in the analysis of assessment data from subpopulations may be of particular value for this purpose. To date, few studies have discussed the problems associated with the secondary analysis of large-scale assessment data.
Design and Development Issues in Provincial Large-scale Assessments: Designing Assessments to Inform Policy and Practice
Over the past four decades, there has been much debate on key sources of data in evaluating education, determining school effectiveness, and providing evidence to inform accountability and education planning. Entangled in this debate has been the extent to which large-scale assessments of learning provide valid evidence about the quality of schooling and education in Canada and how they can be used to inform education practice and policy. This article discusses five issues in large-scale assessments that are key for their usefulness and for making valid inferences.
With the implementation of the Ontario Secondary School Literacy Test (OSSLT) in 2002, Ontario became the first province in Canada to require successful completion of a large-scale, high-stakes literacy test for high school graduation. We explored and analyzed students' perceptions of this testing program in terms of their preparation for the test, the test's impact and value, and the potential influence of such a testing program on students' views about literacy.
Researchers and teachers of reading and writing can assess from different viewpoints or from a common one. This article contrasts two viewpoints, a responsive view for writing and a developmental view for reading, and then uses the responsive view to show a way of bringing reading and writing together. In general, the article advocates assessment for learning, as distinct from assessment of learning. Overall, my goal is to examine whether our uses of assessment and evaluation derive from our beliefs.