Treasury Board of Canada's new policy on evaluation and its accompanying directive have placed increased pressure on those conducting federal evaluations not only to quantify the impacts of programming but also to make measurable assessments of its value. However, making accurate statements about the value for money of programming can be difficult during evaluations: a number of technical and practical challenges can make common approaches infeasible.
A group of 12 evaluation practitioners and observers takes stock of the state of program evaluation in Canada. Each contributor provides a personal viewpoint, based on his or her own experience in the field. The selection of contributors constitutes a purposive sample aimed at providing depth of view and a variety of perspectives. Each presentation highlights one strength of program evaluation practiced in Canada, one weakness, one threat, and one opportunity.
Accountability requirements imposed by central agencies in government have created expectations for management to show results for resources used — in other words, "value for money." While demonstrating value for money means showing that the program has relevance and a rationale and that the program logic and theory make sense, the core of value for money lies in showing that a program is cost-effective. Unfortunately, many public programs and policies do not produce quantifiable outcomes, and this limits conclusions on value for money.
Evaluation in the context of the Social Union Framework Agreement: a case study of the National Child Benefit
The Social Union Framework Agreement (SUFA) and specific programs such as the National Child Benefit (NCB) represent joint government delivery of programming and present many challenges for evaluators. Aside from attribution (which this paper argues is not really the central issue), the essential problem in evaluating these federal-provincial-territorial initiatives is that programming is becoming both more complex and more heterogeneous. The concepts of joint planning and information sharing demand a high level of cooperation among program sponsors.
Evaluation uses questionnaires as a central data-gathering technique, yet researchers often appear unaware of recent developments in questionnaire design. This article reviews issues beyond the creation of standardized questions and the basic rules researchers find useful in data collection. These elementary guidelines remain robust for much evaluation research and should not be abandoned hastily. However, rapid change in the theory underlying questionnaire design has important implications for evaluation. Three themes illustrate these changes.
Collinearity is very common in linear regression. The common methods for diagnosing the disturbance, such as evaluating parameter instability when variables are removed from the specification, are only suggestive. Recent developments that assist in diagnosing collinear disturbances are reviewed; these include condition indexes and variance-decomposition proportions, and they are available in a number of statistical packages. Some corrective strategies are also examined.
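The condition indexes and variance-decomposition proportions mentioned above can be computed directly from a singular value decomposition of the column-scaled design matrix. The sketch below is a minimal illustration of that computation, not a reproduction of any particular statistical package's implementation; the function name and the simulated data are hypothetical.

```python
import numpy as np

def collinearity_diagnostics(X):
    """Condition indexes and variance-decomposition proportions
    for an (n, p) design matrix X (include an intercept column
    if the model has one).

    Columns are scaled to unit length, as these diagnostics require.
    Returns (condition_indexes, proportions), where proportions[j, k]
    is the share of coefficient j's variance associated with the
    k-th singular value; each row sums to 1.
    """
    Xs = X / np.linalg.norm(X, axis=0)             # scale columns to unit length
    _, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    cond_idx = s.max() / s                         # largest entry = condition number
    phi = (Vt.T ** 2) / s ** 2                     # phi[j, k] = v_jk^2 / mu_k^2
    props = phi / phi.sum(axis=1, keepdims=True)   # normalize rows to proportions
    return cond_idx, props

# Hypothetical example: two nearly collinear regressors.
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)                # almost a copy of x1
X = np.column_stack([np.ones(n), x1, x2])
cond_idx, props = collinearity_diagnostics(X)
```

A large condition index (a common rule of thumb flags values above roughly 30) accompanied by two or more coefficients with high variance proportions on the same singular value signals a damaging collinear relation among those columns.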