John Mayne

Fall

Performance studies: the missing link?

Authors:
Pages: 201-208

Typically, a good measurement strategy to support results-based management includes both ongoing performance measures and periodic evaluations. It is argued that this set of measurement tools is too limited, not infrequently resulting in ongoing performance measures that are costly yet of limited use. It is proposed that, in addition to ongoing performance measures and periodic evaluations, an alternative measurement tool called a performance study should be used in many situations, and further, that in a number of circumstances performance studies should replace specific ongoing performance measures.

Special Issue

Guest Editor's Introduction (Thematic Segment: Growing Evaluation: Are We Missing the Boat?)

Authors:
Pages: 43-45

Studies are not enough: the necessary transformation of evaluation

Authors:
Pages: 93-120

The authors contend that developments in the public and not-for-profit sectors over the last decade or so have profound implications for the profession of evaluation, implications that are not being adequately addressed. They argue that evaluation needs to transform itself if it wishes to play a significant role in the management of organizations in these sectors. Beyond traditional evaluation studies, evaluators working in public and not-for-profit organizations need to (a) lead the development of results-based management systems, (b) strengthen organizational learning and knowledge management using this and all other available evaluative information, and (c) create analytic streams of evaluative knowledge. Failing to grasp these challenges will result in a marginalized and diminished role for evaluation in public and not-for-profit sector management.

Spring

Reporting on outcomes: setting performance expectations and telling performance stories

Authors:
Pages: 31-60

Results, and more particularly outcomes, are at the centre of public management reform in many jurisdictions, including Canada. Managing for outcomes, including setting realistic outcome expectations for programs and credibly reporting on what was achieved, is proving difficult, perhaps not unexpectedly, given the challenges faced in evaluating the outcomes of public programs. This article discusses how the use of results chains can assist in setting outcome expectations and in credibly reporting on the outcomes achieved. It introduces the concepts of an evolving results-expectations chart and of telling a performance story built around the program's results chain and expectations chart.

The lay of the land: evaluation practice in Canada today

Authors:
Pages: 143-178

A group of 12 evaluation practitioners and observers takes stock of the state of program evaluation in Canada. Each of the contributors provides a personal viewpoint, based on their own experience in the field. The selection of contributors constitutes a purposive sample aimed at providing depth of view and a variety of perspectives. Each presentation highlights one strength of program evaluation as practiced in Canada, one weakness, one threat, and one opportunity. It is concluded that evaluators possess skills that other professions do not offer; they are social and economic researchers versed in using empirical data collection and analysis methods to provide a strong factual foundation for program and policy assessment. However, program evaluation has not acquired an identity of its own and, in fact, has tended to neglect key evaluation issues and to lose emphasis on rigour. Today's program evaluation environment is dominated by program monitoring, the lack of a program evaluation self-identity, and insufficient connection with management needs. But evaluation is not without opportunities: results-based and outcome-based management, advocacy and partnership efforts, individual training and development, and bridging between program management and policy development represent some of them. First, however, evaluators must define themselves in order to communicate to others what their specific contribution is likely to be. The article concludes with implications for the practice of evaluation in Canada and the blueprint of a workplan for evaluators individually and collectively, in their organizations and in their professional association.

Spring

Ensuring quality for evaluation: lessons from auditors

Authors:
Pages: 37-64

This article addresses ways to enhance the quality of evaluations with weak designs through a variety of quality assurance practices. Many types of evaluations are restricted in the designs they can use. Evaluations of development programs with projects widely dispersed across different countries are often a case in point, where the design relies on visits to a number of sites, interviews with staff and stakeholders, and reviews of documentation to draw conclusions. These interview-based evaluations are quite similar in methodological approach to many performance audits. National audit offices devote considerable resources to their quality assurance practices, and, for the most part, the quality of their performance audits is not questioned. It is argued that evaluations, and not only interview-based ones, could usefully adopt many of the quality assurance practices used by national audit offices to ensure the quality of their products.

Spring

Addressing Attribution Through Contribution Analysis: Using Performance Measures Sensibly

Authors:
Pages: 1-24

The changing culture of public administration involves accountability for results and outcomes, which raises the question of attribution: whether the outcomes observed can credibly be attributed to the program rather than to other influencing factors. This article suggests that performance measurement can address such attribution questions through contribution analysis. Contribution analysis has a major role to play in helping managers, researchers, and policymakers arrive at conclusions about the contribution their program has made to particular outcomes. The article describes the steps necessary to produce a credible contribution story.

Spring

Ongoing Program Performance Information Systems and Program Evaluation in the Government of Canada

Authors:
Pages: 29-37

In many cases, program performance information systems have not delivered the expected results, because not enough care was taken to determine what information was needed and who needed it, and to link these elements to the differing characteristics of periodic versus ongoing information. Ongoing information is well suited to the direct management and control of programs and to the accountability of managers; it does not lend itself well to use as an input to major policy or resourcing decisions. Periodic information, notably that provided by program evaluation, is suitable for policy decision making and for reporting on programs to Cabinet and Parliament; it is, however, not well suited to program management and control. To be used effectively, program performance information systems within government must draw a clear distinction between these uses, decide which type of information is appropriate, and be parsimonious regarding the nature and quantity of the information produced.

In Defense of Program Evaluation

Authors:
Pages: 97-102

Défense de l'évaluation de programme (French version of "In Defense of Program Evaluation")

Authors:
Pages: 103-108