CES Online Learning Goes Live: Welcome to the CES e-Institute!

Kenneth Watson

Winner, Award for Contribution to Evaluation in Canada, 2002

Today the CES is honoured to recognise Dr. Kenneth Watson as the recipient of the 2002 Contribution to Evaluation in Canada Award.

Over the past twenty years of evaluation practice, Ken Watson has contributed to the development of the profession, both as a practicing evaluator and as a methodologist. He was a founding member of the Canadian Evaluation Society, served as treasurer for two annual meetings, and has been a member of the Editorial Board of the Canadian Journal of Program Evaluation for fifteen years.

He is the principal author of the Treasury Board Guide to Benefit-Cost Analysis and extensively revised and rewrote the Treasury Board Guide to Evaluation Methods. He has published about ten refereed articles in the Canadian Journal of Program Evaluation, in addition to several commentaries and professional notes. For five years he was the senior instructor in cost-benefit analysis for Training and Development Canada, designing, writing, and presenting the course materials.

Ken has also contributed to the methodology of value-for-money auditing, most recently writing a principles-and-practices statement on the Integrity of Performance Information for Public Works and Government Services Canada. He has written many guidance and discussion papers for the Office of the Auditor General of Canada on value-for-money and results-based audit topics.

He has also contributed to evaluation methods in the field of international development: he has served as a quality-control reviewer for World Bank guidelines, undertaken assignments for the World Bank's Operations Evaluation Division, and, most recently, written a paper on Evaluating Country Performance as a Factor in Concessionary Resource Allocation for the Caribbean Development Bank (2000) and redesigned that bank's project evaluation system (2001).

He holds a doctorate from Harvard University, with a specialization in program evaluation. He spent one year in the mid-1990s as a full professor, non-continuing, at the Australian National University, teaching decision analysis and evaluation.

Unfortunately Ken could not be with us today, but he has asked Michael Obrecht to accept the award for him.

May 2002

Acceptance by Ken Watson of the Canadian Evaluation Society's 2002 Contribution to Evaluation in Canada Award

I am honored and pleased to be the recipient of this award in 2002, and I regret that I cannot accept it in person. The reason is that I must be in the Philippines at the time of the CES Annual Meeting, as the principal evaluator for the Asian Development Bank's study of its Asian Development Fund. That fund is the Bank's poverty-alleviation vehicle and, in a way, it is important to evaluation for Canada, if not 'in' Canada, because we have a major stake in this multilateral organization. Michael Obrecht has kindly agreed to convey my acceptance and thanks to you.

In accepting this award, I would like to make some comments on the practice of evaluation in Canada, as I have observed it over the past twenty years, and my modest contribution to it.

In the 1960s there was an experiment in doing evaluations centrally in the Treasury Board Secretariat. It did not succeed, and the 1970s saw a pause to regroup. When I first became seriously involved in evaluation, around 1979, as the joint convener with the Treasury Board Secretariat of a conference on "The Evaluation of Social Programs", the Government of Canada was ready to try again, but in a different direction: evaluation would be sited within departments, serving the Deputy Minister as client, with the Treasury Board Secretariat in a support and advisory role. At about the same time, the Office of the Auditor General was in its first cycle of value-for-money audits and was actively developing new methods for this new type of audit, which overlapped somewhat with objectives-based program evaluation.

In the United States, the 1970s had been a fruitful period for evaluation studies. The government had funded about a dozen major evaluations using quasi-experimental designs, such as the studies of the housing allowance experiment and the Head Start experiment, and the US government and its agencies had, of course, also funded hundreds of smaller studies. Theoreticians of evaluation were publishing exciting work.

The Canadian strategy that emerged was built around a few key ideas: program logic models to get more rigor into thinking about causality and results, evaluation frameworks and assessments to force better program design and to plan ahead for issues-based evaluation, and departmental evaluation plans to ensure comprehensive coverage. I think the logic models I produced for the then Department of Industry, Trade and Commerce in 1979 and 1980 were among the first done in this new mode in Canada, and they influenced the approach that became standard in the manuals of the Comptroller General of Canada in the following years.

I was one of the founding members of the Canadian Evaluation Society. In 1985 I became a member of the editorial board of the Canadian Journal of Program Evaluation, which I have found to be a very interesting and pleasurable experience. In the first issue of the Journal, I conducted and reported an interview with Donald Campbell. Now, twenty years later, I am the convener of the economic methods group of the Campbell Collaboration, which is dedicated to syntheses of evaluations of social programs across countries. I've published a dozen or so articles in the Journal, and I have always found it valuable reading.

In the late 1980s and early 1990s, new public management theory emerged, particularly in New Zealand, Australia, and Great Britain, advocating much clearer principal-agent performance agreements in government. In response, the Treasury Board Secretariat offered departments 'increased ministerial responsibility' through memoranda of understanding that included heightened accountability to balance the heightened autonomy and flexibility. This instrument was little used.

In the late 1990s the Treasury Board Secretariat engaged me, along with Charles Mallory of Consulting and Audit Canada, in a successful effort to rewrite the Guide to Cost-Benefit Analysis, which in its previous form dated from 1969. Earlier attempts to revise the Guide, including major efforts circa 1975 and 1985, had failed, so I was quite pleased to have this one succeed, despite the difficulty of obtaining agreement from all major departments on some tricky issues, such as the discount rate, and despite the challenging and important extension of the guide to include techniques for handling uncertain data, generally termed 'risk analysis'.

Treasury Board Secretariat, at about the same time, hired me to revise and rewrite the Guide to Evaluation Methods. As well, I've worked on a number of methodology projects at the Office of the Auditor General, including a guide to results-based value-for-money audit. I've found it very worthwhile to work in both worlds, VFM audit and program evaluation.

All in all, it's been an interesting time for an evaluation methodologist, and I've also worked as an evaluation practitioner on dozens of studies, many of which I thought produced important insights. However, as this award prompts me to think back over this period, it seems to me that we have not moved very far from where we were in 1980. I don't think ministerial accountability has improved. We have a highly centralized form of national government, with few independent centers of evaluation. No central agency does evaluation, nor do Parliamentary committees, and we do not have the independent think tanks and foundations that the Americans do. Evaluation of departments, rather than evaluation by them, remains underdeveloped.

We've done a lot of studies, but have we accumulated much wisdom from them? Is there a better 'national memory' of what works and what doesn't? How many meta-evaluations have there been that synthesize a department's understanding of its business, based on years of evaluation work? Not many; I can name only one or two. In the first Trudeau administration, the Clerk of the Privy Council, Michael Pitfield, deliberately set out to create a general cadre of senior managers, where before they had developed within a specialty, such as fisheries. By the mid-1990s, DM and ADM musical chairs had reached such an extent that average tenure was less than a year and a half. Departmental memory has become short, and evaluation has largely missed the opportunity to fill the gap.

Do we have better tools for evaluation? Perhaps some. I thought the Guide to Cost-Benefit Analysis was a step forward. It's been translated into Mandarin; I hope the Chinese are using it more than we are. In my opinion, the past two decades have seen a lot of churning of evaluation terminology but not a lot of forward progress. Are the current plans and priorities papers really an improvement on Part III of the Estimates of twenty years ago? Maybe, a little.

Does it matter that we don't have better accountability, better evaluation tools, and a better 'public memory' — that the most popular evaluation seminar topic this year seems to be 'how to connect costs and results', apparently still a big puzzle? I think it does matter. Everyone has his or her own list of places where evaluation could have helped more than it did, but let me mention a few items. I don't think we had to lose the cod fishery. I don't think our HIV/AIDS infection rate had to be twice Australia's. I don't think our per capita incomes had to fall to two-thirds of those of our neighbors in the United States. I think evaluation is important because it should result in doing things smarter. To take a little liberty with the words of a song by the great blues singer Bessie Smith: 'I've been right and I've been wrong. Right is better.'

Thank you again for the award, and here's to better evaluation and being right more often.

Read by Michael Obrecht at the CES Awards Luncheon, Halifax, May 7, 2002