
J. Bradley Cousins

Winner, Award for Contribution to Evaluation in Canada, 1999

Conversation with Linda Lee

Linda: Thinking about the development of an evaluation culture in organizations, can evaluators have an impact in creating this culture — or is it more the role of program administrators? In other words, should evaluators see creating this culture in organizations as part of their primary role?

Brad: Absolutely, from my point of view, but there are so many different perspectives. There are evaluators who would absolutely denigrate that position because their belief is that evaluation needs to remain detached, that evaluation and development need to be separated. Evaluators who see evaluation as a summative, stock-taking judgment of merit and worth would be less inclined to see that perspective as a valid one. But my orientation has always been that evaluation is a way to improve programs and organizations and, because of that, evaluation is necessarily intimately linked with development. So, of course you want evaluators working in concert with organizational administrators, decision makers, and other people who are key to program implementation, because it's when they work together that you can get into activities that really generate the kinds of outcomes you need to see. From my point of view, I would say yes, it's a very important role for evaluators to play, to try to change organizational culture. And the only way it's going to happen is through repeated trials where people in organizations can finally see the pay-offs that are available. But I'm sure not all folks would agree with that.

Linda: How does that fit with the work that you've done in participatory evaluation?

Brad: A lot of my work in participatory evaluation has been in the schools, as you know. In schools, especially in the programs I have been working with, it's not been so much a case of "should this program be continued or discontinued" or "should funding go to this program or that program", the kinds of summative evaluation questions where you need an external, independent, non-partisan judgment. The activities I've been involved with have been more along the lines of school improvement and the formative evaluation of programs. Basically, how do you modify programs in order to improve them, enhance implementation, and, ultimately, improve outcomes? So, in doing that with my colleagues in the schools, it's been a good opportunity to develop those linkages and start to work on the cultural kinds of changes that are needed. Again, it's not something that will happen overnight. It's a sustained relationship that is important. It's going to happen over a period of time, and people have to be able to see the benefits.

Linda: Talking about education, the field in which you've been working for many years, sometimes educators and policy makers (not even to mention politicians and the media) are unaware of the results of research and evaluation in education. Do you think there is an urgency to become more proactive in disseminating, or making public, the results of some of our research in education? And as a follow-up, do you think that could affect the quality of educational policy in this country?

Brad: Absolutely. I think I can speak from the Ontario perspective, where many policy decisions are made on grounds in which the academic perspective is, in my view at least, heavily under-represented. I recognize that it's a political milieu, but I firmly believe that the policy arena needs to embrace the academic knowledge base and research-based knowledge in its deliberations. You can identify a few different policy initiatives that have developed, try to link them to some of the extant research findings and the knowledge that has been created, and find yourself very hard pressed to do so. On the other hand, we have strong bodies of knowledge and evidence suggesting certain policy directions and initiatives in these areas that end up being derailed, or they just plain evaporate. And you have to wonder why. It's a highly political milieu, and I believe it's important that researchers and evaluators continue to try to have a voice in the policy arena through whatever means are possible. I think that's a priority on the agenda of most serious educational researchers, as well as people doing evaluation in the area.

Linda: This might not be a fair question, but do you see that there is a role — or more of a role — for the Canadian Evaluation Society to play in that process; that is, in getting policy makers to pay more attention to the results of good research and evaluation?

Brad: Yes. I think that any time you have a national society or organization rallied around common principles, as is CES, the chances of gaining recognition — or being heard — are enhanced quite substantially. But one concern is that an organization such as CES might be perceived as taking on a certain political orientation in trying to lobby policy makers because, after all, we're talking about values in decisions. But I don't know if that necessarily has to be the case. I think there is a case to be made for just heightening people's awareness of the evidence that's been accumulated and the knowledge that's been created. One can do that, I think, in a carefully planned way such that you're not, for example, biasing one policy stream over another. So it's a bit delicate, but nonetheless, I think there is a role for associations or societies to play, just by virtue of their existing recognition and their mandate.

Linda: One thing we do see is governments and school districts using more performance indicators. In education we have a range of those that may, for example, include provincial examination scores, with the rationale being that using these performance indicators will create better teaching and learning conditions. How do you see that? Is this whole move to performance indicators positive in evaluation? Are we using the right kinds of indicators?

Brad: I think it's absolutely crucial that we endeavour to measure the intended outcomes of programs in reliable and valid ways; it's essential for evaluators to do that. Regarding the trend towards embracing performance indicators and outcome monitoring, to me there's lots of potential there, but there's also lots of danger. The trouble is this: maybe we can debate and deliberate about what the appropriate outcome indicators should be, and maybe we can even arrive at a set of indicators that pretty much satisfies all the different stakeholder perspectives. But once we implement a system of monitoring performance on the basis of those indicators, if we observe changes in the indicators without the systematic collection of important implementation and process data, it's basically anybody's guess why the indicators have changed in the directions we've observed. To me that's highly problematic, and it just adds fuel to the political debates. People will use trends in outcomes in any way, shape, or form, but those trends really don't tell us a lot about what we need to change in order to improve the system. So, I see a valid use for performance indicators — most certainly we are accountable to the public, and the public has every right to know how well we are doing, and that's fine — but I think, unless we go well beyond that and enter into the realm of explaining the pattern of variation we are observing, we are kind of missing the boat. We are just contributing to an ongoing political dialogue and rhetoric. So, for me, I would like to see a lot more emphasis put on understanding and measuring program implementation and process components, by way of trying to explain variability in the performance indicators and outcome measures we have developed.

Linda: What do you see looking down the road — what are you hoping we'll see happen in the realm of educational evaluation in this country? What would you like to see so that in ten years we could say, "wow, we've really made some strides in how we do evaluation"? What would that look like to you?

Brad: I think what it would look like to me is that policy makers at the local level — and by that I mean school boards or school districts — are recognizing the value of systematically collected information and striving to rationalize the decision-making process by attending to such information. For that to happen, they have to be willing to invest in research and evaluation functions or structures within their organizations. I know that, in Ontario at least, many of the larger boards still maintain such offices, but it's becoming increasingly difficult to justify those offices in times of heavy retrenchment and financial cut-backs. Being accountable to the public means taking the steps that are necessary to improve educational programs, curriculum, and the system. In order to really make wise decisions about which steps to take, we need to have systematically collected information. So, if we are giving way to political argumentation and the rhetoric of retrenchment, and de-emphasizing the role of these units or the role of evaluation consultants in providing information, I just think that's highly problematic. I suppose I would like to see an increasing valuing of systematically collected information.

Also, in education I would like to see a greater valuing of information that is generated by educational practitioners; by that I mean the whole domain of action research, embracing the notion that people in the field of practice should have, and can have, an extremely valuable role to play in the creation and production of knowledge. We need to capitalize on that knowledge. I think we're starting to do that, and there's a lot of discussion — and some people have taken action research quite seriously — but I certainly think we have a long way to go before it becomes an accepted part of the organizational culture.

Linda: Well, that was a nice way to bring us back to where we began with this. Is there anything else, any final comments?

Brad: I guess one area that has always intrigued me is the business of the professionalization of evaluation; the question of "will evaluation become a profession?" I find that to be a fascinating question, much the way I find the question of "are teachers professionals?" fascinating. There's been quite a lot of interest in the answers to such a question, and I don't think it's a very easy one to answer. It speaks to the question of whether evaluators are — and should be — operating independently of program implementors. Should the function of evaluation be strictly to judge the merit or worth of programs without interacting or connecting with the developmental function of those programs? If we think about whether evaluation should be a profession, we get different answers depending on which side of the debate we fall on. So I look forward to some ongoing dialogue there. At the root of it, I think evaluators always need to be cognizant of exactly what evaluation means to them. Is it just detached, relatively objective, summatively oriented judgment of merit and worth, or does it have a role to play in integrating with program implementation and informing development? I think those are issues that people working in the field will continue to grapple with.

I certainly subscribe to the development of first-class skills and the use of available tools and state-of-the-art methodologies. I think there's a strong role for training evaluators and people who work in the field, and that will continue to be the case. But apart from the training and professional development function, a profession needs to create its own esoteric knowledge base. Evaluators have done that in spades, but primarily at the level of theory. What is missing is an emphasis on empirical research on evaluation. If evaluators want to move seriously toward professionalization, I would look for a much stronger thrust on bridging the gap between theory and practice through empirical research. To me, there are some fundamental issues that need to be addressed which underpin the whole question of whether evaluation should be a profession.

Linda: I'm glad you raised that because it is a topic, as you know, of much debate right now. It takes us into the issue of the certification of evaluators. CES has had position papers put together on that issue, and it creates a lot of controversy when you talk about taking professionalization to the next step: whether we should be certifying people as evaluators.

Brad: That's exactly right. You know, for a lot of evaluators, evaluation is an important part of what they do in their roles, but it may not be the only thing they do. It becomes a bit problematic from that point of view as well.

Linda: Thank you very much, Brad, for taking the time to talk to me. Rather than putting a longer bio of you in the newsletter as one of the CES award winners, it will be much more interesting for people to read some of your thoughts, so I appreciate that you took the time to do this for CES.

Brad: My pleasure!