2016
The editorial team of the Canadian Journal of Program Evaluation (CJPE) is pleased to announce that volume 31(1) is now published online. In keeping with the CES embargo policy, this issue is reserved for CES members until three more issues have been added. Reproduced below is the introduction to the spring 2016 issue.
CJPE 31(1) is jam-packed with practical learning interspersed with research on evaluation that both enhances our academic understanding of the field and advances its practice.
Carreau et al. lead the way with a study that fits squarely into the category of empirical research on evaluation. The team not only conducted the research but also used its results to identify weaknesses in current evaluation methods in the field of interprofessional education and collaborative practice. It is a strong example of well-done research on evaluation with direct practical applications.
The next three full-length articles depict innovations in evaluation approaches and methods. Evaluability assessment is all too often forgotten and rarely discussed. Soura et al.'s French-language manuscript brings it to life. Readers should note the utility of a well-done evaluability assessment and will perhaps be inspired to add it to their evaluation toolkit. Michelle Searle and Lyn Shulha open our eyes to arts-informed inquiry as a methodological tool for evaluation. I recall being mesmerized by Michelle's CES conference presentation on this approach and am delighted to see it published here so that others can learn, adopt, and adapt. The final research article, by Arsenault et al., takes us into the deep, dark, and, yes, scary environment of prisons to demonstrate how we can adapt our approaches to unusual and challenging contexts.
The four short articles in the Research and Practice Notes section demonstrate how varied the practice of evaluation is. I believe that Williamson et al.'s piece is a CJPE and CES "first": an article by student participants on how the Student Case Competition contributed to the development of specific evaluation competencies. Nutter et al. draw our attention to the challenges of conducting a "needs assessment" and to possible solutions, a task evaluators are often called upon to perform yet one that some might argue is not a typical evaluation pursuit. Henson argues that standardized evaluation questions can be modified to assess the quality of data generated by programs for evaluation, a case of evaluators helping others to help evaluators. And Renger returns to these pages with some colleagues to share how to conduct process flow mapping as part of continuous quality improvement.
Jam-packed, fun-filled, and, I think, with at least one thing for every reader!
Robert Schwartz
Editor-in-Chief