Tarek Azzam

Fall

Using public databases to study relative program impact

Authors:
Pages:
57-68

Evaluators are under increasing pressure to answer the “compared to what” question when examining the impact of the programs they study. Program contexts and other constraints often make it impossible to study impact using some of our more rigorous methods, such as randomized controlled trials. Alternative methods for studying impact under severe contextual constraints should be explored and shared for use by others. This article presents a method for studying program impact that uses existing public datasets to derive comparison groups and assess relative impact.

Spring

Research on Evaluation: A Needs Assessment

Authors:
Pages:
39-64

This survey study attempts to understand the research questions that evaluators are most interested in answering. The findings suggest that there is a great deal of interest in research efforts that (a) explore factors that increase the impact of evaluation, (b) help develop new methodologies, (c) examine the influence of context on evaluations, and (d) help to address ethical dilemmas. Respondents also provided research questions for each topic, revealing a diverse body of concerns and issues. The study also indicated that research on evaluation is viewed as an important endeavor with strong support from the community.