Evaluation and research: differences and similarities
This article discusses the similarities and dissimilarities between research and evaluation, which are two clearly differentiated disciplines despite their similarity in concepts, tools, and methods. The purpose of research is to enlarge the body of scientific knowledge; the purpose of evaluation is to provide useful feedback to program managers and entrepreneurs. In this article I examine the central characteristics of research and evaluation (validity, generalization, theory and hypotheses, relevance, and causality) and the different roles those characteristics play in each. I discuss the different functions of evaluation and research, and propose some criteria for fulfilling the different demands of evaluation and research. And I argue that the constant pressure to examine evaluations by the criteria of research prevents evaluation from becoming an independent discipline and delays the development of standards and criteria that are useful to evaluators.
The language of evaluation theory: insights gained from an empirical study of evaluation theory and practice
Broad concern for language issues in evaluation has been limited in comparison to other social science disciplines. In this article, some occasions of definitional or conceptual confusion with evaluation theory language are identified that emerged during a study conducted by Christie. We suggest that much of the language we use to describe evaluation practice is steeped in theoretical terminology, which may limit the utility of the language. We also argue that theoretical language ought to be used with great care, with attention to the subtleties and nuances of terms, for there may be unexpected confusion or ambiguity in the field about the terms we routinely use. A research agenda is offered, suggesting that it would be both informative and useful for us to learn more about the everyday "folk theories" of the field and the vernacular used to describe them.
Evaluation of the implementation and effects of an intensive support program offered to families to prevent placement in a substitute environment
This article focuses on the evaluation of a form of intervention with families where there is a risk that a child may be placed in a substitute environment. The program, implemented by the Centre jeunesse de Montréal, was reviewed by evaluators for close to five years. Intervention is characterized by: 1) rapid mobilization of various participants, 2) intensity, and 3) a limited time-frame. Caseworkers are available 24 hours a day, 7 days a week, generally intervene in the family environment, and provide varied therapeutic and technical support. The evaluation focuses on: 1) the characteristics of the families and services, 2) the impact of the program on placement rates and reporting following intervention, and 3) the impact of the program on family functioning and the children's well-being. The discussion suggests ways of alleviating problems in the evaluation of intervention programs in child services.
User-friendly evaluation in community-based projects
There is a growing trend in current evaluation to encourage the active participation of those being evaluated, particularly in community-based programs. This evaluation often focuses on processes as well as outcomes, documenting what has been effective over the life of the project. However, such evaluation is often done "on the run," without thinking through what the evaluation might mean from a user perspective, particularly for clients or project participants. Three examples of community-based evaluation projects are used to explore some issues in taking the idea of "user-friendly" evaluation seriously.
Theory-driven approach for facilitation of planning health promotion or other programs
This article revises and extends Chen's (1990) theory-driven framework to address program development. The theory-driven approach to program development helps evaluators work with stakeholders to strengthen program plans before implementation. Using this approach, evaluators are able to assist stakeholders in systematically developing a program theory for what they are proposing to do in a program. The theory-driven approach can ensure that crucial components and steps are systematically incorporated into the program plan. This article discusses in detail strategies and techniques for applying the theory-driven approach to program planning and development. It also provides two concrete examples of health promotion programs to illustrate such application.
Creating logic models using grounded theory: a case example demonstrating a unique approach to logic model development
This article describes, using a case example, the procedure of creating logic models using grounded theory methodology in the context of process evaluation. There currently exists a dearth of literature on the specifics of how logic models should be created. The authors reduce this gap by detailing an integrated methodology they utilized during their recent evaluation of the Youth Educating About Health (YEAH) program. A number of parallels between grounded theory and logic modelling are first discussed to demonstrate their potential for integration. Then the data collection and analysis procedures are explained with a focus on how the integration between grounded theory and logic modelling was conducted. The completed logic model is then presented and each category is explained in detail. The authors conclude by discussing the lessons they learned from utilizing this integrated methodology. These lessons include the specific benefits this methodology contributes to process evaluation, the added depth of information that grounded theory provides to logic modelling, and the cost- and time-effectiveness of this unique methodology.
Development of a classification framework for rehabilitation outcomes to document client progress
In the field of rehabilitation, documenting intervention results and the progress of clients is a major concern at all levels, be it clinical, administrative, or scientific. Using new information technologies, it is now possible to meet this challenge by developing an information system capable of taking into account the complexity of the variables involved. To identify those variables, 27 programs representative of the Quebec rehabilitation continuum took part in a study to identify progress indicators in abilities, life habits, and environmental factors. A harmonized framework for the classification of variables and progress indicators was validated with clients by the clinical teams. This framework is a groundbreaking tool and will be useful not only as a resource in individual client follow-up but also in program and policy evaluation support.