(Through a reflective process, the CES Board of Directors, supported by a consultation of members, crafted and adopted the following as the CES definition of evaluation. Cheryl Poth, Mary Kay Lamarche, Alvin Yapp, Erin Sulla, and Cairine Chisamore also published "Toward a Definition of Evaluation Within the Canadian Context: Who Knew This Would Be So Difficult?" in the Canadian Journal of Program Evaluation, vol. 29, no. 3.)

Evaluation is the systematic assessment of the design, implementation, or results of an initiative for the purposes of learning or decision-making.

Systematic: An evaluation should be as systematic and impartial as possible (UNEG, 2005). An evaluation is methodical, providing information that is credible, reliable, and useful, enabling lessons learned to be incorporated into the decision-making processes of users and funders (OECD, 2010). Evaluation is based on empirical evidence and typically on social research methods, and thus on the process of collecting and synthesizing evidence (Rossi, Lipsey, and Freeman, 2004). Conclusions made in evaluations encompass both an empirical aspect and a normative aspect (Fournier, 2005). It is this value feature that distinguishes evaluation from other types of enquiry, such as basic science research, clinical epidemiology, investigative journalism, or public polling.

Assessment: Evaluative assessment considers value, merit, worth, significance, or quality (Scriven, 1991). It may aim to identify what works, for whom, in what respects, to what extent, in what contexts, and how (Pawson and Tilley, 2004). It may examine expected and achieved accomplishments, the results chain, processes, contextual factors, and causality in order to understand achievements or the lack thereof (UNEG, 2005). Evaluation may focus on a broad range of topics, including relevance, accessibility, comprehensiveness, integration, fulfillment of objectives, effectiveness, impact, cost, efficiency, and sustainability (Patton, 1997; OECD, 2010). The evaluation process normally involves identifying relevant standards, investigating performance against those standards, and integrating or synthesizing the results to reach an overall evaluation (Scriven, 1991; OECD, 2010).

Initiatives: Evaluation can focus on any kind of initiative, such as programs, projects, sub-programs, sub-projects, and/or their components or elements (Yarbrough et al., 2011; Scriven, 2003).

Purposes: Evaluation can be conducted for the purposes of decision-making, judgements, conclusions, findings, new knowledge, organizational development, and capacity building in response to the needs of identified stakeholders. It can lead to improvement, decisions about future programming, and/or accountability, ultimately informing social action, ameliorating social problems, and contributing to organizational or social value (Yarbrough et al., 2011; Patton, 1997).