Ralph Renger

Spring

Process Flow Mapping for Systems Improvement: Lessons Learned

Pages: 109-121

This article fills a gap in the evaluation literature by detailing how to conduct process flow mapping, a continuous quality improvement (CQI) method. The importance of process flow mapping and the steps required to complete the method are illustrated in the context of evaluating a cardiac care system. The article discusses several challenges and solutions in conducting process flow mapping, including (a) selecting appropriate subject matter experts, (b) mapping simultaneous processes, (c) terminating mapping, (d) integrating process flow maps, and (e) validating process flow maps. The article concludes by reinforcing the importance of systematically documenting new evaluation methods for dissemination and utility purposes.

Spring

The Reciprocal Relationship Between Implementation Theory and Program Theory in Assisting Program Design and Decision-Making

Pages: 27-41

The focus of this article is how Theory Driven Evaluation (TDE) and two of its central tenets—program theory and implementation theory—can be simultaneously used to inform and assist programmatic decision-making. The article argues there is a paucity of evaluation literature demonstrating how program theory can be beneficial to the design and interpretation of implementation theory. A case example is used to illustrate the importance of program theory in developing and interpreting implementation theory.

Spring

Contributing Factors to the Continued Blurring of Evaluation and Research: Strategies for Moving Forward

Pages: 104-117

Despite many studies devoted to the different purposes of evaluation and research, purpose-method incongruence persists. Experimental research designs continue to be inappropriately used to evaluate programs for which sufficient research evidence has accumulated. Using a case example, the article highlights several contributing factors to purpose-method incongruence, including the control of the federal-level evaluation agenda by researchers, confusion in terminology, and the credible evidence debate. Strategies for addressing these challenges are discussed. Keywords: barriers, credibility, discipline, evaluation, research.

Spring

Constructing and Verifying Program Theory Using Source Documentation

Pages: 51-67

Making the program theory explicit is an essential first step in Theory Driven Evaluation (TDE). Once the theory is explicit, the program logic can be established, making the necessary links between the program theory, activities, and outcomes. Despite its importance, evaluators often encounter situations where the program theory is not explicitly stated. Under such circumstances, evaluators require alternatives for generating a program theory with limited time and resources. Using source documentation (e.g., lesson plans, mission statements) to develop program theory is discussed in the evaluation literature as a viable alternative when time and resources do not permit a priori program theory development. Unfortunately, the evaluation literature lacks methodology illustrating how to translate source documentation into an explicitly stated program theory. The article describes the steps in using source documentation to develop and verify a program theory and illustrates the application of these steps. It concludes with a discussion of the feasibility and limitations of this methodology.

Spring

What an eight-year-old can teach us about logic modelling and mainstreaming

Pages: 195-204

This article presents a short case narrative, the purpose of which is to illustrate that complex evaluation methodologies such as logic modelling can be simplified to the point where a child can be guided through the process quickly. However, the case narrative also serves to highlight the potential consequences for program development and evaluation activities when the process is oversimplified. Like a double-edged sword, simplifying the process encourages more organizations to use a logic model to develop and evaluate programs, but, in hindsight, the simplicity may lead to program architectures that have little opportunity to demonstrate success or to evaluations that miss the mark.