
Volume 16, Fall 2001

Do distinct SERVQUAL dimensions emerge from mystery shopping data: a test of convergent validity

Authors:
Pages: 41-54

Service quality is commonly thought to comprise five generic dimensions: responsiveness, assurance, tangibles, empathy and reliability. These dimensions form the basis for service measurement tools such as SERVQUAL. Research in this area, using tools such as SERVQUAL, has predominantly focused on customer perceptions of quality. However, another approach used by many organisations is to send trained raters into the service environment, posing as customers, to evaluate service levels. This approach is often called "mystery shopping" and is widely used in both private and public sector organisations. This study examines whether the accepted service quality dimensions derived from customer perception studies are reflected in service quality evaluations based on mystery shopping. It finds that the dimensions that emerge from mystery shopping data resemble the SERVQUAL dimensions. Furthermore, a replication found that those dimensions are reasonably stable over time. The findings suggest that data from mystery shopping surveys can exhibit convergent validity.

Accountability, rationality and new structures of governance: making room for political rationality

Authors:
Pages: 55-70

This article argues that the new institutional arrangements and results- or performance-based accountability run the risk of overlooking several political realities and considerations. The new arrangements and processes represent a preference for economic rationality over bureaucratic rationality. They give little consideration to political rationality, which is concerned with collective values and power relationships. In managerial terms, it is not clear that the new arrangements will be any more durable than their numerous predecessors. The accent on performance targets has led to a decline in programme evaluation. It tends to marginalize a number of ethical concerns, reduces the role and scope of citizens, and ignores the conflicts that frequently arise when clients' interests are at odds. Performance management and results-based accountability represent a constitutional revolution, but there is little sign that members of parliament are interested in or able to make good use of the new management reports. Accountability, at any rate, is a word that should probably be reserved for the relationship between a subordinate agency and the authority from which it obtains its mandate and its resources. Accountability also implies a theory of motivation: it is worth considering how strongly executive agency leaders are motivated to conform to what their political masters want from them. Finally, a post-modern look at agency suggests that targets cannot replace attention to public sector ethics.

Insurance claimants working while on claim

Authors:
Pages: 71-86

Subject to certain restrictions, the Canadian unemployment insurance (UI) system permits benefit recipients to work and thereby supplement their UI benefits. Although this provision governing the treatment of earnings during the benefit period under the UI (now EI) Act has been in operation for many years, little is known about the extent to which it is utilised or about its impact on claimants' benefit periods. The main objective of the provision is to encourage UI claimants to maintain some linkage with the job market during the benefit period so that their re-employment is facilitated. The empirical analysis presented here confirms that the UI benefit period of claimants who work while on claim is substantially shorter than that of claimants who do not.

Softly, softly catch the monkey: innovative approaches to measure socially sensitive and complex issues in evaluation research

Authors:
Pages: 87-100

Many government program evaluations require information to be captured that is hard to measure, sensitive in nature, and difficult for the respondent to articulate. This paper suggests research designs and methodologies to assist in overcoming such problems in evaluation research. Our discussion is illustrated by three evaluation case studies. Suggestions for research design focus on increasing reliability through intersubjective certifiability and the use of triangulated respondent groups, as well as on varying the composition of the research team at different stages of the research. Methodological suggestions are for multi-faceted research processes, run in parallel and in sequence, to uncover topics on which findings vary and to find information 'hidden' in other approaches. Methods for improving recruitment and retention of respondents are also discussed. We conclude by critically evaluating the outcomes of applying these new approaches and discuss the implications of gaining different or new information from having adopted them.

Modeling success: articulating program impact theory

Authors:
Pages: 101-112

This study was undertaken to articulate program impact theory for the Comprehensive Home Option of Integrated Care for the Elderly (CHOICE) program. The study showed that CHOICE combines elements found in a traditional Health Maintenance Organization with elements and process components drawn from primary care and case management to deliver a broad range of home support, day program, and social and health services to its participants and their informal caregivers. In doing so, the program provides participants with a level of comprehensive, coordinated care not possible within the traditional community-based health and social service delivery system.

Do evaluator and program practitioner perspectives converge in collaborative evaluation?

Authors:
Pages: 113-133

Interest in collaborative and participatory forms of evaluation -- evaluation that involves evaluators working directly with non-evaluator program practitioners or stakeholders -- has increased substantially in recent years. Yet research-based knowledge about such approaches remains limited. Moreover, empirical studies have focused almost exclusively on the perspectives of evaluators or, to a lesser extent, non-evaluator stakeholders associated with the program. The present study directly compares the convergence of evaluator and non-evaluator perspectives on collaborative evaluation. Sixty-seven pairs of evaluators and program practitioners, each pair having worked on a common collaborative evaluation project, completed a questionnaire about that evaluation and their opinions concerning collaborative evaluation. Results showed that, relative to their evaluator counterparts, program practitioners reported being more involved in technical evaluation activities, were more conservative in their views about evaluation consequences, and tended to feel more positively about the affective experience of collaborative evaluation. They agreed, however, about evaluator involvement and the range of stakeholder groups participating in the program. In general, program practitioner and evaluator views about collaborative evaluation converged, although some differences were noted regarding who should participate and the power and potential of collaborative evaluation; typically, program practitioners were more conservative in their opinions. The results are discussed in terms of their support for the integration of evaluation into program planning and development.

Using the right tools to answer the right questions: the importance of evaluative research techniques for health services evaluation research in the 21st century

Authors:
Pages: 1-26

Because of marked changes in health care expenditures in recent years, there has been a call for greater accountability in health education, policy, services and reform. Much of the evaluative research in health arenas has been conducted under the rubric of health services research. Health services research methods have not evolved from evaluation research methodology, but rather have adopted methods that often do not lend themselves to deriving appropriate causal inferences in multi-causal environments. Given the complexity of the interrelationships among the political, social, psychological and economic factors that can affect health services, more sophisticated evaluative techniques are required. This paper describes the epistemology of evaluative research and, through a series of examples from the health services literature, demonstrates the strengths of theory-driven approaches and multivariate statistical techniques over traditional black-box methods as ways to increase the validity of causal inference.

Methodological challenges in evaluating mobile crisis psychiatric programs

Authors:
Pages: 27-40

Mobile crisis psychiatric programs (MCPPs) are innovative community interventions that have gained acceptance in health, social and political environments. In Canada they are becoming widely implemented, and the need to evaluate them is pressing. Unfortunately, there has been very little formal evaluation of these programs, and virtually no data exist on their effectiveness. Part of the reason for this deficiency may be the methodological challenges inherent in evaluating such programs. In this article, we discuss these difficulties by examining the integration of these programs into the service delivery network and offer suggestions for future evaluative research.