Many government program evaluations require the capture of information that is hard to measure, sensitive in nature, and difficult for respondents to articulate. This paper suggests research designs and methodologies to help overcome such problems in evaluation research. Our discussion is illustrated by three evaluation case studies. Our suggestions for research design focus on increasing reliability through intersubjective certifiability and the use of triangulated respondent groups, as well as varying the composition of the research team at different stages of the research. Our methodological suggestions concern multi-faceted research processes, run in parallel and in sequence, to uncover topics on which findings vary and to surface information ‘hidden' in other approaches. Methods for improving the recruitment and retention of respondents are also discussed. We conclude by critically evaluating the outcomes of applying these new approaches and discussing the implications of the different or new information gained by adopting them.