This study examines the usefulness of the Montreal Service Concept framework for measuring service quality when used as a predefined set of codes in a content analysis of patients' responses. The study also quantifies the interrater agreement of the coded data. Two raters independently reviewed each response from a mail survey of ambulatory patients about the quality of care and recorded whether or not the patient expressed each concern. Interrater agreement was measured in three ways: crude percent agreement, Cohen's kappa, and the generalizability coefficient from generalizability theory. All code-specific interrater agreement levels exceeded 96%. All kappa values were above 0.80, except for four codes associated with rarely observed characteristics. The generalizability coefficient was 0.93. All indices consistently indicated substantial agreement. We showed empirically that the content categories of the Montreal Service Concept were exhaustive and reliable within a well-defined content-analysis procedure.
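For readers unfamiliar with the first two agreement indices named above, they can be computed directly from two raters' binary codings. The sketch below is an illustration of the standard formulas only (the function names and example data are hypothetical, not taken from the study):

```python
def percent_agreement(rater_a, rater_b):
    # Crude percent agreement: fraction of items on which
    # the two raters assigned the same code.
    return sum(x == y for x, y in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa for two raters and a binary (0/1) code:
    # chance-corrected agreement, kappa = (po - pe) / (1 - pe).
    n = len(rater_a)
    po = percent_agreement(rater_a, rater_b)
    # Expected chance agreement from each rater's marginal proportions.
    pa1, pb1 = sum(rater_a) / n, sum(rater_b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (po - pe) / (1 - pe)

# Hypothetical codings of eight responses by two raters:
a = [1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 0, 1, 0]
print(percent_agreement(a, b))  # 0.75
print(cohens_kappa(a, b))
```

Because kappa discounts agreement expected by chance, it can be low for rarely observed characteristics even when raw agreement is high, which is consistent with the four exceptional codes reported above.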