Original Research

Developing anchored measures of patient satisfaction with pharmaceutical care delivery: Experiences versus expectations

Pages 113-122 | Published online: 07 Apr 2009

Abstract

Background:

A pilot study was undertaken to evaluate patients’ satisfaction with pharmaceutical care (PC) activities delivered at community pharmacies. The objectives of the study were to: (1) operationalize patient satisfaction in terms of the advanced pharmacy practice experience (APPE) PC activities, (2) conduct psychometric analysis of the satisfaction instrument, and (3) assess the sensitivity of the instrument to detect any differences that may exist between what patients expect to receive versus what is actually experienced.

Methods:

Pharmacies affiliated with two national chains were recruited to participate. Asthma patients at each of these sites were invited to complete a survey designed to assess their expectations of and their experiences with PC at the respective site.

Results:

One hundred forty-seven surveys were completed by patients in 19 community pharmacies. Psychometric analysis confirmed the survey’s internal reliability and sensitivity to be very high. Data analysis suggested that most patients expected more from PC services than they actually experienced.

Conclusion:

Unlike other PC satisfaction surveys, this instrument allows patient experiences to be anchored against their expectations. The results suggest that most patients would be willing to engage in PC activities outlined in the survey.

Introduction

In the 1990s, academic and professional pharmacy organizations across North America adopted pharmaceutical care (PC) as the new professional mandate.Citation1Citation5 PC is defined as a philosophy of practice where “the pharmacist cooperates with patients and other professionals in designing, implementing, and monitoring therapeutic plans that will produce specific therapeutic outcomes.”Citation6 As at other pharmacy schools across Canada and the United States, the University of British Columbia’s Faculty of Pharmaceutical Sciences Structured Practice Education Program (SPEP) faculty refined its curricula to incorporate PC outcomes and activities within its community-based advanced pharmacy practice experience (APPE). The specific competency-based skills and proposed learning activities for the community APPE are summarized in Table 1. With the shift from dispensing to PC-related activities, the APPE community pharmacy managers were interested in determining whether patients would welcome such interventions. Based on the premise that patients, rather than providers or pharmacy schools, can best determine the value of PC services, the SPEP faculty undertook a project to determine how patients at respective APPE sites would respond to the PC activities being proposed.Citation7

Table 1 Community-based advanced pharmacy practice experience activities

Over the past 15 years, satisfaction with pharmacy service has been conceptualized around a variety of frameworks; Kucukarslan and colleagues have reviewed several of these.Citation8 For example, Gourley favors satisfaction keyed to an ECHO (economical, clinical, and humanistic outcomes) model, Kucukarslan and colleagues have contrasted prior experiences with ideal referents and market expectations, Oliver has anchored satisfaction to “pleasurability,” and MacKeigan and Larson have adapted multifactorial medical satisfaction measures to pharmacy applications.Citation8Citation11 Additionally, both pharmacy and nonpharmacy literature has proposed that satisfaction is a complex phenomenon with multiple determinants such as preferences, experiences, and social interaction; and attempting to capture satisfaction as a single concept risks making several assumptions about what patients actually mean when they say they are “satisfied”, which has the potential to misrepresent some of their responses.Citation12Citation14

Accordingly, to fully appreciate what patients in general considered to be important features of PC services, as well as to obtain a baseline for gauging how well community pharmacy APPE sites were meeting those expectations, we decided to assess both patients’ “expectations” of PC and patients’ “perceptions of what had been received.” Such a strategy would also lead to an anchored scale contrasting “internal” satisfaction experiences with the “external” expectations of in-store practices, an approach that is endorsed by the literature on satisfaction.

A literature search was conducted to help model the patient satisfaction evaluation at APPE sites, but the authors found limited work in this area. While three studies were identified that examined patient satisfaction with services delivered by pharmacy students during their experiential training in outpatient clinics, none were sufficiently comprehensive in evaluating the array of activities that are involved in providing PC.Citation15Citation17 Two of the three studies primarily looked at general aspects of satisfaction, such as patients’ comfort level when interacting with students and the perceived usefulness of this time, whereas the third incorporated only a few items reflective of PC activities. Expansion of the literature search to identify any PC satisfaction survey instrument that could be adapted for this project found two validated surveys for use in community pharmacy settings.Citation9,Citation11 But again, neither encompassed all the PC activities our University of British Columbia (UBC) APPE students commonly engaged in, and both evaluated patient satisfaction as though it were a single entity.

Consequently, a new instrument was developed to assess patients’ expectations of various APPE PC activities and compare these expectations with actual experiences at a given site, thus leading to an anchored scale tying “internal” satisfaction expectations to the “external” realities of in-store practices.

The study objectives were to: (1) operationalize patient satisfaction in terms of the APPE PC activities, (2) conduct psychometric analysis of the instrument, and (3) assess the sensitivity of the instrument to detect any differences that may exist between what patients expect to receive versus what is actually experienced.

Methods

Design

This was a cross-sectional study designed to validate a newly developed patient satisfaction survey using a selected number of community pharmacies from two regional chains, with continuing histories as placement sites for UBC. The study was conducted between September 2002 and May 2003 in Vancouver, British Columbia, Canada. Ethical approval was received from the Office of Research Services at University of British Columbia.

Participants

A list was developed of all community pharmacies representing two regional chains that had a previous history of preceptoring UBC APPE students and whose store managers had expressed an interest in participating in the new community-based APPE program. The pharmacies were clustered into either rural or urban geographical regions, and the first 10 pharmacies from each of the two clusters to agree to participate in the study were recruited. As with other community APPE sites, the pharmacies in this study agreed to serve as an APPE site for a total of eight weeks. While some pharmacies took students for the full eight weeks, others split their commitment over two four-week periods, self-selected between January and April of the winter session. All pharmacies received the same standard remuneration of Canadian $50 per four-week experience. To preserve pharmacy and student anonymity, all pharmacy and student identifiers were removed prior to collating and analyzing the data for this project.

Intervention

The community APPE syllabus was designed to provide students with the opportunity to hone PC-related competencies by engaging in both direct and indirect patient care activities as outlined in Table 1. All students were held to the same expectations, and were required to complete each of the direct patient care activities using the PC framework defined by Strand and Hepler.Citation6 Briefly, this framework included: developing relationships with patients to facilitate discussions about their drug-related needs; engaging in acquisition and assessment of the patient’s drug, disease, and other relevant information to identify actual or potential drug-related concerns; engaging in informed shared decision making with patients and other health professionals; developing pharmacy care plans to prevent and resolve concerns that are identified; and providing continuity of care by monitoring progress through follow-up care.

Instrument development

The survey items were generated by examining the various tasks and activities completed by APPE students during their provision of PC, reviewing several published and unpublished PC patient satisfaction surveys, and consulting with various clinical faculty members with PC experience in community and institutional settings. A 14-item instrument was developed representing patient satisfaction in six PC domains: developing a relationship, assessing patients, clarifying the role of medications, developing a pharmacy care plan, working collaboratively with other health care providers, and providing follow-up to patients.Citation6 A fifteenth item was used as an introductory item: “I expect pharmacy staff to be pleasant and courteous to me.” The survey also asked patients whether they had engaged in consultation with an in-store pharmacist or pharmacy student, which medical conditions or medications were discussed, and whether they had observed that the pharmacy services at the store had changed over the past year, and finally inquired about demographic variables including gender, age, education level, and household income.

Items were rendered into a four-page survey using a single-sheet 11 × 17 fold-over format: a front page of welcome, introduction, and instructions, and a final page of background information about experience with the pharmacy and personal demographics. The inside two pages were the crux of the study, representing the two scales, expectation and experience. The inside left page printed the 15 items preceded by a header directing respondents to report baseline assessments, “Here is what I would expect in any pharmacy,” while the inside right page repeated the same 15 items preceded by the ‘situational’ instruction, “Here is what I have experienced recently in this store.” Thus expectations about PC-related baseline satisfaction in any pharmacy could be contrasted with situational experiences in this pharmacy, item by item or collectively as a scale total. Patients responded to both inside pages on a five-point Likert letter scale of disagreement/agreement [Strongly disagree (SD), Disagree (D), Neutral (N), Agree (A), Strongly agree (SA)] in order to emphasize conceptual distinctions between different agreement levels. A copy of the survey may be obtained from the corresponding author.

Data collection

Project staff deposited bundles of blank surveys in participating pharmacies together with secure survey return boxes labeled to assure patients that their responses would be delivered directly to the research project office without being read by pharmacy personnel. Over a four-month period (January to April), students were instructed to hand surveys out to all patients who were able to speak and read English and who required a refill or a new prescription for asthma, and to ask them to deposit the completed surveys in the survey return box. Subsequent to the survey phase, telephone follow-up calls were made to selected subsets of respondents who had volunteered their names and contact information to test for survey appropriateness, ease of understanding, clarity of language, and time required to complete. A research assistant entered all the data into an Excel (Microsoft Corp., Redmond, WA) spreadsheet.

Statistical analysis

Statistical analysis was carried out by a hired statistician using SPSS 15.0 for Windows (SPSS Inc., Chicago, IL). Descriptive statistics on the sample characteristics and questionnaire items were computed (frequencies, means, and standard deviations). Content validity was ensured by a panel of pharmacy academics/practitioners involved with PC across Canada and the United States. Face validity was established prior to and following the survey phase by obtaining feedback from UBC staff, UBC students, and a select subset of participants who had volunteered their names and contact information, asking about survey appropriateness, ease of understanding, clarity of language, and time required to complete the survey. Coefficient alpha reliabilities were used to confirm that satisfaction was appropriately operationalized, and face validity checks ensured item readability and overall instrument comprehensibility. A Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy was calculated to determine the extent to which the variables belonged together and were appropriate for factor analysis; KMOs > 0.9 are rated as “marvelous” for factor analysis.Citation18 Conventional indicators of scale utility such as reliability, validity, and sensitivity were used to evaluate the expectation scale and the experience scale. Reliability was assessed using Cronbach’s alpha, for which 0.70 was chosen as the minimum acceptable value.Citation19 To establish the factor structure of the instrument, exploratory factor analysis using principal components was carried out. Varimax rotations were used to simplify the structure and improve interpretability of the factors; varimax rotation is the most widely used orthogonal rotation method and the preferred method of many writers.Citation20 Items with factor loadings greater than or equal to 0.40 were considered significant.Citation21 For each scale, an overall score was computed as the mean of all 15 items, and factor scores were computed as the means of those items identified with a particular factor. Since no external measures of patient satisfaction were collected, convergent validity was assessed using corrected Pearson correlations between items within a factor subscale and alpha reliabilities for the factor subscale total; a correlation of 0.50 was considered moderate and 0.70 excellent. Convergent validity examines how closely the new scale is related to other measures of the same construct to which it should be related; in this case, the other measures were the individual items whose construct validity had been established as part of the study’s examination of its substantive validity aspects.Citation22 Sensitivity was evaluated by comparing expectation with experience for individual items, the factor scores, and the overall score using paired t-tests. Overall expectation and experience scores were compared with respect to demographic variables using one-way analysis of variance.
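To illustrate the reliability computation described above, Cronbach’s alpha can be obtained in a few lines of NumPy; the scores below are invented for demonstration and are not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents x 3 items on a 1-5 Likert scale
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 2, 3],
])
print(round(cronbach_alpha(scores), 2))  # → 0.91
```

For the study’s 15-item scales, `items` would be the 147 × 15 response matrix for either the expectation or the experience page.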

Results

A total of 147 patient satisfaction surveys were returned from 19 stores. One community pharmacy dropped out of the APPE experience for a period of one year due to staffing shortages. The respondents’ profile is detailed in Table 2. Potential baseline scale biases were tested, but none were found for gender, age, education, income, willingness to engage in a follow-up phone call, or participating chain.

Table 2 Demographic characteristics of total sample

Instrument validation

Face and content validity

Ten pharmacy academics/practitioners were invited to review the proposed survey items for appropriateness, repetitiveness, and clarity. The guiding question was always “Is this survey inclusive of the PC activities that your students are involved in during their APPE, is each question clearly articulated, and are there any items that are repetitive?” Each faculty member reviewed the items independently and suggested changes, additions, and deletions. The revisions were made and circulated for another cycle of review and comments. After several iterations of item generation, pre-testing, and refinement, beginning with 24 provisional items, 14 items were included in the final instrument. Prior to using the survey, five UBC staff and five pharmacy students were asked to take the survey and to provide comments on its readability, clarity, and length. Additionally, the administrative aspects of the survey tool were assessed in a subsequent telephone follow-up with 13 survey respondents from both pharmacy chains, out of the 91 who had agreed to be contacted by telephone. These respondents reported averaging about 15 minutes to complete their in-store questionnaires. Nearly all reported that this was a “reasonable amount of time.” All 13 found the survey “easy to read” and “easy to understand,” and had no difficulties understanding the questions.

Factor analysis

Exploratory factor analysis revealed that patients have a much less complex conceptual grasp of “baseline satisfaction” than the six-domain PC view held by professional pharmacists (Table 3).Citation6 Based on the 15 expectation items, the three factors identified by patients, which together explained 60% of the common factor variance, were: (1) monitor outcomes, by asking them about their medical conditions and medications and by developing a care plan to ensure their conditions are well controlled (23% of rotated factor variance); (2) provide information and education, by using a variety of information sources (verbal, print, or video) to educate about different medication options, explaining how medications are supposed to help and work, and working with them and their physician to ensure correct drug therapy (20% of rotated factor variance); and (3) personalized, collaborative, and preventive care, by asking them if they have any concerns about their medications, explaining what they should do in the event of side effects, involving them in decision-making about medications, being courteous, and ensuring privacy (17% of rotated factor variance). All 15 items contributed substantially to at least one of these patients’ conceptual domains. As well, the KMO measure of sampling adequacy was extremely high at 0.88.

Table 3 Factor loadings of the 15 “expectation” items, sorted by factor of highest loading

As a check on the stability of the factor structure, the analysis was repeated twice, and this process provided strong confirmation of the original factor structure. The first check entailed using the experience items, for which the KMO measure was equally high at 0.92. Of the 15 items, only two loaded on factors different from those of the expectation items: here, the monitor outcomes factor included item #11, “The pharmacist suggests a range of medical information sources,” and the information and education factor included item #13, “The pharmacist explains what to do if medical side effects occur.” The next check involved treating the responses to the experience items as replicates independent of the expectation items, hence doubling the effective sample size. Once again the factor analysis identified virtually the same structure as the first analysis of the expectation items, except for one item loading on a different factor: as with the experience items, item #13, “The pharmacist explains what to do if medical side effects occur,” loaded on the information and education factor. The strong similarity of the two follow-up factor analysis results to the primary factor analysis based on expectation supports the initial structure. Further, since the tool comprises items developed to address expectation, and since confounding factors such as the type and quality of social interaction with pharmacy staff can influence a patient’s interpretation of service received, we have chosen to use the factor structure based on the expectation items.
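The varimax rotation applied in these analyses is a standard orthogonal method; a minimal NumPy sketch of Kaiser’s iterative SVD-based algorithm (assumed to match SPSS’s behavior only in spirit, not numerically) looks like this:

```python
import numpy as np

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-8) -> np.ndarray:
    """Orthogonally rotate a (items x factors) loading matrix to the varimax criterion.

    Iteratively builds the rotation R that maximizes the variance of the
    squared loadings within each factor, which tends to drive each item
    toward a single dominant factor.
    """
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient-like target matrix for the varimax criterion
        target = L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ target)
        R = u @ vt                      # nearest orthogonal rotation
        d_prev, d = d, s.sum()
        if d_prev != 0 and d <= d_prev * (1 + tol):
            break                       # converged
    return loadings @ R

# Demo: rotating random loadings leaves communalities (row sums of squares) unchanged
rng = np.random.default_rng(0)
A = rng.normal(size=(15, 3))            # e.g. 15 items, 3 factors
B = varimax(A)
```

Because the rotation is orthogonal, each item’s communality is preserved; only how the explained variance is distributed across factors changes, which is why the rotated factor variances (23%, 20%, 17%) still sum to the unrotated 60%.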

Scale reliability

Scale reliabilities were very high for each scale. Cronbach’s alpha based on the 15 items of baseline expectations in any pharmacy (the expectation scale) was 0.89, and for situational experiences in this pharmacy (the experience scale) was 0.94. The alpha coefficients remained virtually unchanged with the deletion of any individual item, supporting the homogeneity of the scale. This is further supported by the fact that all 15 items had corrected item–total correlations greater than 0.50 except the first two items, which had item–total correlations of about 0.40. Hence, each item is well correlated with the total of the remaining items.
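The corrected item–total correlations reported here (each item correlated with the sum of the remaining items, excluding itself) can be sketched as follows; the data are invented for illustration.

```python
import numpy as np

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of all *other* items.

    'Corrected' means the item itself is excluded from the total,
    avoiding the inflation that comes from correlating an item with
    a total that contains it.
    """
    n_items = items.shape[1]
    corrs = np.empty(n_items)
    for j in range(n_items):
        rest_total = np.delete(items, j, axis=1).sum(axis=1)
        corrs[j] = np.corrcoef(items[:, j], rest_total)[0, 1]
    return corrs

# Hypothetical data: 6 respondents x 3 items on a 1-5 Likert scale
scores = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [3, 2, 3],
])
print(corrected_item_total(scores))
```

In the study’s terms, values above 0.50 for 13 of the 15 items (and about 0.40 for the first two) support scale homogeneity.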

Reliability analysis was then carried out for each of the three factor subscales. Table 4 reports the Cronbach’s alpha and item–total correlations for each subscale as a surrogate assessment of convergent validity (in the absence of other suitable measures of patient satisfaction for comparison). Reliabilities were: monitoring outcomes, α = 0.85; information and education, α = 0.80; and personalized, collaborative, and preventive care, α = 0.75; all exceeding Nunnally’s threshold of 0.70.Citation20

Table 4 Cronbach’s alpha reliability and corrected item–total correlations for each factor subscale

Sensitivity

The third objective, assessing the sensitivity of the instrument to detect differences between what patients expect and experience, was addressed using paired t-tests on individual items, on the factor subscale scores (computed as the mean of the items comprising each subscale), and on the overall scale score (computed as the mean of all 15 items). Table 5 presents descriptive statistics and t-test results; items are presented in descending order of expectation scores. The difference scores are computed as experience minus expectation, and a negative value indicates that patients’ collective experience was less satisfactory than their expectations.
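The paired t-test on the difference scores can be sketched as below; the per-patient scores are invented for illustration and are far smaller than the study’s n = 147.

```python
import numpy as np

def paired_t(experience: np.ndarray, expectation: np.ndarray):
    """Paired t statistic and mean difference (experience - expectation).

    t = mean(d) / (sd(d) / sqrt(n)), where d are the per-patient differences.
    """
    d = experience - expectation
    t = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
    return t, d.mean()

# Hypothetical overall scores for 5 patients on the 5-point scale
expectation = np.array([4.0, 5.0, 4.0, 4.0, 5.0])
experience = np.array([3.0, 4.0, 4.0, 3.0, 4.0])
t, mean_diff = paired_t(experience, expectation)
print(t, mean_diff)  # negative t: experience fell short of expectation
```

A negative mean difference, as in the study’s −0.50 overall result, corresponds to a negative t statistic of the same magnitude as the reported t = 9.07.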

Table 5 Comparison of baseline expectations in any pharmacy and in-store experiences at this pharmacy

In general (and averaged over all 15 items), patients’ situational experiences at this pharmacy were about a half-point less satisfactory (3.61 out of 5) than their baseline expectations for any pharmacy (4.11 out of 5), and highly significantly so (t = 9.07, p < 0.001). This half-point deficiency was also seen in the three factor subscale scores. A comparison of the overall expectation score to the overall experience score showed that about 78% of respondents reported that service features in this store fell below their expectations for any pharmacy in general, and these shortfalls were scattered across different stores and different chains. Thus, overall actual in-store experiences (computed as a mean of all 15 items) fell short of overall baseline expectations for about four out of five respondents.

Across all 15 service features, the most-expected item (and the best realized) was pleasant and courteous staff. Next most expected, but poorly realized, were opportunities for reasonable privacy when discussing health issues. Third most expected (and not well realized) were explanations about what to do should side effects of medications emerge. The least-expected item was follow-up by the pharmacist, by telephone or by asking between refills, on whether the patient’s medications were working; interestingly, of all 15 items, only this one had a mean expectation below the midpoint of 3 on the 5-point scale. Only two items showed no statistical differences between expectation and experience: staff courtesy and follow-up practices. All other items were deemed less satisfactory in experience than ‘expected’. Finally, one-way analysis of variance on the overall expectation score and overall experience score showed no significant differences with respect to gender, age, education, or income.

Discussion

This paper’s aims were to outline the process used to operationalize patient satisfaction with PC for APPE programs and to present the psychometric analysis conducted to confirm the reliability, validity, and sensitivity of the instrument. Face and content validity testing confirmed the appropriateness of the survey items, and psychometric analysis confirmed the survey’s internal reliability (Cronbach’s alpha for the expectation scale 0.89, for the experience scale 0.94). Additionally, the instrument’s validity was supported by the moderate to high corrected item–total correlations, and factor analysis supported a three-factor structure explaining 60% of total variance with high alpha reliabilities (monitoring outcomes, 0.85; information and education, 0.80; and personalized, collaborative, and preventive care, 0.75).

While our overall scale and subscale reliabilities are high and comparable to previously validated studies, there are four important differences worth noting.Citation9,Citation11,Citation23 First, previous studies often assessed satisfaction as a single entity; second, they included items that assessed aspects important to pharmacy services but not necessarily limited to PC activities, such as timeliness of services, cost related to medication acquisition, and appearance of the pharmacy, to name a few; third, the studies were not inclusive of all PC activities, with the exception of Traverso, where most activities were considered; and fourth, there were differences in the factor structure identified. Comparing our factor structure with that of Traverso, the one instrument that appeared to include most of the items we considered, there were key differences.Citation23 Unlike in Traverso, in our study items asking about the provision of information and education factored together but separately from items that asked about monitoring of patient outcomes. Considering that the provision of information and education and the monitoring of outcomes are supported by different practice activities and degrees of patient involvement, we believe an instrument that can discriminate between these two conceptually different collections of services is essential to analyzing them separately. For example, while explaining to a patient how each of their medications is supposed to help them can be provided within the traditional practice model as an extension of counseling, it is necessary to shift the practice model and to expect greater commitment from the patient if follow-up care by phone or at refill is to be provided for the purpose of assessing success or failure with a therapy.

Additionally, the sensitivity analysis demonstrated that the instrument is effective in discriminating between the two key determinants of satisfaction: patients’ “expectations” of PC and patients’ “perceptions of what is actually delivered.” In this study, actual service delivery was rated lower than patient expectations at most sites. Patient baseline expectations exceeded in-store experience in 13 of 15 individual items (as well as overall), demonstrating clearly that most patients expect more from any pharmacy than what they experienced in this pharmacy. The difference between overall expectation and overall experience was −0.50, representing more than a 10% gap on the 5-point scale. As well, nearly 80% of patients rated their experience lower than their expectation, identifying a clear patient service gap. Differences between expectation and experience scores confirm that parallel but contrasting measures between any pharmacy and this pharmacy provide a helpful baseline from which to calibrate the importance of the 15 individual measures, rather than relying solely on the subjective meanings of the item stem wordings and their corresponding scale points.Citation14

This survey’s ability to discriminate between patient expectation and experience is important because, unlike other patient satisfaction surveys, this version allows both components to be analyzed separately. Since satisfied patients are more likely to follow treatment instructions and medical advice and are less likely to change providers, detailed analysis of service delivery is useful to practicing pharmacists and academics interested in knowing patient expectations about PC and how well these expectations are met in practice.Citation24 More specifically, this survey format allows pharmacists to investigate how well their services meet patient expectations and, consequently, to improve on the specific service aspects where patients are less satisfied.Citation25 Similarly, schools of pharmacy can use this information to reinforce the important curricular changes that ensure students have the necessary learning opportunities to engage in PC competencies, and to work collaboratively with community pharmacy managers to determine the impact of PC-focused APPEs.

Finally, the instrument was designed to be self-administered in order to minimize any bias that pharmacists might introduce by their presence. Pharmacy students who were unacquainted with patients distributed the surveys and asked patients to deposit their completed forms into the UBC survey return box, to be delivered directly to the university and read only by university staff. Furthermore, while patients were told their personal identifiers were optional, 91 patients (62%) provided personal names and contact information for future follow-up, suggesting that most patients felt no limitations in expressing their views.

Limitation

While the authors worked to ensure representation from two different pharmacy chains with urban and rural store locations, the number of community pharmacies in this study was a small convenience sample rather than a truly large random sample; hence, there are limitations of range. Additionally, participants were not fully representative of the general population of patients seen by community pharmacies, as only those asthma patients who were able to speak and read English were invited to complete the survey. Asthma patients clearly differ from patients with other chronic diseases or acute illnesses in terms of health perceptions and illness experiences, likely resulting in different health-related behaviors and PC needs, thus raising questions about the generalizability of the results.Citation26 Furthermore, as with any cross-sectional study, certain threats to internal validity need to be considered. First, surrogate bias cannot be ruled out, as patients were allowed to take the survey home to complete. It is possible under such circumstances that someone other than the patient, who may not have had the same perspective as the patient, completed the survey on their behalf. Second, reporting bias needs to be considered, as it is plausible that some patients may have been reluctant to report decreased satisfaction due to certain beliefs or perceptions. Next, while all patients visiting the pharmacy during the study period would have done so regardless, no records were kept of patients who visited but returned no questionnaire. Since those who responded could have been different from nonrespondents, potential response biases may exist for wider samples of patients. Hence, future survey testing with wider patient populations will strengthen its generalizability. In addition, further research should check the test–retest reliability of the scale over a reasonable time interval to confirm consistency of results.

Conclusion

The study was prompted by a lack of validated survey tools to determine patient responses to PC-related student APPE activities. Study findings confirm the reliability and validity of this 15-item satisfaction survey in community pharmacy settings. An obvious next step is to contrast patient satisfaction in sites with enhanced PC service delivery practices against patient responses in more traditional settings in order to quantify PC benefits.

Acknowledgements

This project was supported by the SPEP development and evaluation fund, an initiative supported by independent and chain community pharmacies of British Columbia, the College of Pharmacy of British Columbia, and the Faculty of Pharmaceutical Sciences at UBC. The authors report no conflicts of interest that are relevant to the content of this study.

References

  • College of Pharmacists of British Columbia. Framework of professional practice. Mar 2006. Accessed Mar 10, 2009. Available from: http://www.bcpharmacists.org/library/D-Legislation_Standards/D-6_Standards_of_Practice/1009-FPP.pdf
  • National Association of Pharmacy Regulatory Authorities. Model standards of practice for Canadian pharmacists. Apr 2003. Accessed Mar 10, 2009. Available from: http://www.napra.org/docs/0/95/123.asp
  • The Association of Faculties of Pharmacy of Canada (AFPC). Development of levels and ranges of educational outcomes expected of baccalaureate graduates. 1998. Accessed Mar 10, 2009. Available from: http://www.afpc.info/downloads/1/Educational_Outcomes_1999.pdf
  • Canadian Council for Accreditation of Pharmacy Programs. Accreditation standards and guidelines for the baccalaureate degree program in pharmacy. 2006. Accessed Mar 10, 2009. Available from: http://www.ccapp-accredit.ca/standards
  • Accreditation Council for Pharmacy Education (ACPE). Accreditation standards and guidelines for the professional program in pharmacy leading to the Doctor of Pharmacy degree. July 2007. Accessed Mar 10, 2009. Available from: http://www.acpe-accredit.org/standards/default.asp
  • Cipolle RJ, Strand LM, Morley PC. Pharmaceutical Care Practice. 1st ed. New York, NY: McGraw-Hill; 1998.
  • Rupp MT. Compensation for pharmaceutical care. In: Knowlton CH, Penna RP, editors. Pharmaceutical Care. New York, NY: Chapman and Hall; 1996:283–296.
  • Kucukarslan S, Schommer JC. Patients’ expectations and their satisfaction with pharmacy services. J Am Pharm Assoc. 2002;42:489–496.
  • Oliver RL. A cognitive model of the antecedents and consequences of satisfaction decisions. J Marketing Res. 1980;17:460–469.
  • Gourley GK, Gourley DR, Rigolosi ELM, Reed P, Solomon DK, Washington E. Development and validation of the pharmaceutical care satisfaction questionnaire. Am J Manag Care. 2001;7:461–466.
  • Larson LN, Rovers JP, MacKeigan LD. Patient satisfaction with pharmaceutical care: update of a validated instrument. J Am Pharm Assoc. 2002;42(1):44–50.
  • Baron-Epel O, Dushenat M, Friedman N. Evaluation of the consumer model: relationship between patients’ expectations, perceptions and satisfaction with care. Int J Qual Health Care. 2001;13(4):317–323.
  • Reid LD, Wang F, Young H, Awiphan R. Patients’ satisfaction and their perception of the pharmacist. J Am Pharm Assoc. 1999;39(6):835–842.
  • Williams B. Patient satisfaction: a valid concept? Soc Sci Med. 1994;38(4):509–516.
  • Padiyara RS, McCord AD, Lurvey PL. Fourth professional year pharmacy students in the ambulatory care setting: patient perceptions and satisfaction. J Pharm Teach. 2006;13(2):17–37.
  • Grabe DW, Bailie GR, Manley HJ. The early patient-oriented care program as an educational tool and service. Am J Pharm Educ. 1998;62:279–284.
  • Hersmansen CJ, Pigarelli DW, Sorkness CA, Wiederholt JB. The patient perspective of pharmacy clerkship students’ role in pharmacist-patient relationship development and the delivery of care. Am J Pharm Educ. 2000;64:413–419.
  • Stewart DW. The application and misapplication of factor analysis in marketing research. J Market Res. 1981;18:51–62.
  • Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
  • Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York, NY: McGraw-Hill Inc; 1994.
  • Hair JF, Anderson RE, Tatham RL, Black WC. Multivariate Data Analysis with Readings. 4th ed. New Jersey: Prentice Hall; 1995.
  • Messick S. Validity. In: Linn RL, editor. Educational Measurement. 3rd ed. New York, NY: Macmillan; 1989:13–103.
  • Traverso ML, Salamano M, Botta C, Colautti M, Palchik V, Perez B. Questionnaire to assess patient satisfaction with pharmaceutical care in Spanish language. Int J Qual Health Care. 2007;19(4):217–223.
  • Hardy GE, West MA, Hill F. Components and predictors of patient satisfaction. Br J Health Psychol. 1996;1:65–85.
  • Harris LE, Swindle RW, Mungai SM, Weinberger M, Tierney WM. Measuring patient satisfaction for quality improvement. Med Care. 1999;37(12):1207–1213.
  • Rich M, Taylor SA, Chalfen R. Illness as a social construct: understanding what asthma means to the patient to better treat the disease. Jt Comm J Qual Improv. 2000;26(5):244–253.