ABSTRACT

Croucher and Kelly (2019) laid out guidelines for developing measures that can be used across cultures. The present study supports those guidelines, indicating that pan-cultural measures should not be behavioral, should not include unnecessary context, and should be worded as simply as possible. The study uses measures of dissent, perceived immediacy, and workplace freedom of speech in Australia, Canada, New Zealand, and the United States. Only the perceived immediacy measure, which follows Croucher and Kelly’s (2019) guidelines, maintained its factor structure.

Research into communication behaviors, traits, and attitudes is increasingly conducted outside of the United States. It is therefore common for researchers to adapt measures developed in the U.S. for testing in non-U.S. settings (Croucher & Kelly, 2019). With such modification come questions of translation, measurement invariance, equivalence, and reliability (Croucher et al., 2019, 2020; Gudykunst, 2003). Croucher and Kelly (2019) identified various validity and reliability issues that arise when researchers adapt measures cross-culturally or attempt to develop measures that function pan-culturally. They also proposed steps to manage validity issues in adapting measures: avoid using behavioral proxies to measure psychological states, avoid unnecessary contextual cues within measures, and use simple, concise wording whenever possible. The purpose of this study was to test these recommendations.

Croucher and Kelly’s (2019) guidelines are designed to produce measures with high generalizability of validity. Generalizability of validity is the extent to which a measure maintains evidence of validity in a population for which it was not designed, such as a different culture (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 2014). A critical component of testing for validity is evidence of content validity, wherein researchers verify that the hypothesized factor structure of a measure has been maintained within the new sample (Kelly & Westerman, 2020). Confirmatory factor analysis (CFA) is used to determine whether a factor structure has been maintained in a new sample: the better the data fit the hypothesized factor structure, the stronger the evidence that the measure has content validity. Evidence of generalizability of validity is provided when a measure maintains its factor structure across multiple or diverse samples representing populations unlike the one in which it was validated (AERA et al., 2014).

In their guidance on developing measures that maintain generalizability of validity pan-culturally, Croucher and Kelly (2019) also explain that behavioral measures are unlikely to work well pan-culturally because different cultures assign different meanings to behaviors; unique behavioral measures should therefore be developed for each culture. The current study cross-culturally assessed the validity and reliability of three U.S.-designed measures often used to assess workplace communication constructs: the Organizational Dissent Scale (ODS; Kassing, 2000), the Workplace Freedom of Speech Scale (WFSS; Gorden & Infante, 1991), and the Perceived Immediacy Measure (Kelly et al., 2015).

Dissent, workplace freedom of speech, and perceived immediacy

Organizational dissent is “expressing disagreement or contradictory opinions about organizational practices, policies, and operations” (Kassing, 1998, p. 183). Researchers exploring organizational dissent originally used Kassing’s (1998) 24-item measure, which assessed dissent expressed within (articulated and latent) and outside (displaced) an organization. In recent years researchers have focused more on dissent within organizations (e.g., Croucher et al., 2019; Kassing, 2006; Zaini et al., 2016) and have thus used Kassing’s (2000) 18-item measure, which assesses only articulated and latent dissent. Articulated dissent is expressed to management; latent dissent is shared with colleagues at a similar level. Increasingly, organizational dissent research has been conducted cross-culturally and in non-U.S. settings (Croucher et al., 2021; Kassing & Avtgis, 1999; Zeng et al., 2020). Both latent and articulated dissent are forms of employee voice and are typically positively correlated because both are predicted by a variety of personality traits (Zeng, 2018). Yet supervisor communicative behaviors also predict whether employees will engage in articulated dissent (Croucher et al., 2021; Kelly et al., 2023).

Research conducted during the COVID-19 pandemic found that perceived immediacy mediated the relationship between a supervisor’s communicative behaviors and a subordinate’s articulated dissent (Kelly et al., 2023). Perceived immediacy is the perception of psychological closeness between communicators (Kelly et al., 2015). Perceived immediacy with one’s supervisor has also been studied cross-culturally alongside articulated dissent, though again during COVID-19 (Croucher et al., 2021). While there is evidence that, at the height of the pandemic, subordinates’ perceived immediacy with their supervisor was weakly to moderately correlated with their willingness to engage in articulated dissent, it is not known whether this pattern holds now that the world has returned to more normal working conditions.

Workplace freedom of speech is the extent to which individuals perceive that their organization fosters open communication (Gorden & Infante, 1991). Members of organizations with higher levels of workplace freedom of speech tend to report higher levels of identification, commitment, and articulated and latent dissent (Croucher et al., 2014; Kassing, 2000, 2006).

This study examines these workplace variables cross-culturally through the lens of Croucher and Kelly’s (2019) guidance. The measures were chosen for two reasons. First, one follows Croucher and Kelly’s guidance (perceived immediacy), one does not (dissent), and one follows the guidance with only half of its items (workplace freedom of speech). As such, the three measures represent a range of consistency and inconsistency with the guidance. Second, dissent behaviors, perceived immediacy with one’s supervisor, and workplace freedom of speech all relate to the theme of open communication with the people participants interact with most in the workplace (i.e., the immediate supervisor and close colleagues). While any set of measures could be chosen for such an investigation, the researchers sought measures that fell under a single theme to reduce participants’ need to mentally pivot across referents while completing the questionnaire. The less mental pivoting an assessment requires, the less likely participants are to experience mental fatigue, which is a direct threat to the validity of a measure (Campbell & Stanley, 1963; Kerlinger & Lee, 2000). As such, this cluster of measures was chosen to remove validity threats not inherent to measure design.

Because Croucher and Kelly’s (2019) guidance specifically concerns maintaining validity cross-culturally, cross-cultural samples were needed. Australia, Canada, New Zealand, and the U.S. are all English-speaking countries; samples from these countries were therefore targeted because they allow measures to be examined cross-culturally without introducing validity threats through translation error. As such, the following research question is proposed:

RQ: Do the organizational dissent, workplace freedom of speech, and perceived immediacy measures produce evidence of generalizability of validity across Australia, Canada, New Zealand, and the U.S.?

Method

Participants and procedure

Data for this study were collected through Qualtrics. The researchers paid Qualtrics approximately US$5 for each completed response to the online questionnaire. The survey panel included participants from Australia, Canada, New Zealand, and the U.S. who were native English speakers, were at least 18 years of age, and held full-time employment with a direct supervisor. An overview of participant demographics appears in Table 1.

Table 1. Participants.

Instrumentation

Articulated and latent dissent were measured with Kassing’s (2000) instrument, composed of 18 Likert-type items with 5-point response scales: nine items assess articulated dissent and nine assess latent dissent. The measure targets behavioral constructs, and all items reference behaviors. In this sample, reliability for the latent sub-measure was ω = .76, α = .76; reliability for the articulated sub-measure was α = .48 (omega could not be computed because of several negative relationships in the inter-item correlation matrix).

Perceived immediacy was assessed with Kelly et al.’s (2015) measure, composed of nine semantic differential items with 7-point response ranges. The measure assesses a perception, and its items do not reference behaviors. Reliability was ω = .96, α = .96.

Workplace freedom of speech was measured with Gorden and Infante’s (1991) instrument, composed of 10 Likert-type items with 5-point response ranges. The measure assesses a psychological construct but includes four items referencing behaviors and six referencing perception. Reliability was ω = .89, α = .90.

Results

Before analysis, each measure was checked for normality. No signs of skewness, kurtosis, or multimodality were observed. Table 2 displays the descriptive statistics for all measures. With no threats to normality, the data were appropriate for analysis.

Table 2. Descriptives.
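Screening of this kind is straightforward to script. The following is a minimal sketch in Python, assuming pandas and scipy and using hypothetical file and scale names; the |skew| > 2 and |kurtosis| > 7 flags are common rules of thumb rather than criteria stated in this study.

```python
# Minimal normality screen; file name, scale names, and cutoffs are
# illustrative assumptions, not the authors' actual analysis pipeline.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")                          # hypothetical data file
scales = ["articulated", "latent", "immediacy", "wfs"]  # hypothetical composite scores

for name in scales:
    x = df[name].dropna()
    skew = stats.skew(x)
    kurt = stats.kurtosis(x)  # excess kurtosis; 0 for a normal distribution
    flag = "check" if abs(skew) > 2 or abs(kurt) > 7 else "ok"
    print(f"{name}: skew = {skew:.2f}, kurtosis = {kurt:.2f} [{flag}]")

# Multimodality is easiest to spot visually, e.g., df[scales].hist()
```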

To assess the measures, a CFA was performed on each. CFA tests the content validity of a measure by assessing whether its hypothesized factor structure is maintained within the sample (Kelly & Westerman, 2020). Byrne (2016) recommends the following heuristics for determining whether measures are fit for hypothesis testing: goodness-of-fit index (GFI) ≥ .90, comparative fit index (CFI) ≥ .90, standardized root mean square residual (SRMR) ≤ .08, and, ideally, root mean square error of approximation (RMSEA) ≤ .10. Because RMSEA is extremely sensitive, a measure may justifiably be retained for further analysis even when RMSEA alone is poor (Chen et al., 2008). Fit statistics for these measures appear in Table 3. They indicate that the behavioral measures (articulated and latent dissent) did not maintain their factor structure when used across these cultures, nor did the workplace freedom of speech measure. The perceived immediacy measure, however, maintained its factor structure.

Table 3. Fit statistics.
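To make the CFA step concrete, below is a minimal single-factor sketch in Python. The semopy package, data file, and item names (art1 through art9) are illustrative assumptions, not the authors' tooling; semopy's calc_stats does not report SRMR, so only GFI, CFI, and RMSEA are checked here.

```python
# Minimal single-factor CFA sketch; semopy, the data file, and the item
# names are assumptions for illustration.
import pandas as pd
import semopy

# Hypothetical one-factor specification for the nine articulated dissent items.
desc = "Articulated =~ art1 + art2 + art3 + art4 + art5 + art6 + art7 + art8 + art9"

df = pd.read_csv("survey.csv")  # hypothetical data file
model = semopy.Model(desc)
model.fit(df)

fit = semopy.calc_stats(model).iloc[0]  # one row of fit indices

# Byrne's (2016) heuristics as applied in this study (SRMR <= .08 would
# need to be computed separately).
for index, ok in [("GFI", fit["GFI"] >= .90),
                  ("CFI", fit["CFI"] >= .90),
                  ("RMSEA", fit["RMSEA"] <= .10)]:
    print(f"{index} = {fit[index]:.3f}: {'pass' if ok else 'fail'}")
```

The same routine can be re-run on item subsets, which is essentially how the respecification reported under supplemental analyses below proceeds.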

Supplemental analyses

The workplace freedom of speech measure was reexamined using only the six items free of behavioral references. The fit was poor (GFI = .80, CFI = .75, RMSEA = .33, SRMR = .10). However, the standardized residual covariance matrix indicated that only one item, “In my work organization there is more concern for quantity than quality,” was problematic. This item is arguably double-barreled, as it requires participants to assess both quantity and quality. After that item was removed, the fit of the remaining five perceptual items was acceptable (GFI = .93, CFI = .93, RMSEA = .19, SRMR = .05). Reliability for this respecified measure was ω = .86, α = .86.
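A rough, package-agnostic analogue of this residual check is to compare observed item correlations against those reproduced from a one-factor solution and flag the item with the largest discrepancies. The sketch below works in the correlation metric as a simplified stand-in for the standardized residual covariance matrix; the factor_analyzer package, file name, and item labels are assumptions.

```python
# Residual-correlation diagnostic: fit one factor, reproduce the correlation
# matrix from the loadings, and flag large observed-minus-implied residuals.
# Package choice, file name, and item labels are illustrative assumptions.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = ["wfs1", "wfs2", "wfs3", "wfs4", "wfs5", "wfs6"]  # hypothetical labels
df = pd.read_csv("survey.csv")                            # hypothetical data file
X = df[items].dropna()

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(X)
lam = fa.loadings_[:, 0]          # standardized loadings

observed = X.corr().to_numpy()
implied = np.outer(lam, lam)      # one-factor implied correlations
np.fill_diagonal(implied, 1.0)

residual = observed - implied
np.fill_diagonal(residual, 0.0)
for item, r in zip(items, np.abs(residual).max(axis=1)):
    print(f"{item}: max |residual correlation| = {r:.3f}")
```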

The correlation matrix for these measures, broken down by country, is shown in Table 4. The only statistically significant correlations were between articulated and latent dissent. (Note: the two versions of the workplace freedom of speech measure were also highly correlated, but because the respecified version is a subset of the original measure, a correlation near 1.00 is expected.)

Table 4. Correlation matrices.

Discussion

CFA was used on the four-nation sample to identify whether the measures maintain factor structure when the sample is culturally diverse. Only the perceived immediacy measure, the one measure in this study developed consistent with Croucher and Kelly’s (2019) guidance for pan-cultural measurement, maintained its factor structure. That is, only the measure that was perceptual rather than behavioral, avoided unnecessary context in its items, and was worded as simply as possible (via its semantic differential structure) yielded evidence of content validity with the culturally diverse sample. As such, only the measure that followed the guidance for achieving pan-cultural validity showed evidence of generalizability of validity across these cultures.

The dissent measure did not yield evidence of generalizability of validity, as its factor structure was not maintained within this culturally diverse sample. As these are behavioral sub-measures, this finding is consistent with Croucher and Kelly’s (2019) expectation that behavioral assessments will not work on multi-national datasets, given that cultures assign different meanings to behaviors. To be clear, this guidance does not mean behaviors cannot be objectively counted across cultures. How often someone smiles during a speech, for example, is an objective behavioral measure that can be recorded by an observer. Following Croucher and Kelly’s (2019) guidance, the act of smiling is considered an immediate behavior whose meaning may vary by culture; therefore, the set of behaviors treated as immediate should be validated within each culture (cf. Kelly et al., 2015).

Notably, this dataset did not yield the positive relationship between articulated dissent and perceived immediacy found in prior studies (e.g., Croucher et al., 2021; Kelly et al., 2023). There are a few potential explanations. First, the positive relationship between these two variables may be unique to data collected during the pandemic. Second, it could be an issue of calibration. Both Croucher et al. (2021) and Kelly et al. (2023) made a small respecification to the articulated dissent sub-measure to address minor misfit in their samples. While the misfit in the present study is far greater than in those two studies, part of the fit issues observed here could relate to the age of the measure. Over time, the validity of a measure may decline as the meanings behind words change, and younger generations of participants may interpret an item’s wording differently than the generation for which the measure was developed (Autman & Kelly, 2017). The dissent measure was developed more than 20 years ago (Kassing, 2000); its validity may be decreasing over time, indicating that it may be time to develop new measures. Future research is needed to explain this finding.

While the workplace freedom of speech measure did not maintain its factor structure in its original form, the five perceptual items (the measure less the four behavioral items and the double-barreled item) fit acceptably. This indicates that the five items developed in line with Croucher and Kelly’s (2019) guidance, with a perceptual rather than behavioral referent, no unnecessary context, and simple wording, yielded evidence of generalizability of validity. The items that did not follow this guidance yielded no such evidence. The analysis of this measure thus supports Croucher and Kelly’s (2019) guidance for pan-cultural measurement development.

The findings of this study provide an example of measures consistent (i.e., perceived immediacy and some workplace freedom of speech items) and inconsistent (i.e., organizational dissent and the remaining workplace freedom of speech items) with Croucher and Kelly’s (2019) recommendations for measurement development and demonstrate how these measures fit when used cross-culturally. As expected, the behavioral items did not show evidence of generalizability of validity, yet the perceptual items did. As Croucher and Kelly (2019) explain, the meanings placed on behaviors vary by culture while psychological processes are common to all humans; behavioral items are therefore the most likely to fail to maintain factor structure across cultures and to fall short of standards of content validity.

However, Croucher and Kelly’s (2019) guidance is also clear that the type of item (i.e., behavioral versus perceptual) should match the construct. That behavioral measures are unlikely to maintain generalizability of validity across cultures does not mean perceptual measures should be used as proxies for behavioral constructs in such studies. Instead, behavioral measures should be developed uniquely for each culture. Moreover, mixing perceptual and behavioral referents conflates constructs, a practice measurement development initiatives should avoid regardless of whether the measure is designed for cross-cultural use (Kelly et al., 2015). Behavioral variables should be composed of items that reference behaviors; likewise, perceptual variables should be assessed only with items that reference perception. Workplace freedom of speech is defined as a perception; as such, it should be measured with items referencing perception rather than behaviors in any culture.

Croucher and Kelly’s (2019) advice to word items concisely and simply is good guidance for all measurement development work, even though it was offered specifically to ease translation of measures developed for cross-cultural use. Simple, concise wording makes items more accessible to participants across literacy and education levels. It also helps measurement developers avoid adding extra referents to an item (e.g., double-barreled items), as occurred in the workplace freedom of speech measure.

Lastly, the evidence from this study supports other scholars’ calls regarding the reliability of measures. Goodboy and Martin (2020) called for communication scholars to stop reporting alpha reliabilities and move to reporting omega reliabilities, because omega is more discerning when calculating reliability for measures composed of many items and can alert researchers to validity issues. An omega reliability score cannot be calculated when there are negative correlations among the items of a measure; if all items composing a measure assess the same construct, they should all be positively correlated. These data support Goodboy and Martin’s (2020) call, given that the articulated dissent measure, which yielded the worst fit and showed no evidence of content validity, could not yield an omega reliability score.
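To make the alpha/omega contrast concrete, the sketch below computes both coefficients for a unidimensional measure and applies the negative inter-item correlation check described above. The formulas are standard (alpha from item and total-score variances; omega total as (Σλ)² / ((Σλ)² + Σθ) from a one-factor solution), but the factor_analyzer package, file name, and item names are assumptions.

```python
# Alpha versus omega for a unidimensional measure; package choice, file
# name, and item names are illustrative assumptions.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(X: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = X.shape[1]
    return (k / (k - 1)) * (1 - X.var(ddof=1).sum() / X.sum(axis=1).var(ddof=1))

def mcdonald_omega(X: pd.DataFrame) -> float:
    # omega total = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(X)
    lam = fa.loadings_[:, 0]
    theta = fa.get_uniquenesses()
    return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

df = pd.read_csv("survey.csv")                 # hypothetical data file
items = df[[f"art{i}" for i in range(1, 10)]]  # hypothetical dissent items

corr = items.corr().to_numpy()
if (corr[np.triu_indices_from(corr, k=1)] < 0).any():
    print("Negative inter-item correlations present; omega is not interpretable.")
else:
    print(f"omega = {mcdonald_omega(items):.2f}")
print(f"alpha = {cronbach_alpha(items):.2f}")
```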

These results echo the caution of multiple scholars (e.g., Croucher & Kelly, 2019; Croucher et al., 2019, 2020; Kelly & Westerman, 2020) who maintain that evidence of reliability does not imply evidence of validity. The workplace freedom of speech measure had a high reliability score, yet the CFA identified content validity issues: a mismatch between behavioral and perceptual items and a double-barreled item. Reliability checks do not test whether items assess the same construct and should not be used to make that assertion.

Conclusion

While the results of this study support Croucher and Kelly’s (2019) guidelines, this is only one selection of variables. Before the present study, no research had tested whether these recommendations, based on logic rather than data, hold in practice. Scholars who engage in cross-cultural research are encouraged to replicate the process demonstrated in this paper to determine whether their data align with Croucher and Kelly’s guidelines or suggest the guidelines need modification.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Massey University.

Notes on contributors

Stephen M. Croucher

Stephen M. Croucher is Professor and Head of School, School of Communication, Journalism and Marketing, Massey University.

Stephanie Kelly

Stephanie Kelly is Professor of Business Information Systems & Analytics, North Carolina A&T State University.

Doug Ashwell

Doug Ashwell is Senior Lecturer and Associate Head of School, School of Communication, Journalism and Marketing, Massey University.

Shawn Condon

Shawn Condon is Senior Tutor, School of Communication, Journalism and Marketing, Massey University.

Beth Tootell

Beth Tootell is Senior Lecturer and Associate Head of School, School of Management, Massey University.

References

  • American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). The standards for educational and psychological testing (2014 edition). American Psychological Association.
  • Autman, H., & Kelly, S. (2017). Reexamining the writing apprehension measure. Business and Professional Communication Quarterly, 80(4), 516–529. https://doi.org/10.1177/2329490617691968
  • Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming. Routledge.
  • Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Rand-McNally.
  • Chen, F., Curran, P. J., Bollen, K. A., Kirby, J., & Paxton, P. (2008). An empirical evaluation of the use of fixed cutoff points in RMSEA test statistic in structural equation models. Sociological Methods & Research, 36(4), 462–494. https://doi.org/10.1177/0049124108314720
  • Croucher, S. M., & Kelly, S. (2019). Measurement in intercultural and cross-cultural communication. In E. E. Graham & J. P. Mazer (Eds.), Communication research measures III: A sourcebook (pp. 141–159). Routledge.
  • Croucher, S. M., Kelly, S., Hui, C., Rocker, K. T., Cullinane, J., Homsey, D., Ding, G. G., Nguyen, T., Anderson, K. J., Green, M., Ashwell, D., Wright, M., & Palakshappa, N. (2021). Articulated dissent and immediacy: A cross-national analysis of the effects of COVID-19 lockdowns. International Journal of Conflict Management, 33(2), 181–202. https://doi.org/10.1108/IJCMA-04-2021-0062
  • Croucher, S. M., Kelly, S., Rahmani, D., Burkey, M., Subanaliev, T., Galy-Badenas, F., Zeng, C., Jackson, K., Turdubaeva, E., & Eskiçorapçı, N. (2020). A multi-national validity analysis of the self-perceived communication competence scale. Journal of International & Intercultural Communication, 13(1), 1–12. https://doi.org/10.1080/17513057.2019.1569250
  • Croucher, S. M., Kelly, S., Rahmani, D., Subanaliev, T., Jackson, K., Galy-Badenas, F., Lando, A. L., Chibita, M., Nyiranasbimana, V., Turdubaeva, E., Eskiçorapçı, N., Condon, S., Stanalieva, G., & Orunbekov, B. (2019). A multi-national validity analysis of the personal report of communication apprehension (PRCA-24). Annals of the International Communication Association, 43(3), 193–209. https://doi.org/10.1080/23808985.2019.1602783
  • Croucher, S. M., Parrott, K., Zeng, C., & Gomez, O. (2014). A cross-cultural analysis of organizational dissent and workplace freedom in five European economies. Communication Studies, 65(3), 298–313. https://doi.org/10.1080/10510974.2013.811430
  • Goodboy, A. K., & Martin, M. M. (2020). Omega over alpha for reliability estimation of unidimensional communication measures. Annals of the International Communication Association, 44(4), 422–439. https://doi.org/10.1080/23808985.2020.1846135
  • Gorden, W. I., & Infante, D. A. (1991). Test of a communication model of organizational commitment. Communication Quarterly, 39(2), 144–155. https://doi.org/10.1080/01463379109369792
  • Gudykunst, W. B. (2003). Issues in cross-cultural communication research. In W. B. Gudykunst (Ed.), Cross-cultural and intercultural communication (pp. 149–161). Sage.
  • Kassing, J. W. (1998). Development and validation of the organizational dissent scale. Management Communication Quarterly, 12(2), 183–229. https://doi.org/10.1177/0893318998122002
  • Kassing, J. W. (2000). Investigating the relationship between superior‐subordinate relationship quality and employee dissent. Communication Research Reports, 17(1), 58–69. https://doi.org/10.1080/08824090009388751
  • Kassing, J. W. (2006). Employees’ expressions of upward dissent as a function of current and past work experiences. Communication Reports, 19(2), 79–88. https://doi.org/10.1080/08934210600917115
  • Kassing, J. W., & Avtgis, T. A. (1999). Examining the relationship between organizational dissent and aggressive communication. Management Communication Quarterly, 13(1), 76–91. https://doi.org/10.1177/0893318999131004
  • Kelly, S., Rice, C., Wyatt, B., Ducking, J., & Denton, Z. (2015). Teacher immediacy and decreased student quantitative reasoning anxiety: The mediating effect of perception. Communication Education, 64(2), 171–186. https://doi.org/10.1080/03634523.2015.1014383
  • Kelly, S., & Westerman, D. (2020). Doing communication science: Thoughts on making more valid claims. Annals of the International Communication Association, 44(3), 177–184. https://doi.org/10.1080/23808985.2020.1792789
  • Kelly, S., Zeng, C., & Cundall, M. K., Jr. (2023). Subordinate articulated dissent as influenced by supervisor behaviors: The hazards of humor. International Journal of Business Communication. Advance online publication. https://doi.org/10.1177/23294884231166405
  • Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research (4th ed.). Wadsworth.
  • Zaini, R. M., Elmes, M. B., Pavlov, O. V., & Saeed, K. (2016). Organizational dissent dynamics: A conceptual framework. Management Communication Quarterly, 31(2), 258–277. https://doi.org/10.1177/0893318916671216
  • Zeng, C. (2018). Employee dissent: A means to facilitate constructive conflicts in organizations. In S. M. Croucher, B. Lewandowska-Tomaszczyk, & P. A. Wilson (Eds.), Conflict, mediated message, and group dynamics: Intersections of communication (pp. 67–80). Rowman & Littlefield.
  • Zeng, C., Permyakova, T. M., Smolianina, E. A., & Morozova, I. S. (2020). Exploring the relationships between employee burnout, organizational dissent and work-family culture in Russian organizations. Journal of Intercultural Communication Research, 49(2), 119–132. https://doi.org/10.1080/17475759.2020.1719430