
Analyzing survey data in marketing research: A guide for academics and postgraduate students

Pages 203-215 | Received 20 Dec 2021, Accepted 19 Jan 2023, Published online: 07 Feb 2023

ABSTRACT

Numerous marketing scholars have been overly loyal to seminal (and outdated) analytical processes when using surveys. If this continues, these individuals may receive rejection decisions, as the standards of the so-called publishing game have increased. Accordingly, from an Associate Editor perspective, this commentary provides a contemporary guide (with thirteen stages) on how survey data can be effectively analyzed. This covers a series of basic techniques (like treating missing values, presenting profiling information about samples, and undertaking descriptive statistical analyses), alongside describing some scale purification tools and several complex procedures (e.g., running tests for various forms of reliability and validity, addressing common method variance, and testing for endogeneity bias). Then, some conclusions and methodological recommendations are offered to help authors to advance theory and practice with robust survey data. It is anticipated that these discussion points will facilitate debates among readers of the Journal of Strategic Marketing (and beyond) about the analysis of survey data.

Introduction

Surveys are at the forefront of the broader marketing discipline – mostly because they are relatively cheap, can quickly reach a large number of people, and are likely to generate findings that advance theory and practice (Hulland et al., 2018). Despite these benefits, numerous researchers have overlooked certain analytical techniques that must be undertaken when using surveys. That is, unfortunately, some authors (including those submitting work to the Journal of Strategic Marketing) have opted to depend on seminal (and outdated) papers without considering key modern-day robustness checks (e.g. Ageeva et al., 2019; Foroudi et al., 2018; Tourky et al., 2020). This is a pity, since following this approach may lead to rejection decisions, as the standards of journal editors and reviewers have increased. Accordingly, the objective of this commentary is to provide a contemporary guide (from an Associate Editor point-of-view) on how marketing academics and postgraduate students can effectively analyze survey data. By achieving this objective, the subsequent contributions are made to enhance the extant literature:

  1. Using best practices from the wider marketing domain, a thirteen-stage guide is produced to assist with the analysis of survey data. This should bring some scholars up-to-speed, rather than being focused on the past (as per Ageeva et al., 2019; Foroudi et al., 2018; Tourky et al., 2020).

  2. Alongside covering several basic tools (e.g. missing value analyses), some complex processes are evaluated, such as addressing common method variance and testing for endogeneity bias (reinforcing Baumgartner & Weijters, 2021; Sande & Ghosh, 2018). These demonstrate the importance of analyzing survey data in a sequential manner to yield robust findings.

  3. Since top-tier journals are shifting their preferences towards field experiments (Antonakis et al., 2010; Rutz & Watson, 2019), emphasis is placed on how surveys can be revitalized (via comprehensive analyses) to make strong contributions to theory and practice.

  4. A few differing perspectives pertaining to quantitative methods are highlighted, as some techniques have been criticized (e.g. partial least squares structural equation modelling – like Smart PLS) (Cadogan & Lee, 2023). This improved awareness encourages caution to be exercised in the so-called publishing game.

  5. It is expected that debates will commence among readers of the Journal of Strategic Marketing (and other respected outlets) about how best to analyze survey data (extending Hulland et al., 2018). It is hoped that such discussions will facilitate stringent future research.

The remaining sections cover the literature retrieval processes, the above-mentioned guide, as well as some conclusions and methodological recommendations.

Literature retrieval processes

The following literature selection criteria were followed (see Note 1). First, using the Australian Business Deans Council list, respected marketing (and wider commercial) outlets were chosen (Donthu et al., 2022). Where possible, and to be comparable with the Journal of Strategic Marketing, these were rated as an A or A*, with large impact factors. Second, editorials were examined to highlight areas that journal editors (and to some extent reviewers) have recommended, alongside methodological approaches that they are opposed to (Guide & Ketokivi, 2015). This identified some notable debates (and prominent authors) regarding quantitative methods. Third, key books were referenced to acknowledge important developments pertaining to survey-based methodologies (Churchill, 1995; Dillman et al., 2009). These were helpful for acquiring insights that were less common in journal articles. To process these sources, seminal material was compared with more recent studies to show the evolution of surveys over time.

Analyzing survey data in marketing research

Treating missing values (step 1)

Missing values are empty cells in a data file that arise when respondents do not answer certain survey questions (whether accidentally or on purpose). Researchers may need to delete any columns and/or rows of data where there are missing values. This might be necessary if respondents have provided very little information. If there are only a handful of missing values, certain procedures can be employed to replace these scores. One solution is expectation maximization, which typically involves substituting these cells with the means for given constructs (via SPSS) (Hamzah et al., 2023). That said, the replaced values may not be what the respondents would have provided if they had answered such questions. Hence, if missing values are replaced, authors must outline their reasons for employing a chosen technique(s), as all options have notable benefits and drawbacks.
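
For illustration, a minimal sketch of this step is provided below using Python's pandas library (instead of SPSS). The file name, the deletion rule for sparse respondents, and the simple mean substitution are assumptions for demonstration purposes – not prescriptions (and SPSS's expectation maximization routine is more sophisticated than plain mean replacement).

```python
import pandas as pd

# Hypothetical survey data: NaN marks unanswered questions
df = pd.read_csv("survey_responses.csv")

# Drop respondents who provided very little information
# (here, an assumed rule of more than half of the items missing)
df = df[df.isna().mean(axis=1) <= 0.50]

# Report how many values remain missing per item
print(df.isna().sum())

# Simple mean substitution for the remaining gaps; the idea mirrors
# replacing empty cells with plausible values for each construct's items
df = df.fillna(df.mean(numeric_only=True))
```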

Outlining sample characteristics (step 2)

Once a final sample size has been established, the associated characteristics should be displayed. This encapsulates using SPSS to outline the regional locations, exporting activities, numbers of employees, industry types, annual sales, and so on for a given sample (Karami et al., 2023). Without such information, it is difficult to determine which individuals have responded to a survey. Such details could denote generalizability, in which a sample is diverse enough to be transferrable to wider populations. Equally, they might highlight certain problems with a sample, like being exclusive to a particular regional area. This helps academics and postgraduate students to acknowledge the limitations of their samples – or motivates them to collect improved survey data.
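
As a brief illustration, profiling tables of this kind can also be generated outside SPSS; the sketch below assumes a hypothetical data file and hypothetical profiling variables (e.g. region and industry type):

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file

# Frequency tables for the profiling variables (counts and percentages)
for column in ["region", "industry_type", "annual_sales_band"]:
    profile = df[column].value_counts()
    print(pd.concat([profile, (profile / len(df) * 100).round(2)],
                    axis=1, keys=["n", "%"]))
```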

Introductory statistical analyses (step 3)

Some statistical analyses can be used as pre-cursors to more complex assessments in survey-based research. By way of illustration, it is helpful to consider early versus late response bias. Specifically, reminders are sometimes necessary to boost responses, but participants who answer a set of survey questions after being politely nudged may provide skewed information. This can be tested by comparing the first halves and second halves of the responses using t-tests (via SPSS). If there are non-significant differences, non-response bias is unlikely to be at play (Sraha et al., 2020). Plus, these introductory analyses cover processing the descriptive statistics of scales, like their means and standard deviations, as well as using histograms to monitor the distributions of these latent variables (Crick, 2018).
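
A minimal sketch of this comparison (assuming the data file is sorted by submission date and that the construct names are hypothetical) might look as follows:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # hypothetical; sorted by submission date

half = len(df) // 2
early, late = df.iloc[:half], df.iloc[half:]

# Compare early and late respondents on each construct's mean score
for construct in ["market_orientation", "business_performance"]:  # hypothetical scales
    t, p = stats.ttest_ind(early[construct], late[construct], nan_policy="omit")
    print(f"{construct}: t = {t:.2f}, p = {p:.3f}")
# Non-significant p-values suggest non-response bias is unlikely to be at play
```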

Sample size checks (step 4)

Following an earlier point, researchers are likely to struggle to obtain survey responses due to the difficulties associated with persuading participants to engage with a given study (Dillman et al., 2009). Yet, certain authors have published work with relatively small sample sizes in respected outlets. As an example, Crick and Crick (2021b) examined export-focused coopetition strategies in the New Zealand wine sector. They expressed some reservations that their final sample of 101 observations was small – even though it accounted for a 13.91% response rate. Consequently, they ran several checks to confirm that there were no problems in this capacity – one of these being an a-priori test using the G*Power software (Faul et al., 2009). After setting certain specifications (like the error probability), they found that they only needed 88 responses to test their hypotheses and controls. This meant that they had obtained an acceptable sample size. The G*Power software can also be used to conduct post-hoc tests. Specifically, once the elements of a conceptual framework have been evaluated, key information can be inputted to calculate the effect size and whether an adequate sample size has been obtained to achieve a high level of statistical power (Faul et al., 2009). This reduces type II errors (false negative results) and other concerns about sample sizes.
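
For readers without access to G*Power, the logic of an a-priori test for a multiple regression F-test can be approximated with the noncentral F distribution. The sketch below is an illustrative reconstruction (the effect size, number of predictors, and power target are assumed inputs, and the answer may differ slightly from G*Power's routine):

```python
from scipy.stats import f, ncf

def required_n(f2=0.15, predictors=5, alpha=0.05, target_power=0.80):
    """Smallest N giving the target power for a multiple regression F-test
    (approximating G*Power's a-priori test for linear multiple regression)."""
    n = predictors + 2
    while True:
        df1, df2 = predictors, n - predictors - 1
        f_crit = f.ppf(1 - alpha, df1, df2)
        power = 1 - ncf.cdf(f_crit, df1, df2, f2 * n)  # noncentrality = f^2 * N
        if power >= target_power:
            return n, power
        n += 1

print(required_n())  # medium effect size (f^2 = 0.15) with five predictors
```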

Exploratory factor analysis stage (step 5)

Exploratory factor analyses assess the underlying factor structure of a set of multi-item scales to ensure that they correspond to the latent variables that they are designed to measure (Sharma, 1996). There are various ways to conduct exploratory factor analysis models (e.g. through SPSS). This article does not explicitly focus on these approaches, but emphasizes that rotation (e.g. varimax), extraction (like principal axis factoring), and other specifications, such as the consideration of eigenvalues, should be underpinned by an appropriate logic(s) (Fabrigar et al., 1999). Another issue is to decide what factor loadings are large enough to be retained. Some scholars have advised that factor loadings that are above 0.20 are satisfactory (Eggers et al., 2020), others have used 0.40 as a minimum threshold (Sharma, 1996), and 0.60 has even been recommended (Crick, 2020). At any rate, larger factor loadings usually denote the strongest measures.

It is helpful to run the Kaiser-Meyer-Olkin test of sampling adequacy to monitor whether enough data have been collected (signified by scores that are greater than 0.60, but preferably, in excess of 0.80) (Crick & Crick, 2019). This supplements a-priori and post-hoc tests using the G*Power software (Faul et al., 2009). Also, Bartlett’s test of sphericity must be significant (chi-square test statistics with p-values below 0.05). Plus, it is a good idea to record the total amount of variance explained to monitor the underlying factor structure – with higher amounts being ideal. This shows that the measures capture the nomological properties of the constructs that are being studied. Furthermore, exploratory factor analysis models can outline problems. By way of example, two (or more) latent variables might load onto the same factor (cross-factor loadings), for which different solutions exist to overcome such concerns. On the one hand, a set of items may need to be eliminated (Crick, 2018). On the other hand, the shared variance can be extracted, so that the purified indicators load onto distinct factors (Cadogan et al., 2001). Hence, exploratory factor analysis models serve various purposes to purify multi-item scales.
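
By way of illustration, the Kaiser-Meyer-Olkin test, Bartlett’s test of sphericity, and a rotated factor solution can be reproduced with the third-party factor_analyzer package in Python (the data file, item names, and two-factor specification below are hypothetical):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("survey_responses.csv")[["mo1", "mo2", "mo3", "bp1", "bp2", "bp3"]]

chi_square, p_value = calculate_bartlett_sphericity(items)  # should be p < 0.05
kmo_per_item, kmo_total = calculate_kmo(items)              # > 0.60, preferably > 0.80
print(f"Bartlett: chi2 = {chi_square:.2f}, p = {p_value:.3f}; KMO = {kmo_total:.2f}")

# Varimax-rotated solution with a principal factor extraction
efa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
efa.fit(items)
print(pd.DataFrame(efa.loadings_, index=items.columns))  # retain items with large loadings
print(efa.get_factor_variance()[2])                      # cumulative variance explained
```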

Confirmatory factor analysis stage (step 6)

Confirmatory factor analyses are used to verify the underlying factor structure of a set of single-indicators and multi-item measures (Souchon et al., 2016). There are different ways to undertake confirmatory factor analysis models, which largely vary depending on whether scholars use covariance-based or partial least squares structural equation modelling approaches (e.g. LISREL versus Smart PLS) (Cadogan et al., 2001; Sraha et al., 2020). This paper does not delve deeply into these two approaches, but recognizes that concerns have been raised about the accuracy of partial least squares structural equation modelling (Cadogan & Lee, 2023; Guide & Ketokivi, 2015). Regardless, this can be an iterative process, so it is advantageous to eliminate one item at a time (if required) to ensure that items are correctly removed – albeit several indicators should be retained for multi-item measures. Here, the factor loadings, error variances, and t-values should be displayed, coupled with the model fit indices (Cadogan et al., 2009; Crick, 2022). This allows authors to demonstrate the robustness of their measures.
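
As an illustrative sketch (not the approach used in the cited studies, which rely on LISREL), a covariance-based confirmatory factor analysis can be specified with the third-party semopy package; the indicator names below are hypothetical:

```python
import pandas as pd
import semopy

data = pd.read_csv("survey_responses.csv")  # hypothetical purified items

# Each latent variable is measured by its own set of indicators
model_desc = """
market_orientation =~ mo1 + mo2 + mo3
business_performance =~ bp1 + bp2 + bp3
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())           # factor loadings, error variances, and test statistics
print(semopy.calc_stats(model))  # model fit indices (chi-square, CFI, RMSEA, etc.)
```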

Reliability considerations (step 7)

Reliability surrounds whether similar (or identical) results will be obtained if an investigation is repeated in comparable settings (Churchill, 1979). This can be checked by using the test/re-test method, which involves replicating quantitative studies to ensure that the findings do not significantly vary across different groups (Hinkin, 1998). While this is an effective tool, it is easier said than done because it is likely to be challenging for most researchers to collect one round of robust survey responses, let alone obtain multiple high-quality samples (Dillman et al., 2009). Alternatively, reliability can be monitored through internal consistency, which encapsulates using SPSS to calculate the Cronbach’s alpha coefficients (α) of multi-item scales (Crick, 2021). Scores that are above 0.70 are deemed to be reliable, but results that are in excess of 0.60 can be reported. However, Cronbach’s alpha coefficient (α) is a somewhat simplistic tool, as it is susceptible to the number of items inflating the values – and it only applies to operationalizations with multiple indicators (Hinkin, 1998). Nevertheless, it is a good start to assess the reliability of multi-item scales, but should be reinforced with more stringent metrics (as covered later).
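
Cronbach’s alpha coefficient (α) follows directly from the item variances and the variance of the summed scale; a minimal sketch (with hypothetical item names) is shown below:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one multi-item scale:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("survey_responses.csv")  # hypothetical
print(cronbach_alpha(df[["mo1", "mo2", "mo3"]]))  # above 0.70 is deemed reliable
```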

Validity assessments (step 8)

Validity (a multi-faceted issue) is summarized as whether scholars have measured what they sought to capture (Churchill, 1979). Face validity can be checked via pre-testing surveys with key experts and/or using attention questions to ensure that the scales are understood and aligned with the chosen context (Bolton, 1993; Gummer et al., 2021). Also, face validity can be monitored through using an informant quality tool. This is an interval-based scale that can be included at the end of surveys for participants to rate the extent to which they were able to answer the questions presented to them (Crick et al., 2021). Using SPSS, this helps researchers to outline that they have sampled suitable key informants (or re-sample if the respondents are unsuitable). That said, a problem with this option is that some key informants might rate themselves as being qualified – even if they struggled to answer the survey questions.

To address content validity, established survey-based measures should be used. Yet, when earlier work is weak (or unavailable), new measures may need to be developed and validated (as per Eggers et al., 2020). To ensure that convergent validity exists, researchers can examine (using LISREL) the composite reliabilities (which should be above 0.60) and the average variance extracted values (which must exceed 0.50) for all multi-item scales (Hamzah et al., 2023). These stringent tools supplement Cronbach’s alpha coefficient (α). Moving forward, a measure has nomological validity if it ‘behaves as expected, with respect to some other construct to which it is theoretically-related’ (Churchill, 1995, p. 538). This is most appropriate when establishing new scales, in which conceptually-relevant connections can be tested (via the likes of LISREL and SPSS) between a new operationalization and a key driver and/or outcome (Crick & Crick, 2019).
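
The composite reliability and average variance extracted values can also be computed directly from the standardized factor loadings; the sketch below uses made-up loadings for demonstration purposes:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances);
    should exceed 0.60."""
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return squared_sum / (squared_sum + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings; must exceed 0.50."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings from a confirmatory factor analysis
mo_loadings = [0.78, 0.82, 0.71]
print(composite_reliability(mo_loadings), average_variance_extracted(mo_loadings))
```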

Regarding discriminant validity, one of the most popular checks in survey-based methodologies is for researchers to run their confirmatory factor analysis model (via LISREL) to purify their operationalizations (Cadogan et al., 2009; Souchon et al., 2016). Then, the phi matrix correlations (the associations between the latent variables) should be squared and compared against the average variance extracted values for the multi-item measures. Here, the largest squared phi matrix correlation should be below the smallest average variance extracted value (Fornell & Larcker, 1981). Additionally, if all multi-item operationalizations have reliability statistics that are in excess of the minimum benchmarks, the single-indicators are likely to be satisfactory (Crick, 2018). Yet, exceptions can be made if there are high degrees of shared variance for latent variables that are conceptually-related (see Crick et al., 2022).
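
A minimal sketch of the Fornell and Larcker (1981) comparison (with hypothetical average variance extracted values and phi matrix correlations) is as follows:

```python
# Hypothetical inputs: AVE per construct and the phi matrix correlations
ave = {"market_orientation": 0.58, "business_performance": 0.62}
phi = {("market_orientation", "business_performance"): 0.46}

largest_squared_phi = max(r ** 2 for r in phi.values())
smallest_ave = min(ave.values())

# Fornell-Larcker criterion: the largest squared correlation should sit
# below the smallest average variance extracted value
print("Discriminant validity supported:", largest_squared_phi < smallest_ave)
```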

Addressing common method variance (step 9)

Formally speaking, ‘common method variance refers to the shared variance among measured variables that arises when they are assessed using a common factor’ (Siemsen et al., 2010, p. 456). If left untreated, incorrect conclusions can be made based on findings that are distorted by these biases (Baumgartner & Weijters, 2021). As a procedural solution, surveys should be designed in a user-friendly manner, such as making the measures (and instructions) interactive, clear, and/or using attention questions, so that the respondents read the questions carefully and provide accurate answers (Gummer et al., 2021; Podsakoff et al., 2003). Another option is using a mixture of survey data and archival information. That is, when independent variables are measured using different sources of data from an outcome variable(s), the drawbacks of single-source results are overcome (Chang et al., 2010). This may not be possible (i.e. researchers might struggle to obtain multi-source data), meaning that it is critical to boost the quality of the single-source data being used.

Certain tests must be conducted to ensure that any findings are not influenced by common method variance (for which several evaluations exist). As an example, Harman’s single-factor test typically involves running all purified scales in an exploratory factor analysis model (through SPSS), with some set extraction and rotation specifications. If a single factor emerges, or one component accounts for at least 50.00% of the total variance, there are likely to be concerns (Hamzah et al., 2023). If these issues do not occur, common method variance is seemingly not at play. Unfortunately, Harman’s single-factor test can show that there are no problems when, in reality, common method variance could have influenced a set of findings (as noted by Hulland et al., 2018). Thus, this technique (alone) is not satisfactory, so if it is employed, it should be supplemented by more rigorous procedures.
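
A sketch of Harman’s single-factor test (using the third-party factor_analyzer package, an unrotated single-factor solution, and a hypothetical data file holding all purified items) might look as follows:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("survey_responses.csv")  # all purified scale items (hypothetical)

# Unrotated single-factor solution: how much variance does one factor absorb?
efa = FactorAnalyzer(n_factors=1, rotation=None)
efa.fit(items)
single_factor_variance = efa.get_factor_variance()[1][0]  # proportion of variance
print(f"{single_factor_variance:.2%}")  # concerns arise at 50.00% or more
```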

A more stringent test is the marker variable technique (Lindell & Whitney, 2001). This can be undertaken using SPSS as follows. First, the marker variable must be specified – a construct that is conceptually-unrelated to any other latent variable that is being tested in a given study (with a relatively large standard deviation). This could be the earlier-mentioned informant quality scale (as per Crick et al., 2021). Second, two correlation matrices must be created (reporting the Pearson’s correlation coefficients). One of these should state the bivariate correlations between all latent variables within a conceptual framework. The other should be a partial correlation matrix, with the same constructs, but controlling for the marker variable. Third, the differences between these two correlation matrices should be calculated and averaged. Although there is not a single agreed cut-off value, if the mean difference is below 0.10, the data are unlikely to be biased by a common method factor (Crick, 2018).
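
The three steps of the marker variable technique can also be reproduced outside SPSS; the sketch below assumes hypothetical construct scores and uses the standard partial correlation formula:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical construct scores
constructs = ["market_orientation", "coopetition", "business_performance"]
marker = "informant_quality"  # conceptually-unrelated marker variable

def partial_corr(x, y, z):
    """Pearson correlation between x and y, controlling for the marker z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Average the differences between the bivariate and partial correlations
differences = []
for i, a in enumerate(constructs):
    for b in constructs[i + 1:]:
        bivariate = np.corrcoef(df[a], df[b])[0, 1]
        partial = partial_corr(df[a], df[b], df[marker])
        differences.append(abs(bivariate - partial))

print(np.mean(differences))  # a mean difference below 0.10 suggests little bias
```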

Testing for endogeneity bias (step 10)

There are different descriptions, but ‘endogeneity concerns arise in situations where an explanatory or independent variable correlates with the error term (residual) of a specified model, rendering estimates inconsistent. This is because the coefficient estimate of the compromised explanatory variable contains the effect of unaccounted for variable(s) that also partially explain the dependent variable’ (Rutz & Watson, 2019, pp. 481–482). Endogeneity bias is especially problematic when researchers are attempting to claim causality. That is, they might conclude that a certain independent variable is a driver of an outcome variable. However, if there is a missing variable at play, this unaccounted construct could mean that their findings are spurious (Sande & Ghosh, 2018). Endogeneity bias can be treated in numerous ways, but in this investigation, the following survey-based option is advised. While this approach has not been utilized by all authors, it is an appropriate assessment for a variety of outlets – not least the Journal of Strategic Marketing (extending Crick et al., 2022; Souchon et al., 2016).

First, for demonstration purposes, higher-levels of a market orientation are proposed to drive business performance (linking with Cadogan et al., 2009; Karami et al., 2023). This association could be part of a larger conceptual framework that has been simplified to test for endogeneity bias – albeit if this decision is made, it should be justified to journal editors and reviewers. Second, an instrument must be selected. This is a construct that is theoretically correlated with an independent variable, but is not directly linked with the outcome variable (as advised by Antonakis et al., 2010). Here, this instrumental variable might be a market-oriented mind-set (organization-wide beliefs about the importance of creating value for customers). Consequently, researchers should include such instrumental variables within their surveys in anticipation of testing for endogeneity bias.

Third, via LISREL, two structural models should be run. One of these involves the path between a market orientation and business performance and the other contains the instrumental variable. A market orientation would need to be modelled as a driver of business performance – before a market-oriented mind-set is employed as an antecedent of a market orientation. This should be done by using the purified measures. Fourth, the chi-square test statistic should be recorded, coupled with the degrees of freedom, to calculate the change values. If there is a non-significant difference between the two structural models, endogeneity bias is unlikely to be a concern (following Crick & Crick, 2021a; Crick et al., 2021). Also, authors can observe the change in variance explained by noting the squared multiple correlation in the same capacity. Such a test would add value to a survey-based methodology, alongside reliability, validity, and common method variance considerations.
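
Once the two structural models have been run, the chi-square difference calculation itself is straightforward; the sketch below uses hypothetical chi-square statistics and degrees of freedom for demonstration purposes:

```python
from scipy.stats import chi2

# Hypothetical chi-square statistics and degrees of freedom from the two
# structural models run in LISREL (with and without the instrumental variable)
chi2_base, df_base = 245.31, 120
chi2_instrument, df_instrument = 249.08, 122

delta_chi2 = chi2_instrument - chi2_base
delta_df = df_instrument - df_base
p_value = chi2.sf(delta_chi2, delta_df)

# A non-significant difference suggests endogeneity bias is unlikely to be a concern
print(f"Delta chi2 = {delta_chi2:.2f}, delta df = {delta_df}, p = {p_value:.3f}")
```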

Model-testing stage (step 11)

It is common practice to utilize survey data to test a series of hypotheses and control paths, for which various approaches and considerations are available (Cadogan et al., 2001; Karami et al., 2023). Some model-testing tools are more relevant with larger sample sizes – like covariance-based structural equation modelling, which tends to be more effective with at least 200 observations (Hinkin, 1998). That said, covariance-based structural equation modelling packages (e.g. LISREL) can be used if scholars have relatively small sample sizes (Crick et al., 2021; Souchon et al., 2016). Another consideration is the number of parameters within the model. For instance, complex models (namely, those with a series of moderators and/or mediators) are best evaluated using partial least squares or covariance-based structural equation modelling to avoid running redundant tests (following Cadogan et al., 2009; Sraha et al., 2020). Alternatively, conceptual frameworks with multiple independent variables and one outcome variable are effectively tested using regression analyses (see Crick, 2022).

Researchers should display their path coefficients and significance-levels, as well as other appropriate details, such as the model fit indices (Cadogan et al., 2009). At any rate, the bivariate correlations (usually Pearson’s correlation coefficients) and descriptive statistics of the latent variables should be examined within the model-testing stage. It is good practice to account for certain control variables – as other factors that might (based on a given theory) explain the variance of an outcome variable(s). Procedural control paths must be considered when evaluating complex associations, like quadratic and interaction effects (Crick & Crick, 2021c; Hamzah et al., 2023). Further, floodlight and spotlight analyses can be employed to calculate the slope values for interaction effects (Crick, 2021; Spiller et al., 2013). This provides extra rigor for quantitative investigations to make better use of survey data. Collectively, scholars must justify their chosen model-testing tools to journal editors and reviewers to alleviate any confusion.
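
As an illustrative sketch of a regression-based moderation test with a spotlight analysis (the construct and control names are hypothetical, and mean-centring is one common convention, not a universal requirement):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical construct scores

# Mean-centre the focal variables before forming the interaction term
for var in ["market_orientation", "coopetition"]:
    df[var] = df[var] - df[var].mean()

model = smf.ols("performance ~ market_orientation * coopetition + firm_size",
                data=df).fit()  # firm_size included as a control path
print(model.summary())  # path coefficients and significance levels

# Spotlight analysis: simple slopes of market orientation at +/- 1 SD of the moderator
sd = df["coopetition"].std()
b_mo = model.params["market_orientation"]
b_int = model.params["market_orientation:coopetition"]
for level, m in [("-1 SD", -sd), ("+1 SD", sd)]:
    print(level, b_mo + b_int * m)
```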

Unpacking statistical results (step 12)

Post-hoc tests can be undertaken to unpack statistical results. For example, Crick et al. (2022) revisited the relationship between a market orientation and customer satisfaction performance under the moderating role of coopetition. They argued that collaborating with competitors was expected to propel the performance outcomes of market-oriented strategies because these networks could equip entrepreneurs with the necessary assets to create value for their end-users. Despite a market orientation having a positive and significant connection with customer satisfaction performance, this link was negatively and significantly moderated by coopetition. As a post-hoc test, Crick et al. (2022) investigated how industry experience might salvage the probable dark-sides of coopetition and its interaction with a market orientation. Here, a positive and significant three-way interaction effect existed between a market orientation, coopetition, and industry experience on customer satisfaction performance. This helped them to identify that decision-makers can use their sector-wide knowledge to overcome the problems related to market-oriented activities and coopetition to create value for their customers. Hence, post-hoc tests can offer counter-intuitive explanations that amplify contributions to theory and practice (following Cadogan et al., 2009).

Presenting robust survey data (step 13)

Academics and postgraduate students are advised that ‘when a manuscript is fun to read for readers, it will be fun for reviewers as well. This also means that figures and tables should be completely self-explanatory and easy to understand. Do not just dump all the details that are available, but present those results that are instrumental to getting the message across’ (Lindgreen & DiBenedetto, 2020, p. 5). Thus, researchers should not present the entirety of their outputs, and should minimize the use of graphs and equations. In doing so, stand-alone tables can be used to conserve space. For instance, tests for discriminant validity can be outlined in a single table, alongside the final multi-item scale reliabilities (as per Crick, 2021; Hamzah et al., 2023). By combining tables (and equivalent information), articles become much more interesting to read.

Conclusions and methodological recommendations

The objective of this commentary was to provide a contemporary guide (from an Associate Editor point-of-view) on how marketing academics and postgraduate students can effectively analyze survey data. To achieve this objective, a thirteen-stage guide was developed to describe some rigorous ways that survey data can be analyzed to meet the ever-growing expectations of journal editors and reviewers. This yielded the following conclusions and methodological recommendations. First, despite this paper’s focus on analytical processes, it is concluded that there are critical issues related to collecting comprehensive survey data. Therefore, it is advised that:

  • Collecting rigorous survey data can be undertaken via translation considerations, pre-testing, field interviews, and pilot studies (Bolton, 1993; Gummer et al., 2021).

  • The feasibility of collecting a mixture of survey and archival data should be determined. If it is possible to use multiple data sources, it would be of interest to many top-tier journals (Chang et al., 2010). Yet, single-source data can be of a publishable standard, especially if objective data cannot be obtained (e.g. when sampling smaller-sized organizations that do not publish their finances to the public).

  • Whatever is being studied, surveys must be launched in appropriate contexts – with suitable key informants (Cadogan et al., 2001; Crick, 2022). This extends to striving for findings that are generalizable to wider populations.

  • When collecting survey data, academics and postgraduate students should not be driven by convenience, and they must avoid using overly simple operationalizations that impede their ability to make novel theoretical and practical contributions (Eggers et al., 2020).

Second, another conclusion is that analysis is integral to survey-based methodologies. Accordingly, it is recommended that:

  • Survey data should be analyzed using a series of steps that involve simple (but important) tools (e.g. missing value analyses) before more complex techniques (like confirmatory factor analysis models) are employed to purify scales (Cadogan et al., 2009; Souchon et al., 2016).

  • The expectations of top-tier outlets have increased, meaning that survey data should be assessed for more stringent issues (e.g. addressing common method variance and testing for endogeneity bias) that seminal papers in this domain did not cover (Baumgartner & Weijters, 2021; Rutz & Watson, 2019).

  • Seminal methods-based articles should be utilized to ground analytical decisions, but more recent studies must be referenced to become up-to-speed with modern-day practices (unlike Ageeva et al., 2019; Foroudi et al., 2018; Tourky et al., 2020).

  • Authors should analyze their survey data in a robust capacity, but follow the preferences of the outlet(s) that they are targeting. For instance, if the editors and reviewers of a given journal are opposed to partial least squares structural equation modelling (via Smart PLS), authors are wasting their time if they pursue such analytical tools (Cadogan & Lee, 2023; Guide & Ketokivi, 2015).

  • The most relevant statistical information should be outlined. Authors should create concise tables and graphs to show the components of their scale purification processes and model-testing techniques, but not highlight distracting details (Lindgreen & DiBenedetto, 2020).

Third, a final conclusion is that there are other factors (extending the earlier conclusions and methodological recommendations) pertaining to survey-based research. Consequently, it is suggested that:

  • The points raised within this commentary draw on insights from across the broader marketing discipline about effectively analyzing survey data (Hinkin, 1998; Hulland et al., 2018). Moving forward, readers who are interested in this paper should articulate their agreements and disagreements with these issues to revitalize survey-based research designs in the years to come.

  • The elements of this contemporary guide do not form an exhaustive list, for which more details can be found in the sourced material. That is, there are many more tests and robustness checks that can strengthen survey-based methodologies (Karami et al., 2023; Spiller et al., 2013). These can be discussed in future research.

  • If a study has a bullet-proof methodology (i.e. all modern-day robustness checks were evaluated – with rich data sources), but tells the wider marketing community something that is well-established, the lack of theoretical and practical contributions will prevent such a strong survey-based research design from ever being published in a high-quality journal. Therefore, survey data should be employed to advance knowledge that earlier work has overlooked (Crick et al., 2022; Crick, 2020).

In closing, this commentary has simplified various issues for scholars using survey-based methodologies. This should increase these individuals’ chances of surviving the tough modern-day review process of the Journal of Strategic Marketing (and beyond).

Acknowledgements

The author would like to thank the referees for their valued comments that were utilized to shape this article. Further, thanks are offered to Professor Carolyn A. Strong (as the Editor-in-Chief of the Journal of Strategic Marketing) for her guidance, alongside Professor John W. Cadogan and Dr Didier Soopramanien for their helpful suggestions pertaining to quantitative research methods in the broader marketing field (and the wider social sciences).

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1. Consistent with the themes of the Journal of Strategic Marketing, the focus of this paper surrounds utilizing survey data to evaluate different marketing strategies. However, these factors are transferrable to other areas of the wider marketing field (and the broader social sciences) to assist authors researching various topics using surveys. Also, while some scholars use qualitative methods to supplement surveys (e.g. interviews), this commentary concentrates on how survey data (alone) can be effectively analyzed. Thanks are offered to the referees for requesting clarity on these matters.

References

  • Ageeva, E., Melewar, T. C., Foroudi, P., & Dennis, C. (2019). Cues adopted by consumers in examining corporate website favorability: An empirical study of financial institutions in the UK and Russia. Journal of Business Research, 98(1), 15–32. https://doi.org/10.1016/j.jbusres.2018.12.079
  • Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. (2010). On making causal claims: A review and recommendations. The Leadership Quarterly, 21(6), 1086–1120. https://doi.org/10.1016/j.leaqua.2010.10.010
  • Baumgartner, H., & Weijters, B. (2021). Dealing with common method variance in international marketing research. Journal of International Marketing, 29(3), 7–22. https://doi.org/10.1177/1069031X21995871
  • Bolton, R. N. (1993). Pre-testing questionnaires: Content analyses of respondents’ concurrent verbal protocols. Marketing Science, 12(3), 280–303. https://doi.org/10.1287/mksc.12.3.280
  • Cadogan, J. W., Kuivalainen, O., & Sundqvist, S. (2009). Export market-oriented behavior and export performance: Quadratic and moderating effects under differing degrees of market dynamism and internationalization. Journal of International Marketing, 17(4), 71–89. https://doi.org/10.1509/jimk.17.4.71
  • Cadogan, J. W., & Lee, N. (2023). A miracle of measurement or accidental constructivism? How PLS subverts the realist search for truth. European Journal of Marketing, forthcoming.
  • Cadogan, J. W., Paul, N. J., Salminen, R. T., Puumalainen, K., & Sundqvist, S. (2001). Key antecedents to export market-oriented behaviors: A cross-national empirical examination. International Journal of Research in Marketing, 18(3), 261–282. https://doi.org/10.1016/S0167-8116(01)00038-6
  • Chang, S. J., Van Witteloostuijn, A., & Eden, L. (2010). Common method variance in international business research. Journal of International Business Studies, 41(2), 178–184. https://doi.org/10.1057/jibs.2009.88
  • Churchill, G. A., Jr. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16(1), 64–73. https://doi.org/10.1177/002224377901600110
  • Churchill, G. A., Jr. (1995). Marketing research: Methodological foundations (sixth ed.). Dryden Press.
  • Crick, J. M. (2018). The antecedents and consequences of a customer value-oriented dominant logic: A dynamic managerial capabilities perspective. Unpublished doctoral thesis in entrepreneurial marketing, Loughborough University.
  • Crick, J. M. (2020). The dark-side of coopetition: When collaborating with competitors is harmful for company performance. Journal of Business & Industrial Marketing, 35(2), 318–337. https://doi.org/10.1108/JBIM-01-2019-0057
  • Crick, J. M. (2021). Unpacking the relationship between a coopetition-oriented mind-set and coopetition-oriented behaviours. Journal of Business & Industrial Marketing, 36(3), 400–419. https://doi.org/10.1108/JBIM-03-2020-0165
  • Crick, J. M. (2022). Does competitive aggressiveness negatively moderate the relationship between coopetition and customer satisfaction performance? Journal of Strategic Marketing, 30(6), 562–587. https://doi.org/10.1080/0965254X.2020.1817970
  • Crick, J. M., & Crick, D. (2019). Developing and validating a multi-dimensional measure of coopetition. Journal of Business & Industrial Marketing, 34(4), 665–689. https://doi.org/10.1108/JBIM-07-2018-0217
  • Crick, J. M., & Crick, D. (2021a). Coopetition and sales performance: Evidence from non-mainstream sporting clubs. International Journal of Entrepreneurial Behavior & Research, 27(1), 123–147. https://doi.org/10.1108/IJEBR-05-2020-0273
  • Crick, J. M., & Crick, D. (2021b). Internationalizing the coopetition construct: Quadratic effects on financial performance under different degrees of export intensity and an export geographical scope. Journal of International Marketing, 29(2), 62–80. https://doi.org/10.1177/1069031X20988260
  • Crick, J. M., & Crick, D. (2021c). Rising up to the challenge of our rivals: Unpacking the drivers and outcomes of coopetition activities. Industrial Marketing Management, 96(1), 71–85. https://doi.org/10.1016/j.indmarman.2021.04.011
  • Crick, J. M., Karami, M., & Crick, D. (2021). The impact of the interaction between an entrepreneurial marketing orientation and coopetition on business performance. International Journal of Entrepreneurial Behavior & Research, 27(6), 1423–1447. https://doi.org/10.1108/IJEBR-12-2020-0871
  • Crick, J. M., Karami, M., & Crick, D. (2022). Is it enough to be market-oriented? How coopetition and industry experience affect the relationship between a market orientation and customer satisfaction performance. Industrial Marketing Management, 100(1), 62–75. https://doi.org/10.1016/j.indmarman.2021.11.002
  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method (third ed.). John Wiley & Sons Limited.
  • Donthu, N., Kumar, S., Paul, J., Pattnaik, D., & Strong, C. A. (2022). A retrospective of the journal of strategic marketing from 1993 to 2019 using bibliometric analysis. Journal of Strategic Marketing, 30(3), 239–259. https://doi.org/10.1080/0965254X.2020.1794937
  • Eggers, F., Niemand, T., Kraus, S., & Breier, M. (2020). Developing a scale for entrepreneurial marketing: Revealing its inner frame and prediction of performance. Journal of Business Research, 113(1), 72–82. https://doi.org/10.1016/j.jbusres.2018.11.051
  • Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. https://doi.org/10.1037/1082-989X.4.3.272
  • Faul, F., Erdfelder, E., Buchner, A., & Lang, A. -G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149
  • Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104
  • Foroudi, P., Akarsu, T. N., Ageeva, E., Foroudi, M. M., Dennis, C., & Melewar, T. C. (2018). Promising the dream: Changing destination image of London through the effect of website place. Journal of Business Research, 83(1), 97–110. https://doi.org/10.1016/j.jbusres.2017.10.003
  • Guide, V. D. R., Jr., & Ketokivi, M. (2015). Notes from the editors: Restructuring the Journal of Operations Management. Journal of Operations Management, 37(1), 5–8. https://doi.org/10.1016/S0272-6963(15)00056-X
  • Gummer, T., Roßmann, J., & Silber, H. (2021). Using instructed response items as attention checks in web surveys: Properties and implementation. Sociological Methods & Research, 50(1), 238–264. https://doi.org/10.1177/0049124118769083
  • Hamzah, M. I., Crick, J. M., Crick, D., Ali, S. A. M., & Yunus, N. M. (2023). The nature of the relationship between an entrepreneurial marketing orientation and small business growth: Evidence from Malaysia. International Journal of Entrepreneurship and Small Business, forthcoming.
  • Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1(1), 104–121. https://doi.org/10.1177/109442819800100106
  • Hulland, J., Baumgartner, H., & Smith, K. M. (2018). Marketing survey research best practices: Evidence and recommendations from a review of JAMS articles. Journal of the Academy of Marketing Science, 46(1), 92–108. https://doi.org/10.1007/s11747-017-0532-y
  • Karami, M., Crick, D., & Crick, J. M. (2023). Non-predictive decision-making, market-oriented behaviours, and smaller-sized firms’ performance. Journal of Strategic Marketing, forthcoming, 1–25. https://doi.org/10.1080/0965254X.2022.2052938
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. The Journal of Applied Psychology, 86(1), 114–121. https://doi.org/10.1037/0021-9010.86.1.114
  • Lindgreen, A., & DiBenedetto, C. A. (2020). How reviewers really judge manuscripts. Industrial Marketing Management, 91(1), 1–10. https://doi.org/10.1016/j.indmarman.2020.04.002
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J. -Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. The Journal of Applied Psychology, 88(5), 879–903. https://doi.org/10.1037/0021-9010.88.5.879
  • Rutz, O. J., & Watson, G. F. (2019). Endogeneity and marketing strategy research: An overview. Journal of the Academy of Marketing Science, 47(1), 479–498. https://doi.org/10.1007/s11747-019-00630-4
  • Sande, J. B., & Ghosh, M. (2018). Endogeneity in survey research. International Journal of Research in Marketing, 35(2), 185–204. https://doi.org/10.1016/j.ijresmar.2018.01.005
  • Sharma, S. (1996). Applied multivariate techniques. John Wiley & Sons Limited.
  • Siemsen, E., Roth, A., & Oliveira, P. (2010). Common method bias in regression models with linear, quadratic, and interaction effects. Organizational Research Methods, 13(3), 456–476. https://doi.org/10.1177/1094428109351241
  • Souchon, A. L., Hughes, P., Farrell, A. M., Nemkova, E., & Oliveira, J. S. (2016). Spontaneity and international marketing performance. International Marketing Review, 33(5), 671–690. https://doi.org/10.1108/IMR-06-2014-0199
  • Spiller, S. A., Fitzsimons, G. J., Lynch, J. G., & McClelland, G. H. (2013). Spotlights, floodlights, and the magic number zero: Simple effects tests in moderated regression. Journal of Marketing Research, 50(2), 277–288. https://doi.org/10.1509/jmr.12.0420
  • Sraha, G., Sharma, R. R., Crick, D., & Crick, J. M. (2020). International experience, commitment, distribution adaptation, and performance: A study of Ghanaian firms in B2B export markets. Journal of Business & Industrial Marketing, 35(11), 1715–1738. https://doi.org/10.1108/JBIM-05-2019-0197
  • Tourky, M., Alwi, S. F. S., Kitchen, P. J., Melewar, T. C., & Shaalan, A. (2020). New conceptualization and measurement of corporate identity: Evidence from UK food and beverage industry. Journal of Business Research, 109(1), 595–606. https://doi.org/10.1016/j.jbusres.2019.03.056