PRACTICE, POLICY, & PERSPECTIVES

Enhancing the Reporting of Quantitative Research Methods in Australian Social Work

Pages 375-383 | Accepted 24 Mar 2015, Published online: 25 Jun 2015

Abstract

Australian Social Work (ASW) seeks to publish high-quality, original research and receives a steady stream of manuscripts reporting quantitative methods. However, there has been considerable variation in their standard and style. Consistent with international efforts to ensure quality and integrity in reporting quantitative research, this paper offers guidance and suggestions for the reporting of intervention research, observational studies, and the quantitative components of evaluation and mixed-method studies. These suggestions address the overall structure of articles and considerations for the Abstract, Introduction, Method, Results, and Discussion sections, including the presentation of statistical analyses and findings. The paper can assist authors and reviewers by clarifying expectations, and contributes to ongoing efforts to enhance the quality of research reporting.


Australian Social Work (ASW) receives a steady stream of submitted manuscripts reporting quantitative research methods and numerical data. Reviewing a decade (1998–2007) of research articles published in ASW, Ryan and Sheehan (Citation2009) found that 18% (55/313) employed quantitative or mixed methods approaches. Another review found that 29% (40/136) of reports on health research, authored by Australian social workers (1990–2009), in social work and related journals employed quantitative or mixed methods (Brough, Wagner, & Farrell, Citation2013).

However, the authors' experience of reviewing for ASW over the past decade has revealed considerable variation in the style and quality of manuscripts. Ensuring quality and integrity in reporting quantitative research has been a challenge internationally across all disciplines, and guidelines have proliferated in efforts to improve standards. Prominent examples include guidelines for reporting randomised controlled trials (see CONSORT, Moher et al., Citation2010); nonrandomised interventions in public health (see TREND, Des Jarlais, Lyles, Crepaz, & the TREND Group, Citation2004); and observational studies (see STROBE, Vandenbroucke et al., Citation2007). In a similar vein, leading social work journals have published guidelines focused on the reporting of statistics (e.g., Health and Social Care in the Community, Campbell, Citation2010; British Journal of Social Work, Smeeton & Goda, Citation2003) and checklists for manuscript submission (Thyer, Citation2012, Chapter 5).

This article offers guidance and suggestions for reporting quantitative research methods and numerical data in manuscripts submitted to ASW, consistent with the international standards cited earlier. We address the particular configuration of issues found in submissions to ASW and take a broader focus than statistics per se. Moreover, we discuss common challenges that authors face in preparing manuscripts and indicate where more detailed guidance is available within the style guide employed by ASW (Publication Manual of the American Psychological Association, 6th ed., Citation2010; referred to hereafter as the APA Publication Manual). Links are also made to useful references for further reading on more complex issues.

Our suggestions cover reporting of intervention (experimental and quasi-experimental) and observational (descriptive, correlational, and psychometric) studies, and the quantitative components of evaluation and mixed methods studies. Issues in preparing all sections of a manuscript are addressed. However, this overview does not cover every contingency that authors might encounter.

Structure of Quantitative Research Articles

In accordance with international standards for health and social sciences (Campbell, Citation2010; ICMJE, Citation2008), quantitative research reports are usually structured around four sections: Introduction, Method, Results, and Discussion. A Conclusion may be added to further draw out implications and recommendations, although it is not essential.

Abstract

ASW does not require Abstracts to adhere to a structured format. Nevertheless, it is important that Abstracts succinctly convey the key elements of the article. A common problem arises when too much space is given to the Introduction, compressing the Method, Results, and Discussion. To balance the Abstract, it is recommended that the rationale and aims are succinctly stated in no more than three sentences, followed by key information on sampling, measures, and procedures. Include only core findings with supporting numerical data or statistical test results. End with a brief statement regarding the significant implications of the study (see Freiberg, Homel, & Branch, Citation2014; Grace & Gill, Citation2014 for examples of balanced Abstracts published in ASW).

Introduction

Commence the Introduction by succinctly describing the problem or issue that forms the focus of the paper. Explain its importance and outline any theoretical or conceptual frameworks underpinning the study. Next, review previous scholarship on the topic, highlighting how the present study addresses gaps in existing knowledge, introduces something novel, or challenges existing orthodoxy. The Introduction is also a place to recognise relevant methodological issues (e.g., the rationale for choice of competing measures or interventions). However, the details of measures and interventions are conventionally provided within the Method (see APA Publication Manual, chapter 2).

There is no consensus in international guidelines on whether the aims of the study (often expressed as a research question or hypothesis) should be placed at the end of the Introduction or within the Method. Compare, for example, Campbell (Citation2010) and ICMJE (Citation2008) with Moher et al. (Citation2010) and Des Jarlais et al. (Citation2004). We would encourage authors to choose the former, as it melds the elements of the Introduction together, confirming the purpose of the study and laying the platform for the Method.

In descriptive studies, it is usually sufficient to state a research aim or question, incorporating the study variables (e.g., the study aimed to identify the number of students experiencing financial hardship). Hypotheses are commonly employed for studies examining relationships between variables or for intervention studies. They are usually framed as directional statements that can be tested (e.g., people living below the poverty line will have poorer health than those in higher income groups). However, there are other ways to frame hypotheses and for more information, see Weinbach and Grinnell (Citation2014). If the study has multiple parts, or employs mixed methods, an overall aim can be followed by specific questions or hypotheses for each component.

Next follows a brief statement of research design outlining the approach taken to answer the study questions or hypotheses. This terminology is sometimes confused with research methods. Specific study designs (e.g., an observational, prospective, cross-sectional study) can potentially employ a range of different data collection approaches (e.g., mailed questionnaires, phone interviews), and the latter detail is conventionally described in the procedures section of the Method. For more information about research design terminology, see Alston and Bowles (Citation2012) or Neuman (Citation2014).

Method

The Method is divided into subsections, such as Sample, Measures, and Procedures, which may vary to meet the reporting needs of particular studies. The aim is to provide sufficient description to demonstrate the appropriateness of the methods, establish the reliability and validity of the results, and enable the study to be replicated by other researchers (APA Publication Manual, see chapter 2). To be consistent with universal reporting requirements (Campbell, Citation2010; ICMJE, Citation2008), the Method contains only information known to the authors at the outset of the study. Therefore, information gathered about participants (e.g., demographic characteristics and response rates) is best reported at the beginning of the Results.

Sample

This subsection contains information about ethics and consent; the setting or settings within which the research was conducted; the sampling frame; the sampling method; when the data were collected; inclusion and exclusion criteria; and any sample size calculations. Journals, including ASW, now expect studies with human participants to have been approved by an ethics committee (ICMJE, Citation2008). For studies involving direct contact with participants, the process of obtaining informed consent (or proxy consent) also needs outlining, either here or in the Procedures subsection. Ethical issues, other than those associated with informed consent, are usually addressed as part of the Procedures.

Delineating the study setting is important, and provides the opportunity to also detail the sampling frame from which participants were recruited (e.g., clients attending an agency; residents within a geographic region). Sometimes, all members of a sampling frame receive an invitation to participate (e.g., an e-mail survey sent to all students enrolled in a faculty). However, most commonly, a subset of participants is recruited. In these cases, it becomes important to outline the sampling strategy so that readers can judge any risk of bias relating to the degree to which the sample is representative of the overall population.

The sampling method needs to be specified. Probability sampling refers to methods that ensure every person within a defined population has an equal or known chance of being selected (Alston & Bowles, Citation2012). Random sampling is the most frequently used form of probability sampling. If randomisation is employed to select participants or allocate participants to groups, the method of randomisation needs to be described (e.g., flipping a coin; computer random number generator).
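To illustrate the computer-based approach (this sketch is ours, not the article's), random allocation to groups can be implemented with a pseudo-random number generator; recording the seed makes the allocation reproducible:

```python
import random

def allocate(participant_ids, seed=42):
    """Randomly allocate participants to two groups (illustrative sketch).

    The seed is recorded so the allocation can be reproduced and audited.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)           # random order, determined by the seed
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

# Twenty hypothetical participant IDs, split 10/10
groups = allocate(range(1, 21))
```

In a manuscript, it would then suffice to state that allocation used a seeded computer random number generator.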

Most studies published in ASW employ nonprobability sampling methods (for information on descriptors for different nonprobability sampling methods, see Alston & Bowles, Citation2012; Neuman, Citation2014). Sampling approaches, such as choosing every 10th person on a list, which do not generate a random sample, need to be described precisely. One commonly used nonprobability sampling method is a consecutive series, which involves inviting every eligible person during the period of recruitment (e.g., every new referral to a social work service). In reporting the sampling strategy, specify any inclusion or exclusion criteria employed in selecting participants and the dates for the period of data collection (Campbell, Citation2010).

If a statistical power analysis was conducted to generate an estimate of the required sample number for a specified effect size, this is best outlined in the Sample section. Similarly, other sample size calculations such as case-to-variable ratios for multivariate analyses such as multiple regression or factor analysis are also best reported in this section. For more information on this topic, see Weinbach and Grinnell (Citation2014) or Tabachnick and Fidell (Citation2012).
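As a sketch of such a power calculation (our own illustration, using the normal approximation rather than the exact t-based method, which yields a slightly larger n), the per-group sample size for a two-group comparison can be estimated as:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-tailed, two-group comparison.

    Uses the normal approximation with Cohen's d as the effect size;
    dedicated software gives marginally larger, exact t-based values.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A medium effect (d = 0.5) at alpha = .05 and 80% power
n = n_per_group(0.5)
```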

Measures

Measures is the subsection of the Method in which the data collection tools are described. A brief rationale for the choice of measures can be provided either in the Introduction or at the beginning of this subsection. It should be evident how the measures relate to the study aims. If multiple tools are used, grouping them under subheadings related to their role (e.g., outcome, predictor and control variables) or by conceptual domain (e.g., affect-related measures, measures of family functioning) can be helpful.

There are two major types of tools employed in manuscripts submitted to ASW, namely purpose-designed questionnaires and validated scales. Purpose-designed questionnaires are used in survey research and usually contain a range of diverse items (e.g., demographic items, multiple choice, and open-ended questions). Social work or social survey texts (e.g., Alston & Bowles, Citation2012; De Vaus, Citation2013; Neuman, Citation2014) provide guidance in relation to terminology, which is helpful in accurately describing the development of, and elements contained within, the questionnaire. If both a questionnaire and validated scale have been used for data collection, it is better to report them separately as distinct tools, even if they have been administered together.

Validated scales capture data on a wide range of psycho-social domains (e.g., Parent Empowerment and Efficacy Measure, Freiberg et al., Citation2014). Scales can be self-rated (i.e., by the research participant), practitioner-rated, or proxy-rated (e.g., teacher and parent ratings of child behaviour). The term “validated” signifies that the psychometric properties (i.e., reliability and validity) of the scale have been tested and meet certain standards.

For validated scales, it is important to provide a description of the scale and its psychometric properties. The description ideally includes what the scale measures, the scale structure (“single scale” or “scale with x number of subscales”), and, if the scale is relatively unknown, an example of a scale item. Specify the number of items for each scale or subscale, how items are rated (e.g., from 1 to 5 on a 5-point Likert scale), and the direction of scores (e.g., higher scores indicate higher distress). Include the potential range in scores and if relevant to the study, information about norms or relevant cut-off points (e.g., scores reflecting symptoms in the “clinical” range). Also specify which variables from the scale will be used in the study (e.g., the total score; subscale scores).

The description of scales can then be followed by an outline of their known statistical properties (see Streiner & Norman, Citation2008; Weinbach & Grinnell, Citation2014 for assistance with terminology). Internal reliability is the most commonly reported scale metric (usually measured by Cronbach’s alpha). Report the reliability coefficient from the original article detailing the scale’s properties and ideally in addition, the coefficient for the current sample. Scale metrics may also be reported for other forms of reliability, including inter-rater or test-retest reliability. Also provide information and supporting references relating to any data on scale validity (e.g., forms of criterion validity, namely concurrent validity, predictive validity; forms of construct validity, namely convergent/divergent validity, discriminant validity). Further information about the correct terminology for different types of validity can be found in social science texts (e.g., Neuman, Citation2014; see also Freiberg et al., Citation2014 for a recent example in ASW).
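For readers wanting to see the mechanics behind the most commonly reported metric, Cronbach's alpha can be computed directly from item-level data. The following sketch and its data are purely illustrative (not drawn from the article):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for internal reliability (illustrative sketch).

    `items` holds one list of scores per scale item, all rated by the
    same respondents; population variances are used throughout.
    """
    k = len(items)
    item_variance_sum = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_variance_sum / statistics.pvariance(totals))

# Three 5-point items rated by five respondents (made-up data)
alpha = cronbach_alpha([
    [3, 4, 5, 2, 4],
    [2, 4, 5, 3, 4],
    [3, 5, 4, 2, 5],
])
```

An alpha of .70 or above is conventionally treated as acceptable internal reliability.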

Procedures and Data Analysis

Procedures provides a stepwise factual account of how the research was conducted. If not reported in the Sample subsection, the Procedures usually commences by confirming the study received institutional ethics approval and indicating how participant informed consent was obtained. In outlining the recruitment process, disclose any incentives or support offered to encourage or enable participation (e.g., payment, transport, child care). Any commentary on the strengths and limitations of the procedures is generally reserved for the Discussion. One exception to this is if the procedures were modified during the study, due to unanticipated developments.

A full account of how data were collected then follows. This typically includes information on the training or qualifications of those involved in data collection; how confidentiality and privacy were maintained; instructions provided to participants; how the measures were administered (e.g., by phone, by e-mail); steps taken to verify or confirm data; and any follow-up contact with participants.

For intervention studies, include an additional subsection describing the type of intervention or program, which is either referenced to a previously documented account or presented in sufficient detail to enable replication (Moher et al., Citation2010). Adaptations or changes to documented interventions need to be briefly explained. The qualifications and training provided to those conducting the intervention should also be included, along with any procedures to ensure fidelity of the intervention or to assess variation in implementation (Moher et al., Citation2010).

A data analysis subsection, outlining the approach to the planned analysis of the data, may be included towards the end of the Method (Campbell, Citation2010). However, if only descriptive data are being generated (e.g., means and standard deviations, frequencies and percentages), a dedicated data analysis section is usually unnecessary, and a statement in the Procedures indicating that descriptive statistics were calculated for the study variables will suffice.

A dedicated data analysis section is required if the study includes inferential statistics (see Weinbach & Grinnell, Citation2014 for more information). It is important to align the analyses with the study aims and to ensure that the reporting is done at a level that is accessible to a professional readership (i.e., mathematical proofs or formulae are not required), while still providing the essential information needed to enable the results to be evaluated. Common inferential statistical tests published in ASW have included t-tests (e.g., Al-Kandari, Citation2015), chi-square tests (e.g., Cleak & Smith, Citation2012), analyses of variance (ANOVA) (e.g., Al-Kandari, Citation2015), and regression analyses (e.g., Cortis & Meagher, Citation2012).
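As one concrete instance, a chi-square test on a 2×2 table yields exactly the three quantities a report needs: the test statistic, degrees of freedom, and p-value. The sketch below is ours (made-up counts; for df = 1 the tail probability has the closed form erfc(√(x/2))):

```python
from math import erfc, sqrt

def chi_square_2x2(table):
    """Pearson chi-square for a 2x2 contingency table (no continuity
    correction); returns (statistic, df, p). Illustrative sketch only."""
    (a, b), (c, d) = table
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(stat / 2))  # exact tail probability for df = 1
    return stat, 1, p

# e.g., satisfaction (yes/no) by service model (hypothetical counts)
stat, df, p = chi_square_2x2([[30, 20], [10, 40]])
```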

Generally, data analysis commences by referring to the statistics program that the data were entered into, including the version number but not the reference. It is then usual to generate descriptive statistics as the first step. Any procedures for managing outliers or missing data also need explaining (Campbell, Citation2010). The testing of assumptions underlying specific statistical procedures (e.g., normality, multicollinearity) needs to be mentioned, but it is unnecessary to provide detail (Campbell, Citation2010). Decisions about the use of parametric versus nonparametric statistical procedures need to be explained (see Weinbach & Grinnell, Citation2014).
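A minimal sketch of these first steps, generating descriptive statistics while making the handling of missing data explicit (here, simple available-case deletion; the names and data are ours, purely illustrative):

```python
import statistics

def describe(values):
    """Descriptive statistics after dropping missing entries (None).

    Reporting the missing count keeps the handling transparent, as the
    guidance above recommends (illustrative sketch).
    """
    clean = [v for v in values if v is not None]
    return {
        "n": len(clean),
        "missing": len(values) - len(clean),
        "mean": statistics.mean(clean),
        "sd": statistics.stdev(clean),  # sample standard deviation
    }

summary = describe([4, 5, None, 6, 5])
```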

Detailed guidelines for the reporting and displaying of statistical analyses including univariate and multivariate analyses (e.g., correlations, ANOVAs, regressions, survival analyses, factor analysis) can be found in the APA Publication Manual and Campbell (Citation2010). When p-values are used, the significance level below which the null hypothesis is rejected needs to be reported, even if it is standard alpha (usually p < .05) (Campbell, Citation2010), as well as the level of any Bonferroni corrections made to the value of alpha in the case of multiple tests. Confidence intervals are a useful alternative to p-values in indicating the significance of statistical tests (Smeeton & Goda, Citation2003; Weinbach & Grinnell, Citation2014), and either approach can be used.

Results

The Results provides a factual account of the study outcomes while reserving commentary on the results for the Discussion (Campbell, Citation2010). This section typically commences by describing the sample (and subgroups, where applicable), followed by the results for the key analyses. Findings are best presented in alignment with the study aims or hypotheses and follow the data analysis procedures described in the Method. For example, descriptive data for key outcome variables are usually reported first. These results then provide the starting point after which results from univariate or multivariate analyses can be outlined.

Careful attention needs to be paid to the reporting of statistically significant results among variables from correlational analyses in observational studies. As significant correlations do not imply causality (Weinbach & Grinnell, Citation2014), terms, such as “association”, more accurately characterise the relationship between variables (e.g., “Increasing levels of social support were associated with decreasing stress reactions among parents of children with severe disability.”).
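The strength of such an association is typically summarised by a correlation coefficient. A minimal Pearson r implementation with made-up data (the variable names and values are ours, for illustration only):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient (illustrative implementation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(ssx * ssy)

# Hypothetical social support vs. parental stress scores
support = [1, 2, 3, 4, 5]
stress = [9, 7, 8, 4, 3]
r = pearson_r(support, stress)  # negative: higher support, lower stress
```

A negative r here would be reported as an association, not as evidence that support causes lower stress.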

Large amounts of descriptive or statistical data are best reported in Tables (see below). However, small amounts of data can be reported in the text. In such cases, percentages need to be accompanied by the base numbers (e.g., “The study found that 40% (60/150) of respondents…”). If multiple percentages derived from the same base number are reported in the same paragraph, then outlining the base number at the beginning of the paragraph will usually suffice. Alternatively, if the percentages in a paragraph represent data with differing denominators, the base number for each percentage will need to be supplied. If reporting a mean, then the standard deviation also needs to be supplied (e.g., M = x.xx, SD = x.xx). If reporting inferential statistics, then the test statistic, degrees of freedom, and p-value need to be included (e.g., χ2(1, N = 100) = 4.15, p = .042; see Cleak & Smith, Citation2012; Al-Kandari, Citation2015 for examples). The p-value is best reported as the actual value instead of p < .05, unless it is very close to zero (e.g., p = .0002), in which case the p-value can be reported as p < .001 (Lang & Secic, Citation2006). Statistics (e.g., r, t, F, χ2) are commonly reported to two decimal places.
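These p-value conventions can be captured in a small formatting helper. The sketch below is our own (the function name and threshold are illustrative, following the APA-style practice of omitting the leading zero):

```python
def format_p(p):
    """Format a p-value for reporting (illustrative helper).

    Exact values are shown to three decimals without a leading zero;
    values very close to zero are reported as "p < .001".
    """
    if p < 0.001:
        return "p < .001"
    return "p = " + f"{p:.3f}".lstrip("0")

example = format_p(0.042)  # "p = .042"
```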

The selective use of Tables is also encouraged to efficiently display patterns of descriptive data and analyses of relationships among variables. Tables are usually preferred over Figures, because they generally convey quantitative data with greater accuracy (APA Publication Manual, chapter 5). A typical article reporting quantitative research may have up to six Tables or Figures. Avoid duplicating results in the text if they are displayed in a Table. If data are reported in Tables, use the text to provide an overview of the pattern of data, drawing the readers’ attention to particular results of significance. The most common Figures comprise graphs and charts. Graphs can be a useful way to display the relationship between continuous quantitative variables (y-axis) and groups of participants (x-axis) (APA Publication Manual, chapter 5; for an example in ASW, see Grace & Gill, Citation2014). Charts can be valuable for displaying conceptual relationships or theoretical models.

Discussion

The focus of the Discussion is on interpretation of the findings and drawing out their implications, rather than on summarising the findings already presented in the Results. The first paragraph is often a good place to showcase the two or three key take-home messages from the study. More detailed consideration of the significant findings can be provided in subsequent paragraphs. If appropriate, discuss the findings in the same order as the research questions and hypotheses.

A central task in the Discussion is to contextualise the findings in relation to previous scholarship and research, and to highlight the key contributions of the current study. The Discussion is also the place to address methodological issues that influenced the findings, such as similarities and differences between samples from different studies. Nonsignificant and unexpected findings also warrant recognition and interpretation.

Ensure that results from multivariate analyses are given greater weight than those from univariate analyses. The temptation to introduce new data or further analyses into the Discussion to support an argument is to be avoided. Moreover, all studies have limitations, no matter how well designed. The Discussion also needs to recognise the key limitations and provide a balanced commentary on their potential impact. It is equally important not to overemphasise the limitations, as this might lead readers to dismiss findings of potential value.

Where relevant, the Discussion needs to also address the implications for policy, practice, and theory. In this regard it is crucial that the significance placed on the findings is proportionate to the size of the study. For example, if a preliminary study has been conducted, it is more realistic to propose that it demonstrates the feasibility of the approach, or sets the groundwork for more substantial investigation, rather than providing sufficient basis for a change in policy or practice. Finally, it is also important to indicate directions for future research, highlighting new questions raised by the study and proposing further avenues to explore.

Conclusion

The guidance and suggestions offered respond to issues raised in quantitative research papers submitted to ASW. They reflect the growing engagement with research among social workers within Australia, and international efforts to enhance the reporting of research. We anticipate that the paper will provide a useful resource to authors and reviewers, assist in the development of editorial policy, and contribute to advancing the quality and consistency of the reporting of quantitative research submitted to ASW.

References

  • Al-Kandari, H. Y. (2015). High school students’ contact with and attitudes towards people with intellectual and developmental disabilities in Kuwait. Australian Social Work, 68(1), 65–83. doi:10.1080/0312407X.2014.946429
  • Alston, M., & Bowles, W. (2012). Research for social workers: An introduction to methods (3rd ed.). Sydney: Allen & Unwin.
  • American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.
  • Brough, M., Wagner, I., & Farrell, L. (2013). Review of Australian health-related social work research 1990–2009. Australian Social Work, 66, 528–539. doi:10.1080/0312407X.2012.738236
  • Campbell, M. (2010). Detailed guidelines for reporting quantitative research in Health & Social Care in the Community. doi:10.1111/j.1365-2524.2010.00962.x. Retrieved 12 November 2014, from http://www.blackwellpublishing.com/pdf/hsc_962_rev.pdf
  • Cleak, H., & Smith, D. (2012). Student satisfaction with models of field placement supervision. Australian Social Work, 65, 243–258. doi:10.1080/0312407X.2011.572981
  • Cortis, N., & Meagher, G. (2012). Social work education as preparation for practice: Evidence from a survey of the New South Wales community sector. Australian Social Work, 65, 295–310. doi:10.1080/0312407X.2012.707666
  • Des Jarlais, D. C., Lyles, C., Crepaz, N., & The TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366. doi:10.2105/AJPH.94.3.361
  • De Vaus, D. (2013). Surveys in social research (6th ed.). London: Routledge.
  • Freiberg, K., Homel, R., & Branch, S. (2014). The Parent Empowerment and Efficacy Measure (PEEM): A tool for strengthening the accountability and effectiveness of a family support service. Australian Social Work, 67, 405–418. doi:10.1080/0312407X.2014.902980
  • Grace, M., & Gill, P. R. (2014). Improving outcomes for unemployed and homeless young people: Findings of the YP4 clinical controlled trial of joined up case management. Australian Social Work, 67, 419–437. doi:10.1080/0312407X.2014.911926
  • Hodgkin, S. (2011). Participating in social, civic, and community life: Are we all equal? Australian Social Work, 64, 245–265. doi:10.1080/0312407X.2011.573798
  • International Committee of Medical Journal Editors. (2008). Uniform requirements for manuscripts submitted to biomedical journals: Writing and editing for biomedical publication. Retrieved 12 November 2014, from http://www.icmje.org/.
  • Lang, T. A., & Secic, M. (2006). How to report statistics in medicine (2nd ed.). USA: ACP Press.
  • Moher, D., Hopewell, S., Schulz, K. F., Montori, V., Gøtzsche, P. C., Devereaux, P. J. … Altman, D. G. (2010). CONSORT 2010 explanation and elaboration: Updated guidelines for reporting parallel group randomised trials. British Medical Journal, 340, c869. doi:10.1136/bmj.c869
  • Neuman, W. L. (2014). Social research methods: Qualitative and quantitative approaches. Essex: Pearson Education.
  • Ryan, M., & Sheehan, R. (2009). Research articles in Australian Social Work from 1998–2007: A content analysis. Australian Social Work, 62, 525–542. doi:10.1080/03124070902964616
  • Smeeton, N., & Goda, D. (2003). Conducting and presenting social work research: Some basic statistical considerations. British Journal of Social Work, 33, 567–573. doi:10.1093/bjsw/33.4.567
  • Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use (4th ed.). Oxford: Oxford University Press.
  • Tabachnick, B. G., & Fidell, L. S. (2012). Using multivariate statistics (6th ed.). New York, NY: Harper Collins.
  • Thyer, B. A. (2012). Quasi-experimental research designs. New York, NY: Oxford University Press. doi:10.1093/acprof:oso/9780195387384.001.0001
  • Vandenbroucke, J. P., von Elm, E., Altman, D. G., Gøtzsche, P. C., Mulrow, C. D., Pocock, S. J., … Egger, M. for the STROBE Initiative. (2007). Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and elaboration. PLoS Medicine, 4(10): e297. doi:10.1371/journal.pmed.0040297
  • Weinbach, R. W., & Grinnell, R. M. (2014). Statistics for social workers (9th ed.). Boston, MA: Pearson Education.
