
More than metrics: The role of socio-environmental factors in determining the success of athlete monitoring

Pages 323-332 | Received 08 Mar 2023, Accepted 04 Mar 2024, Published online: 17 Mar 2024

ABSTRACT

The perceived value of athlete monitoring systems (AMS) has recently been questioned. Poor perceptions of AMS are important because, where practitioners lack confidence in monitoring, their ability to influence programming and performance is likely diminished. To address this, researchers have primarily sought to improve factors related to monitoring metrics, e.g., validity, rather than socio-environmental factors, e.g., buy-in. Seventy-five practitioners working with Olympic and Paralympic athletes were invited to take part in a survey about their perceptions of AMS value (response rate: 40%, n = 30). Fifty-two per cent (n = 13) were confident in the sensitivity of their athlete self-report measures, but only 64% (n = 16) indicated their monitoring was underpinned by scientific evidence. A scientific base was associated with improved athlete feedback (rS (23) = 0.487, p = 0.014*) and feedback correlated with athlete monitoring adherence (rS (22) = 0.675, p < 0.001**). If athletes did not complete their monitoring, 52% (n = 13) of respondents felt performance might be compromised. However, most respondents (56%, n = 14) had worked with internationally successful athlete(s) who did not complete their monitoring. While AMS can be a useful tool to aid performance optimisation, its potential value is not always realised. Addressing socio-environmental factors alongside metric-related factors may improve AMS efficacy.

Introduction

Athlete monitoring systems (AMS) are tools used by coaches, multi-disciplinary teams and athletes to collect, analyse and provide information and feedback on the internal and external loads athletes are exposed to and their responses to them (Impellizzeri et al., Citation2019). Typically, practitioners (sport science and medicine personnel) use AMS with the aim of decreasing injury incidence and enhancing athletic performance, and use the data gathered to support coaches’ decision-making (Halson, Citation2014; Saw et al., Citation2015c). Recently, aspects of athlete monitoring, such as customised athlete self-report measures (ASRM) (Jeffries et al., Citation2020), its use as an injury predictor (West et al., Citation2021), and analysis methods such as the acute:chronic workload ratio (Impellizzeri et al., Citation2020), have brought AMS under scrutiny. These issues have led some researchers to report that the evidence supporting the efficacy of monitoring systems is not strong (Coyne et al., Citation2018; Heidari et al., Citation2018). Poor AMS efficacy matters, as it can result in a myriad of issues (Akenhead & Nassis, Citation2016; Neupert et al., Citation2019; Weston, Citation2018). In particular, practitioners may lack confidence in their AMS to deliver a primary objective of monitoring: detecting changes in athletic training status and subsequently using this information to improve athletic performance (Halson, Citation2014). Where practitioners perceive AMS efficacy to be poor, the likelihood that they can positively influence training programme decisions is diminished, and the subsequent role of the AMS within the sporting organisation risks becoming unclear (Coyne et al., Citation2018).

The key reported barriers to AMS efficacy can be broadly split into two categories (Saw et al., Citation2015b). The first category, metric-related factors, includes factors such as measure reliability, validity and scientific underpinning, data analysis and equipment choice. The second category, socio-environmental factors, comprises factors external to the metric, encapsulating both environmental and cultural aspects, e.g., stakeholder buy-in, culture and practitioner expertise (Saw et al., Citation2015b). To date, research in the area of athlete monitoring has primarily involved a more mechanistic investigation of the metric-related factors, i.e., the science supporting what to monitor, and how best to execute data collection to improve scientific rigour (Bailey, Citation2019). This focus is important, but arguably it has been driven in part by practitioners’ positivist research leanings (Vaughan et al., Citation2019) and has come at the expense of investigating socio-environmental factors. Consequently, the value of socio-environmental factors and of interventions to improve AMS efficacy, such as education sessions to upskill stakeholders (McGuigan et al., Citation2023), punitive consequences to improve athletes’ adherence (Saw et al., Citation2015b), or strategies to foster trust and improve buy-in to monitoring, has yet to be fully established, despite their face validity (McGuigan et al., Citation2023; Neupert et al., Citation2019). Socio-environmental factors tend to be less tangible and less easily measurable than metric-related factors, perhaps explaining why they have received less research attention.

Metric-related factors, including AMS design, content and time to complete the AMS, have previously been ranked by athletes as the top three barriers to their compliance and thus AMS efficacy (Saw et al., Citation2015a). In comparison, research examining practitioner perceptions of AMS efficacy has highlighted socio-environmental factors as the primary cause of poor AMS efficacy (Akenhead & Nassis, Citation2016; Neupert et al., Citation2019). For example, in professional soccer, the top two factors negatively impacting practitioner confidence in athlete monitoring were reported as limited human resources and poor coach buy-in (Akenhead & Nassis, Citation2016). Poor measure validity, a metric-related factor, was ranked third. These disparities in viewpoints likely reflect the different roles and responsibilities of these stakeholders, and the degree of influence they may have over the metric or socio-environmental factors. Few studies outside of professional sport (Akenhead & Nassis, Citation2016; Weston, Citation2018) have explored practitioner views of AMS efficacy or focussed on AMS socio-environmental factors. It is, however, increasingly apparent that the nature of the inter-personal relationships formed between practitioners and athletes forms a key part of how an athlete engages with monitoring (McCall et al., Citation2023).

Stakeholder buy-in, a socio-environmental factor, is vital for the success of athlete monitoring (Akenhead & Nassis, Citation2016). While the term “buy-in” is perhaps implicitly understood by sports scientists, it has been poorly defined in relation to monitoring. In organisational change, buy-in refers to a continuum of cognitive and behavioural activities related to an individual’s commitment to change (Mathews & Crocker, Citation2014). Buy-in to an AMS could therefore be described as an individual’s cognitive (attitude and beliefs) and behavioural (actions) commitment to the AMS. Examples could include perceptions of AMS value, athlete AMS adherence or reporting truthfulness and responsiveness of the coaches/practitioners to meaningful changes in training status. Buy-in can arguably therefore provide a general indication of the value stakeholders place in their AMS and will likely be influenced by both metric and socio-environmental factors.

Athlete buy-in to an AMS is central to its success, but attaining buy-in can be problematic (Neupert et al., Citation2019), and athlete adherence can vary widely (Barboza et al., Citation2017; Cunniffe et al., Citation2009). Possible reasons for poor athlete buy-in include engagement differences by sport, variations in organisational infrastructure, inadequate feedback and the dynamics of the coach/athlete/practitioner relationship (Barboza et al., Citation2017; Jowett & Cockerill, Citation2003; McCall et al., Citation2023; Saw et al., Citation2015a). Indeed, where an AMS has been executed poorly, the consequences can extend beyond problematic athlete buy-in, and potentially negatively impact athlete career progression and mental health (Manley & Williams, Citation2022). Accordingly, it is important that socio-environmental factors such as buy-in, and how to foster it, particularly in the context of supporting the athlete (McCall et al., Citation2023), are carefully considered when planning and implementing an AMS.

Coach buy-in to an AMS is also vital, as they are primarily responsible for modifying athlete training programmes, and AMS data can help inform this process (Halson, Citation2014). Coaches do, however, need to assimilate considerable amounts of information before making programmatic decisions, including their own expertise, insights, cognitive biases and understanding of the athlete’s training status and history (Collins et al., Citation2016). Athlete monitoring information therefore forms a part of a broader more complex picture which may influence coach buy-in to AMS. Previously, poor coach buy-in to sports science has been attributed to a failure to translate scientific findings into practice, and monitoring metrics usurping coaching craft (Buchheit, Citation2017). Given the negative attitudes of some athletes and coaches towards AMS, the reported problems with buy-in to athlete monitoring specifically are unsurprising (Akenhead & Nassis, Citation2016), and more consideration is required to understand why this might be the case.

There are growing concerns about the effectiveness of athlete monitoring, and evidence that this may subsequently pose problems establishing both the clarity of purpose and the utility of monitoring for athletes, practitioners and coaches (Coyne et al., Citation2018; Jeffries et al., Citation2020). Understanding the perceptions practitioners have of their AMS, and how these are influenced by socio-environmental factors in particular, is an important step towards improving athlete monitoring practices, given the focus on metric-related factors to date (Akenhead & Nassis, Citation2016). Thus, the aim of this study was to investigate elite sport practitioners’ perceptions of their AMS efficacy, with a particular focus on how socio-environmental factors, such as buy-in, may impact those perceptions.

Methods

Participants and methodology

Seventy-five elite sport practitioners who worked with tier 3–5 Olympic and Paralympic athletes (national level to world class) in the United Kingdom (McKay et al., Citation2022) were invited to participate in an online, password-protected survey in 2017/18 (Online Surveys, JISC, Bristol), adhering to web survey guidelines (Appendix 1) (Eysenbach, Citation2004). The survey took ~20 min, with questions primarily answered by checkboxes or Likert scale responses (Neupert et al., Citation2022). Reminders were sent out after two weeks, and the survey return rate was 40% (n = 30). Respondents were selected through stratified and convenience sampling, and access was gained through gatekeepers at the respective sporting organisations. All respondents received a full written explanation of the study and were given the opportunity to voluntarily agree to participate after viewing the study information. Ethics approval was granted by the Faculty of Business Law and Sport Ethics committee, University of Winchester (Reference: BLS/17/26).

Statistics

As the Likert data were ordinal and not normally distributed, Spearman’s correlation coefficient was used to test the strength of relations (SPSS, V26, Armonk, NY: IBM Corp). Significance was set at 0.05 and Bonferroni corrected to 0.017. For clarity, p values reported in bold are significant at the corrected alpha level; others are reported in non-bold as *p < .05 and **p < .001. The data were separated into two categories previously identified as important for AMS success (Saw et al., Citation2015b): first, the relation between the scientific underpinning of an AMS and practitioner confidence and actions related to their AMS; and second, factors influencing AMS engagement. Free text data were grouped into key themes and are represented by indicative quotes.
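To make the analysis approach concrete, the following is a minimal sketch (not the authors’ code) of a Spearman’s rank correlation on ordinal Likert responses with a Bonferroni-adjusted alpha as described above; the variable names, the number of planned comparisons and the example data are illustrative assumptions, not the study data.

    from scipy.stats import spearmanr

    # Hypothetical Likert responses (1-5) from the same respondents;
    # illustrative values only, not the survey data.
    scientific_underpinning = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 2, 3]
    feedback_sufficiency = [3, 5, 3, 4, 2, 4, 5, 2, 5, 4, 3, 3]

    ALPHA = 0.05
    N_COMPARISONS = 3                        # assumed number of planned correlations
    ALPHA_CORRECTED = ALPHA / N_COMPARISONS  # ~0.017, matching the corrected alpha above

    rho, p = spearmanr(scientific_underpinning, feedback_sufficiency)
    print(f"rS = {rho:.3f}, p = {p:.3f}, "
          f"significant at corrected alpha: {p < ALPHA_CORRECTED}")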

Results

Background information

Thirty sports science and medicine practitioners completed the survey, each representing a discrete discipline across 14 different Olympic and Paralympic sports, including Athletics, Para Athletics, Boxing, Canoeing (sprint and slalom), Para Canoeing, Cycling and Para Cycling, Gymnastics, Hockey, Judo, Rowing, Rugby 7’s, Sailing, Swimming, Taekwondo and Triathlon. Respondents had 8 ± 5 years (mean ± SD) of experience working in elite sport and collectively worked with 599 senior internationally competitive athletes. The majority (83%, n = 25) of respondents employed a customised monitoring system.

Practitioners’ perceptions of their athlete monitoring systems

Just over half (52%, n = 13) of respondents were quite or very confident in the sensitivity of their athlete self-report measures (ASRM) to detect meaningful change, with 36% (n = 9) neutral and 12% (n = 3) not confident. Respondents reported that scientific studies underpinned their ASRM in 64% (n = 16) of cases, with 24% (n = 6) disagreeing and 12% (n = 3) neutral. A trend was apparent whereby respondents who expressed confidence in their ASRM sensitivity also reported that scientific evidence supported their ASRM (rS (23) = 0.398, p = 0.049*). Reasons respondents gave for poor confidence in ASRM included untruthful athlete reporting practices and individual variability complicating the identification of meaningful change.

[My confidence in my ASRM] varies on an individual basis, it all depends on the athlete understanding the need for this system, and them being honest.

(P2)

Respondents suggested several ways to address their lack of ASRM confidence, including the production of best practice guidelines and improving the engagement and truthfulness of athlete ASRM reporting.

If the athletes were better engaged this would provide greater [practitioner] confidence in the accuracy of reports.

(P25)

Athletes were perceived to be truthful in their reporting practices by 56% (n = 14) of respondents, with the remainder neutral (36%, n = 9) or in disagreement (8%, n = 2). Some factors that were reported as influencing athlete reporting practices are outlined in Figure 1.

Figure 1. Practitioners’ perceptions of what factors primarily influenced the honesty of athlete reporting in an AMS.


Respondents were divided over whether action, e.g., training programme modification, was taken in response to the detection of meaningful changes within monitoring scores, with 44% (n = 11) in agreement, 20% (n = 5) neutral and 36% (n = 9) disagreeing. When action was taken, respondents were more likely to report a scientific underpinning to their measures (rS (23) = 0.490, p = 0.013*). Reasons given for not modifying training where meaningful change in ASRM data was detected included the change being intentional and expected, and poor coach buy-in to the monitoring process preventing action being taken.

The coach has the final say, if they feel they are still able to train then they will.

(P2)

Coaches don’t understand the [monitoring] questions … and don’t respond to [athlete] answers.

(P16)

The reasons behind meaningful change in athlete monitoring scores were key, with one practitioner stating:

I would never pull an athlete entirely on the basis of scores. Would need interrogation incl. athlete + practitioner conversation to understand big picture. Scores (−ve) may be intentional.

(P26)

Respondents rated the degree to which key stakeholders supported their AMS (Figure 2). The AMS providers and fellow practitioners were perceived as providing full support to ensure their AMS was successful in 87% (n = 20) and 96% (n = 22) of cases, respectively. In comparison, respondents felt fully supported by their management in 74% (n = 17) and by coaches in 43% (n = 10) of cases. Two respondents did not rate the support they received.

Figure 2. Respondents rated the degree to which they felt supported by different stakeholders to implement and ensure the ongoing success of their AMS.


In relation to expected adherence rates, 64% (n = 16) of respondents indicated that athletes always or very frequently completed their AMS data, with 32% (n = 8) reporting that their AMS was rarely or occasionally completed and 4% (n = 1) unsure. While the majority of respondents (56%, n = 14) felt that poor adherence could be tied to a specific timeframe, e.g., during competitions, there was no consensus on when this primarily occurred during the training calendar. Where practitioners reported that their metrics had a scientific underpinning, there was also a correlation with improved feedback to the athletes (rS (23) = 0.487, p = 0.014*), with the provision of sufficient feedback also associated with improved athlete adherence (rS (22) = 0.675, p < 0.001**). Additionally, reported athlete adherence was significantly correlated with perceptions that athletes had received enough AMS education (rS (22) = 0.547, p = 0.006*), but not coach AMS education (rS (22) = 0.278, p = 0.188). Nonetheless, over half of respondents reported that athletes (56%, n = 14) and coaches (60%, n = 15) had sufficient AMS education. When asked how to improve athlete adherence, respondents stated more coach-led feedback was required:

If the athlete reports anything it must be followed up, otherwise the trust in the process is gone.

(P19)

One respondent inferred increased athlete education was required:

Them [athletes] simply understanding the WHY (of monitoring).

(P2)

Respondents suggested methods to promote adherence, including imposing punitive consequences and rewards:

Write [adherence] into athlete agreement with consequences if not filled in (stick) and modified and individualised training based off it. (carrot)

(P8)

Nonetheless, one respondent whose sport did impose consequences for poor adherence commented:

Achieve buy-in instead of it being a programme requirement.

(P22)

Examples of punitive consequences reported included a reduction in one-to-one coaching sessions, removal from training, no training individualisation and funding withdrawal. Implementation of such consequences was reported by 24% (n = 6) of respondents. However, good adherence levels differed little between those with consequences in place (67%, n = 4) and those without (63%, n = 12).

Just over half of the respondents (52%, n = 13) felt that athletes’ performance might be compromised if they did not complete their athlete monitoring, with 48% (n = 12) disagreeing. Furthermore, 56% (n = 14) of respondents reported that they worked with internationally successful athletes (defined as those who had medalled at the Olympics, World Championships or World Cups) who did not complete the monitoring required by their sporting organisation. Overall, of all 30 respondents, 60% (n = 18) felt that an improvement in athlete monitoring in their sporting organisation was required, with 30% (n = 9) undecided and 10% (n = 3) disagreeing. When given the opportunity to indicate what improvements they might wish to see, the most popularly cited suggestions included improving the evidence base behind measures, improving data analysis and feedback to athletes, integrating data from other objective sources, and addressing technical issues.

A better understanding of how best to analyse the data and improved strategies to enhance adherence. Improved methods of feedback to coaches and athletes.

(P10)

Discussion

Practitioners had mixed perceptions of their AMS. Only 52% indicated that they had confidence in their ASRM sensitivity, with 64% stating their metrics had a scientific underpinning (rS (23) = 0.398, p = 0.049*). This is the first time that the reported trend of a lack of scientific evidence underpinning ASRM (Duignan et al., Citation2020; Jeffries et al., Citation2020) has been quantified. Only 44% of respondents agreed that meaningful changes in monitoring scores resulted in appropriate remedial action, and half of respondents reported that removal of their AMS would not compromise athlete performance. Overall, the potential of an AMS to positively influence performance appeared not to be fully realised.

Practitioner confidence in monitoring

While 64% of respondents felt that their ASRM had a scientific underpinning, nearly a quarter (24%) disagreed. Having an ASRM with a clear underpinning scientific rationale was associated with improved practitioner confidence in the sensitivity of their measures (rS (23) = 0.398, p = 0.049*), greater athlete feedback (rS (23) = 0.487, p = 0.014*), and improved responsiveness by key stakeholders, e.g., coaches, to changes in training status (rS (23) = 0.490, p = 0.013*). Researchers have indicated that a successful AMS should influence training programming and planning (Halson, Citation2014). The findings from this study demonstrate the importance of ASRM scientific rigour, i.e., metric-related issues, to improve practitioners’ ability to provide athlete feedback and influence training programming in “real-world” practice.

The confidence respondents reported in their ASRM was, however, divided, with only 52% confident in the sensitivity of their ASRM to discern meaningful change. Research has previously reported low practitioner confidence in monitoring systems due to the perception of athletes manipulating their self-report data and a reduced ability to monitor athletes during competition in elite football (Akenhead & Nassis, Citation2016; Saw et al., Citation2015b). Similar findings were reported in this study with difficulty identifying meaningful change within the data, and the perception of dishonest athlete reporting practices negatively impacting practitioner confidence in their metrics. Just over half (56%) of respondents from this survey felt athletes completed their monitoring honestly, with the remainder either unsure or reporting that athletes were untruthful. Conflictingly, elite athletes have indicated that they self-report mostly honestly (Neupert et al., Citation2019) but may “impression manage” (Manley & Williams, Citation2022).

The primary reasons given for untruthful reporting practices were poor athlete engagement or the potential of training programme consequences (Figure 1). The majority of previously reported reasons for athletes manipulating their ASRM responses have focussed on athlete-related issues, such as social desirability bias (Saw et al., Citation2015b), fear of inappropriate training programme modifications (Duignan et al., Citation2019), and avoidance of punitive consequences (Saw et al., Citation2015b). Putting the onus back on to practitioners to cultivate trusting relationships with athletes has, however, recently been suggested as a method to tackle dishonest reporting practices (McCall et al., Citation2023). To date, efforts to mitigate untruthful reporting practices have primarily advocated athlete education sessions, although these appear to have a variable impact (Duignan et al., Citation2019; Neupert et al., Citation2019). Implementing a social desirability response scale to adjust for bias (Tracey, Citation2016) may, in part, tackle concerns about data manipulation. Nevertheless, it simultaneously risks alienating athletes and propagating an ethos of hostile surveillance (Manley & Williams, Citation2022). Given that the manipulation of ASRM responses is reportedly less likely among senior rather than junior team members (Duignan et al., Citation2019), the apparent pervasiveness of poor athlete reporting practices requires further reflection.
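Purely as an illustrative sketch of how a social desirability (SD) response scale could be used to adjust self-report data, the following regresses a hypothetical SD score out of a single ASRM item and keeps the residual as an “SD-adjusted” value; the column names and data are assumptions for illustration, not a method used or reported in this study.

    import numpy as np
    import pandas as pd

    # Hypothetical data: one ASRM wellness item (1-5) and a social desirability
    # (SD) scale total for each athlete; illustrative values only.
    df = pd.DataFrame({
        "wellness": [4, 5, 3, 4, 2, 5, 4, 3],
        "sd_score": [10, 18, 9, 14, 7, 19, 15, 8],
    })

    # Simple linear adjustment: remove the variance in the wellness item that is
    # shared with the SD score, leaving an SD-adjusted residual.
    slope, intercept = np.polyfit(df["sd_score"], df["wellness"], deg=1)
    df["wellness_adjusted"] = df["wellness"] - (slope * df["sd_score"] + intercept)

    print(df)

Whether such an adjustment would be appropriate in practice depends on validating the SD scale in the monitoring context and on the surveillance concerns noted above.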

Overall, practitioner confidence in their ASRM can be adversely impacted by a range of socio-environmental and metric-related phenomena (Jeffries et al., Citation2020; Saw et al., Citation2017). A fundamental shift in athlete monitoring culture is, however, required to positively influence both the socio-environmental and practitioner confidence issues highlighted by this research. Putting athletes at the centre of monitoring and reframing it as a core principle of athlete healthcare with a focus on creating a psychologically safe environment should be explored in future research.

Engagement of end-users with monitoring

Enhancing performance has been described as a primary aim of AMS (Saw et al., Citation2018). Nevertheless, only half of the respondents in this survey indicated that performance would be compromised if no AMS was in place in their sport. When combined with 58% of respondents reporting that they worked with internationally successful athletes who did not complete their AMS, this inevitably leads to questions about key stakeholder engagement with and the utility of AMS.

Figure 2 outlines the degree of support respondents felt they received for their AMS. Fellow practitioners were the most likely to give full support for the AMS (in 96% of cases). Management fully supported 74% of respondents, but 52% of respondents indicated that they did not have full support for their AMS from their coach. This is higher than the 37% of elite football practitioners reporting coach buy-in as a substantial barrier to the efficacy of their athlete monitoring (Akenhead & Nassis, Citation2016).

Research to date has mainly attributed poor coach engagement with athlete monitoring to failures to provide clear practical messaging, inaccessible scientific language, and internal politics caused by a perception of AMS usurping coaching craft in driving targets, funding, and performance assessment (Buchheit, Citation2017; Weston, Citation2018). Thus, despite coaches and practitioners having similar beliefs regarding the purpose and utility of athlete monitoring, these views do not necessarily translate into similar perceived benefits of, nor positive engagement with athlete monitoring (Weston, Citation2018). Strategies for achieving coach buy-in should be incorporated into AMS implementation guidelines (Saw et al., Citation2017) to avoid monitoring failing to meet expectations or causing conflict (Akenhead & Nassis, Citation2016; Starling & Lambert, Citation2018).

In this survey, the majority of respondents used a custom AMS. The brevity and sports specificity of custom AMS are often cited as promoters of athlete adherence (Saw et al., Citation2015b). However, this study and others (Barboza et al., Citation2017; Saw et al., Citation2015b) have shown that athlete adherence to monitoring is still problematic. While the figure of 67% of respondents reporting good adherence from this study is broadly similar to the rates of 56% and 79% reported elsewhere (Barboza et al., Citation2017; Cunniffe et al., Citation2009), it remains unclear whether custom AMS positively influences athlete adherence.

Perceptions of athlete adherence were, however, significantly related to whether the respondents reported that athletes received sufficient feedback (rS (22) = 0.675, p < 0.001**). This is an important and novel finding, as researchers have previously only proposed a potential relation between feedback and adherence (Barboza et al., Citation2017). Improving feedback processes may therefore provide a mechanism for practitioners to positively influence athlete adherence, as effective feedback loops can enhance decision-making (Barboza et al., Citation2017). Just under a quarter (24%) of respondents indicated that their sporting organisation imposed consequences for poor athlete adherence to AMS, typically in the form of training privilege removal. As highlighted elsewhere, these practices can be negatively viewed by athletes, and often have deleterious effects on AMS engagement (Saw et al., Citation2015b). Conflictingly, while some respondents from this study with no imposed consequences sought to have them implemented, others with consequences in place wanted them removed. These contradictory views should lead practitioners to reflect on the efficacy of coercion as a behaviour change strategy to promote AMS adherence, to avoid a “grass is greener” view. Overall, punitive consequences should be exercised with caution, and with a shared philosophy and consent from the athletes involved.

Based on the results from this study, the potential value of AMS is not always realised, as only half of respondents indicated athlete performance would be compromised if their AMS did not exist, and 58% of respondents reported that they had worked with world-class athletes who did not complete their monitoring. For AMS to provide value, sporting organisations should therefore consider how to influence the socio-environmental factors that may impact their AMS as well as metric-related factors. This is particularly important given that the typically positivistic research philosophies of practitioners risk biasing their focus towards metric-related factors (Vaughan et al., Citation2019) and away from socio-environmental factors. Employing AMS as a method to reduce the uncertainty associated with performance enhancement and illness/injury prevention, rather than as a panacea for injury prevention and performance optimisation, may help practitioners to situate it as one tool within a more multi-faceted coach decision-making framework.

Limitations of this study include non-response bias and the transferability of findings to other sporting contexts. Similarities with other elite sports settings, particularly within amateur sport, can be cautiously presumed given the variety of respondents and sports involved in this survey. Practitioners are encouraged to reflect on the applicability of these findings to their own settings (Smith, Citation2018). The familywise error rate was controlled through the use of Bonferroni corrections. The data were separated into factors relating to athlete AMS adherence and those related to the scientific underpinning of the metric. This partitioning aimed to reduce issues related to multiple statistical comparisons whilst balancing the increased risk of Type 2 errors associated with Bonferroni corrections. Finally, the recent dramatic increase in monitoring technology has enhanced the ease with which large volumes of data about an athlete can be collected. Therefore, while it is a limitation that these data were collected in 2017/18, it is now, more than ever, important for practitioners to consider the broader context of monitoring beyond metric-related factors.

Practical applications

The information discussed in this manuscript is most likely to benefit practitioners monitoring elite amateur athletes but may also generalise to professional sport settings. These findings are important in the international sport context because they assist practitioners in developing AMS that improve the monitoring experience and deliver better results for key stakeholders. They provide an important call to consider socio-environmental factors alongside metric-related factors when evaluating the effectiveness of an AMS.

  • Ensure scientific evidence underpins any custom ASRM to promote both practitioner confidence in the metric and athlete feedback.

  • Formal AMS can provide value but are not necessarily required to develop internationally successful athletes.

  • Socio-environmental factors, such as buy-in, should be considered alongside metric-related factors in an AMS.

Conclusion

Practitioners working across a range of elite sports in the UK reported their perceptions of their AMS efficacy in an online survey. Common issues included a lack of confidence in the sensitivity of ASRM, which correlated with ASRM lacking a scientific underpinning. Difficulties establishing monitoring buy-in with coaches and athletes were also reported. Providing sufficient feedback to athletes was statistically correlated with increased athlete monitoring adherence. The difference between some practitioners' beliefs (that a lack of monitoring compromises performance) and reality (some internationally successful athletes do not complete their monitoring) indicates that the efficacy of monitoring should be regularly reviewed, with an eye on both socio-environmental and metric-related factors, to ensure it is providing value.

Supplemental material

Questionnaire.xlsx


Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online https://doi.org/10.1080/02640414.2024.2330178

Additional information

Funding

The author(s) reported that there is no funding associated with the work featured in this article.

References

  • Akenhead, R., & Nassis, G. P. (2016). Training load and player monitoring in high-level football: Current practice and perceptions. International Journal of Sports Physiology and Performance, 11(5), 587–593. https://doi.org/10.1123/ijspp.2015-0331
  • Bailey, C. (2019). Longitudinal monitoring of athletes: Statistical issues and best practices. Journal of Science in Sport and Exercise, 1(3), 217–227. https://doi.org/10.1007/s42978-019-00042-4
  • Barboza, S. D., Bolling, C. S., Nauta, J., Van Mechelen, W., & Verhagen, E. (2017). Acceptability and perceptions of end-users towards an online sports-health surveillance system. BMJ Open Sport & Exercise Medicine, 3(1), e000275. https://doi.org/10.1136/bmjsem-2017-000275
  • Buchheit, M. (2017). Houston, we still have a problem. International Journal of Sports Physiology and Performance, 12(8), 1111–1114. https://doi.org/10.1123/ijspp.2017-0422
  • Collins, D., Collins, L., & Carson, H. J. (2016). “If it feels right, do it”: Intuitive decision making in a sample of high-level sport coaches. Frontiers in Psychology, 7(APR), 1–10. https://doi.org/10.3389/fpsyg.2016.00504
  • Coyne, J. O. C., Gregory Haff, G., Coutts, A. J., Newton, R. U., & Nimphius, S. (2018). The Current state of subjective training load monitoring—a practical perspective and call to action. Sports Medicine - Open, 4(1). https://doi.org/10.1186/s40798-018-0172-x
  • Cunniffe, B., Griffiths, H., Proctor, W., Jones, K. P., Baker, J. S., & Davies, B. (2009). Illness monitoring in team sports using a web-based training diary. Clinical Journal of Sport Medicine: Official Journal of the Canadian Academy of Sport Medicine, 19(6), 476–481. https://doi.org/10.1097/JSM.0b013e3181c125d3
  • Duignan, C., Doherty, C., Caulfield, B., & Blake, C. (2020). Single-item self-report measures of team-sport athlete wellbeing and their relationship with training load: A systematic review. Journal of Athletic Training, 55(9), 944–953. https://doi.org/10.4085/1062-6050-0528.19
  • Duignan, C. M., Slevin, P. J., Caulfield, B. M., & Blake, C. (2019). Mobile athlete self-report measures and the complexities of implementation. Journal of Sports Science and Medicine, 18(3), 405–412.
  • Eysenbach, G. (2004). Improving the quality of web surveys: The checklist for reporting results of internet E-Surveys (CHERRIES). Journal of Medical Internet Research, 6(3), e34. https://doi.org/10.2196/jmir.6.3.e34
  • Halson, S. L. (2014). Monitoring training load to understand fatigue in athletes. Sports Medicine, 44(Suppl 2), S139–147. https://doi.org/10.1007/s40279-014-0253-z
  • Heidari, J., Beckmann, J., Bertollo, M., Brink, M., Kallus, K. W., Robazza, C., & Kellmann, M. (2018). Multidimensional monitoring of recovery status and implications for performance. International Journal of Sports Physiology and Performance, 14(1), 2–8. https://doi.org/10.1123/ijspp.2017-0669
  • Impellizzeri, F. M., Marcora, S. M., & Coutts, A. J. (2019). Internal and external training load: 15 years on. International Journal of Sports Physiology and Performance, 14(2), 270–273. https://doi.org/10.1123/ijspp.2018-0935
  • Impellizzeri, F. M., Tenan, M. S., Kempton, T., Novak, A., & Coutts, A. J. (2020). Acute: Chronic workload ratio: Conceptual issues and fundamental pitfalls. International Journal of Sports Physiology & Performance, 15(6), 907–913. https://doi.org/10.1123/ijspp.2019-0864
  • Jeffries, A. C., Wallace, L., Coutts, A. J., McLaren, S. J., McCall, A., & Impellizzeri, F. M. (2020). Athlete-reported outcome measures for monitoring training responses: A systematic review of risk of bias and measurement property quality according to the COSMIN guidelines. International Journal of Sports Physiology and Performance, 15(9), 1203–1215. https://doi.org/10.1123/ijspp.2020-0386
  • Jowett, S., & Cockerill, I. M. (2003). Olympic medallists’ perspective of the athlete-coach relationship. Psychology of Sport and Exercise, 4(4), 313–331. https://doi.org/10.1016/S1469-0292(02)00011-0
  • Manley, A., & Williams, S. (2022). ‘We’re not run on numbers, We’re people, We’re emotional people’: Exploring the experiences and lived consequences of emerging technologies, organizational surveillance and control among elite professionals. Organization, 29(4), 1–22. Published online 2019. https://doi.org/10.1177/1350508419890078
  • Mathews, C., & Crocker, T. (2014). Defining buy-in - introducing the buy-in continuum. Risk Management, 61(5), 22–26. https://doi.org/10.1108/17506200710779521
  • McCall, A., Wolfberg, A., Ivarsson, A., Dupont, G., Larocque, A., & Bilsborough, J. (2023). A Qualitative Study of 11 World-Class Team-Sport Athletes’ Experiences Answering Subjective Questionnaires: A Key Ingredient for ‘Visible’ Health and Performance Monitoring? https://doi.org/10.1007/s40279-023-01814-3
  • McGuigan, H. E., Hassmén, P., Rosic, N., Thornton, H. R., & Stevens, C. J. (2023). Does education improve adherence to a training monitoring program in recreational athletes? International Journal of Sports Science & Coaching, 18(1), 101–113. Published online January 24. https://doi.org/10.1177/17479541211070789
  • McKay, A. K. A., Stellingwerff, T., Smith, E. S., Martin, D. T., Mujika, I., Goosey-Tolfrey, V. L., Sheppard, J., & Burke, L. M. (2022). Defining training and performance caliber: A participant classification framework. International Journal of Sports Physiology and Performance, 17(2), 317–331. https://doi.org/10.1123/ijspp.2021-0451
  • Neupert, E. C., Cotterill, S. T., & Jobson, S. A. (2019). Training-monitoring engagement: An evidence-based approach in elite Sport. International Journal of Sports Physiology and Performance, 14(1), 99–104. https://doi.org/10.1123/ijspp.2018-0098
  • Neupert, E., Gupta, L., Holder, T., & Jobson, S. A. (2022). Athlete monitoring practices in elite sport in the United Kingdom. Journal of Sports Sciences, 40(13), 1–8. https://doi.org/10.1080/02640414.2022.2085435
  • Saw, A. E., Halson, S., & Mujika, I. (2018). Monitoring athletes during training camps: Observations and translatable strategies from elite road cyclists and swimmers. Sports, 6(3), 63. https://doi.org/10.3390/sports6030063
  • Saw, A. E., Kellmann, M., Main, L. C., & Gastin, P. B. (2017). Athlete self-report measures in research and practice: Considerations for the discerning reader and fastidious practitioner. International Journal of Sports Physiology and Performance, 12(s2), S2-127–S2-135. https://doi.org/10.1123/ijspp.2014-0539
  • Saw, A. E., Main, L. C., & Gastin, P. B. (2015a). Impact of sport context and support on the use of a self-report measure for athlete monitoring. Journal of Sports Science and Medicine, 14(4), 732–739.
  • Saw, A. E., Main, L. C., & Gastin, P. B. (2015b). Monitoring athletes through self-report: Factors influencing implementation. Journal of Sports Science and Medicine, 14(1), 137–146. https://doi.org/10.1519/JSC.0000000000000499
  • Saw, A. E., Main, L. C., & Gastin, P. B. (2015c). Monitoring the athlete training response: Subjective self-reported measures trump commonly used objective measures: A systematic review. British Journal of Sports Medicine, 50(5), 281–291. https://doi.org/10.1136/bjsports-2015-094758
  • Smith, B. (2018). Generalizability in qualitative research: Misunderstandings, opportunities and recommendations for the sport and exercise sciences. Qualitative Research in Sport, Exercise and Health, 10(1), 137–149. https://doi.org/10.1080/2159676X.2017.1393221
  • Starling, L. T., & Lambert, M. L. (2018). Monitoring rugby players for fitness and fatigue: What do coaches want? International Journal of Sports Physiology and Performance, 13(6), 777–782. https://doi.org/10.1123/ijspp.2017-0416
  • Tracey, T. J. G. (2016). A note on socially desirable responding. Journal of Counseling Psychology, 63(2), 224–232. https://doi.org/10.1037/cou0000135
  • Vaughan, J., Mallett, C. J., Davids, K., Potrac, P., & López-Felip, M. A. (2019). Developing creativity to enhance human potential in sport: A wicked transdisciplinary challenge. Frontiers in Psychology, 10, 2090. https://doi.org/10.3389/fpsyg.2019.02090
  • West, S. W., Clubb, J., Torres-Ronda, L., Howells, D., Leng, E., Vescovi, J. D., Carmody, S., Posthumus, M., Dalen-Lorentsen, T., & Windt, J. (2021). More than a metric: How training load is used in Elite Sport for Athlete Management. International Journal of Sports Medicine, 42(4), 300–306. Published online 2020. https://doi.org/10.1055/a-1268-8791
  • Weston, M. (2018). Training load monitoring in elite English soccer: A comparison of practices and perceptions between coaches and practitioners. Science and Medicine in Football, 2(3), 216–224. https://doi.org/10.1080/24733938.2018.1427883

Appendix 1

Checklist for Reporting Results of Internet E-Surveys (CHERRIES)