Research Article

Longitudinal comparison of the self-administered ALSFRS-RSE and ALSFRS-R as functional outcome measures in ALS

Received 05 Dec 2023, Accepted 19 Feb 2024, Published online: 19 Mar 2024

Abstract

Objective: To test the feasibility, adherence rates, and optimal frequency of digital, remote assessments using the ALSFRS-RSE via a customized smartphone-based app. Methods: This fully remote, longitudinal study was conducted over a 24-week period, with virtual visits every 3 months and weekly digital assessments. Nineteen ALS participants completed digital assessments via smartphone, including a digital version of the ALSFRS-RSE and a mood survey. Intraclass correlation coefficients (ICC) and Bland-Altman plots were used to assess agreement between staff-administered and self-reported ALSFRS-R pairs. Longitudinal change was evaluated using ANCOVA models and linear mixed models, including the impact of mood and time of day. The impact of the frequency of administration of the ALSFRS-RSE on the precision of the estimated slope was tested using a mixed effects model. Results: In our ALS cohort, digital assessments were well accepted and adherence was robust, with completion rates of 86%. There was excellent agreement between digital self-entry and staff-administered scores across multiple ICC forms (ICC range = 0.925–0.961), with ALSFRS-RSE scores slightly higher (by 1.304 points). More frequent digital assessments increased the precision of the slope estimate, yielding higher standardized response mean estimates, though the benefit appeared to diminish at biweekly and weekly frequencies. Effects of participant mood and time of day on total ALSFRS-RSE score were evaluated but were minimal and not statistically significant. Conclusion: Remote collection of digital patient-reported outcomes of functional status, such as the ALSFRS-RSE, yields more accurate estimates of change over time and provides a broader understanding of the lived experience of people with ALS.

Introduction

In amyotrophic lateral sclerosis (ALS), disease severity and progression of disability are traditionally measured using the Revised Amyotrophic Lateral Sclerosis Rating Scale (ALSFRS-R) (Citation1). This rating scale comprises a staff-administered 12-item questionnaire that reviews function in four domains: bulbar, fine motor, gross motor, and respiratory. The ALSFRS-R is accessible for widespread use and can also be administered over the phone or online (Citation2, Citation3). Functional assessment via the ALSFRS-R is one of the most common outcome measures used in ALS clinical trials and is widely accepted as a clinically meaningful outcome measure (Citation4). Although initially developed as a trial outcome measure, this scale is now part of routine clinical assessment and is increasingly being studied as a self-assessment tool for patients (Citation5–7).

Collecting outcome measures can be complex in ALS. Outcome measures are often collected at in-person visits, requiring participants to travel to specialized centers for assessment by trained providers. This can be time-intensive, cost-prohibitive, and physically and emotionally burdensome for people with ALS, especially as the disease progresses (Citation8). These factors often lead to missed visits and thus substantial missing data, which limits statistical power and reduces the ability to detect treatment response in trials (Citation9–11). Importantly, these challenges also limit trial enrollment of a geographically and socio-economically diverse pool of participants, undermining equitable participation and potentially restricting generalizability of trial results (Citation12–14). Even remote research visits conducted by phone can be time-consuming and require dedicated, trained staff.

In recent years, the benefits of formally incorporating telehealth and remote assessments into patient care and clinical trials have become increasingly clear. An important aspect of this transition is the development and validation of scales and questionnaires that can be administered remotely to quantify clinically relevant symptoms. Adapted versions of the ALSFRS-R for self-administration, known as the ALSFRS-RSE (‘self-entry’), have been developed and shown to be reliable and highly concordant with the traditionally administered scale (Citation5–7, Citation15–17), supporting their use in routine ALS patient care and clinical trials.

The ubiquity of smartphones facilitates the use of smartphone applications to monitor disease progression and collect patient-reported outcome measures (PROMs). Self-administration of scales and questionnaires using this approach can potentially reduce the burden of trial endpoints on participants, boost statistical power, simplify trial participation, reduce costs, and allow for more personalized management of care (Citation18, Citation19). Remote monitoring in ALS using the ALSFRS-RSE has already been explored (Table 1); however, the optimal sampling frequency of in-home assessments has not yet been determined. This study aimed to investigate the feasibility, adherence rates, and optimal frequency of assessments using the self-administered ALSFRS-RSE via a customized smartphone-based app.

Table 1. Published reports comparing traditional ALSFRS-R to self-entry ALSFRS-RSE.

Methods

Study design

This single-center, longitudinal observational study was designed to evaluate a comprehensive battery of digital assessments of neurological function in both participants with ALS and control participants. The study was fully remote, with 3 virtual visits with clinic staff and weekly digital assessments completed independently by participants over a 24-week period (Figure 1). The study was approved by the Mass General Brigham (MGB) Institutional Review Board (IRB) prior to initiation (Protocol #2018P002712). All participants provided written informed consent prior to initiation of any study procedures, and data were collected and stored in compliance with MGB policies and regulations and state and national laws. Data collection, management, and security were reviewed by the MGB Information Security Office prior to study initiation.

Figure 1. Study design.


Recruitment

Participants with ALS and healthy controls were recruited from the multidisciplinary ALS clinic at MGH as well as through study recruitment materials posted on-site and sent via email blasts to an opt-in email distribution list through the Sean M. Healey & AMG Center for ALS at MGH. Individuals who had participated in previous observational studies who had agreed to be contacted regarding future research opportunities were also sent recruitment materials.

All participants were at least 18 years old. Participants with ALS met El Escorial Criteria for possible, probable lab-supported, probable, or definite ALS. Control participants did not carry a diagnosis of ALS or any other neurological condition expected to impact their performance on the digital assessments, and had no first-degree relatives with known genetic forms of ALS.

Demographic and clinical measures

At baseline, a clinical history, including time since symptom onset, time since diagnosis, diagnostic criteria, site of onset, and current medications, was obtained by phone interview with study staff and confirmed via a review of medical records. Additionally, during phone calls with study staff at baseline (week 1), week 13, and week 25 (± 7 days), participants completed the ALSFRS-R, Neurological Fatigue Index for Motor Neuron Disease (NFI-MND) (Citation20), and ALS-Specific Quality of Life Questionnaire-Brief Form (ALSSQOL-20) (Citation21), reviewed any adverse events related to the study, and provided feedback on both the mobile application and its implementation on the smartphone devices, including usability and technical issues.

Digital outcome assessments

After providing informed consent, participants were provisioned an Apple smartphone (iPhone 8+ or 12 Pro; OS versions 12.1-15.1) and smartwatch (Apple Watch 4 or 5; OS versions 5.1.2-8.0). They were instructed on how to access the application, but instructions for each survey/task were provided within the application. On a weekly basis, participants completed both a mobile assessment battery customized for the study (BrainBaseline©, Clinical ink; Horsham, PA) and a point-and-click computer mouse task (Citation22). The mobile assessment battery was completed on the provisioned phone and smartwatch. The battery was customized to capture patient-reported outcomes of mood, fatigue, and symptoms of ALS; it also evaluated cognitive and motor function using prespecified tasks and captured speech for motor speech analysis. A summary of the tasks included in the mobile assessment battery is presented in Supplementary Table 1. Once a participant began a session, all activities for that window were to be completed within one hour. If the full battery was not finished within that hour, the session was recorded as an unsuccessful attempt, but data from any completed activities were still saved.

For the purposes of the present analysis, we focused on patient-reported outcomes from the digital battery in ALS participants only. The symptom survey consisted of a series of 7-point Likert scales asking participants to report how they were feeling at that moment across six questions: their mood, current fatigue, ability to think clearly, previous night’s sleep, current sleepiness, and current tiredness. Scales were oriented horizontally, with responses associated with greater symptom burden on the right. The digital format of the self-administered ALSFRS-RSE was adapted from a pen-and-paper version (Citation15). Care was taken to replicate the text, font, and punctuation of the questions and response options identically to the originally published scale. However, on the digital ALSFRS-RSE, the response options for each question did not fit entirely on the smartphone screen; participants had to scroll down to see the initially off-screen responses, which were those associated with greater disability (and lower scores).

Each week, participants had a three-day window to complete the assessments. Questions were presented one at a time. While in the survey, participants could navigate back to change an answer; however, they could not proceed to the next question before answering the current one. If the survey was terminated early, partially completed survey data were not saved. The deidentified raw data from the phone were automatically transferred from participants’ devices to a secure cloud, where the data could be accessed by study staff for analysis.

Statistical analysis

The R statistical software package (version 4.2.0) and the Python programming language were used for data analysis. Demographic and clinical characteristics were summarized with descriptive statistics. To evaluate the feasibility of the remote ALSFRS-RSE, we computed adherence metrics as a proportion of the total expected number of assessments. The baseline ALSFRS-R was used to determine the pre-enrollment rate of decline (points/month), defined as: (48 − baseline total score)/(months since symptom onset).
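As a minimal illustration, the pre-enrollment rate-of-decline calculation defined above can be sketched as follows (the participant values shown are hypothetical):

```python
def pre_enrollment_decline(baseline_total: float, months_since_onset: float) -> float:
    """Pre-enrollment rate of decline in points per month.

    Uses the paper's definition: (48 - baseline total score) / (months
    since symptom onset), where 48 is the maximum ALSFRS-R total score.
    """
    return (48 - baseline_total) / months_since_onset

# Hypothetical participant: baseline total of 36, 24 months after symptom onset
rate = pre_enrollment_decline(36, 24)  # 0.5 points per month
```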

Observed longitudinal change in ALSFRS-R and ALSFRS-RSE was evaluated separately using two approaches: (1) ANCOVA models with end-of-study change scores as the outcome, and (2) linear mixed effects models (fit using the lme4 package v1.1-30) on the raw scores, with fixed and random effects for time (months since baseline) and intercept to allow the regression to vary by participant while also estimating an overall population intercept and slope. All models were additionally adjusted for disease duration at baseline (months) and concomitant medication status (taking edaravone or riluzole vs. not) as fixed effects.

Intraclass correlation coefficients and Bland-Altman plots were used to assess the agreement between versions of the ALSFRS-R for making point estimates of function. Digital and traditional, phone-administered assessment pairs were included in the agreement analyses if both were successfully completed and if they occurred within 5 days of each other.
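A Bland-Altman analysis reduces to a mean difference (bias) and its 95% limits of agreement; a small sketch with hypothetical paired totals:

```python
from statistics import mean, stdev

def bland_altman(self_entry, staff):
    """Mean difference (bias) and 95% limits of agreement for paired scores."""
    diffs = [a - b for a, b in zip(self_entry, staff)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)  # assumes approximately normal differences
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired totals (digital ALSFRS-RSE, staff-administered ALSFRS-R)
rse_scores = [41, 35, 30, 28, 22]
staff_scores = [40, 34, 28, 27, 21]
bias, (loa_low, loa_high) = bland_altman(rse_scores, staff_scores)  # bias = 1.2
```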

Bias in the total score and subscores of the digital ALSFRS-RSE relative to the traditional, phone-administered ALSFRS-R was assessed holistically on the entire dataset using a linear mixed effects model with fixed effects for digital versus traditional administration and time (months since baseline), and random effects for the intercept and time. The fixed effect estimate for digital versus traditional administration was used as the estimate of bias.

To determine the impact of the frequency of administration of the ALSFRS-RSE on the precision of the estimated slope, we fit the mixed effects model to both the original weekly ALSFRS-RSE data and to datasets that were downsampled at 2-, 4-, and 12-week intervals. End-of-study change estimates with 95% confidence intervals were computed and compared across all models. In addition, all modeling approaches were compared using model-based standardized response mean (SRM) estimates, computed as SRM_model = (change estimate)/(standard error of change estimate × √sample size).
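The model-based SRM divides the estimated change by its implied standard deviation, which is the standard error scaled by the square root of the sample size. A small sketch (the numbers are hypothetical, not the study's estimates):

```python
from math import sqrt

def srm_model(change_estimate: float, std_error: float, n: int) -> float:
    """Model-based standardized response mean.

    The implied SD of change is std_error * sqrt(n), so
    SRM = change_estimate / (std_error * sqrt(n)).
    """
    return change_estimate / (std_error * sqrt(n))

# Hypothetical: -4.0 point end-of-study change, SE of 0.5, 19 participants
srm = srm_model(-4.0, 0.5, 19)
```

Note that a smaller standard error (as obtained with more frequent sampling) yields a larger SRM magnitude for the same estimated change.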

To determine the potential impact of increased sampling frequency on required sample size in a clinical trial, we computed power curves based on a hypothetical two-arm trial of the same 6-month duration as this observational study. Power was computed via a simulation-based approach; trial data were simulated based on the parameter estimates of the fitted mixed effects model from the full weekly ALSFRS-RSE data. Mean rate of decline in the placebo arm was assumed to be the same as that observed in our study, and the treatment effect size was assumed to be a halving of the linear rate of decline over the 6-month study period. Power to detect the assumed effect size was computed for quarterly, monthly, biweekly, and weekly assessment frequency, and for sample sizes between 40 and 100 per study arm. For each simulation setting, 5000 simulated trials were used to compute power.
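A stripped-down version of the simulation loop conveys the idea: here each arm's 6-month changes are drawn directly from normal distributions and compared with a two-sample z-test. The variance and effect parameters are hypothetical stand-ins, not the fitted mixed-model estimates used in the actual analysis:

```python
import random
from math import sqrt
from statistics import mean, stdev

def simulated_power(n_per_arm, placebo_slope=-0.66, effect=0.5,
                    sd_change=4.0, n_sims=500, seed=7):
    """Approximate power to detect a treatment effect on 6-month change.

    effect = 0.5 encodes a halving of the linear rate of decline;
    sd_change (SD of individual 6-month change) is a hypothetical value.
    Detection uses a two-sample z-test at alpha = 0.05.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        placebo = [rng.gauss(placebo_slope * 6, sd_change)
                   for _ in range(n_per_arm)]
        treated = [rng.gauss(placebo_slope * (1 - effect) * 6, sd_change)
                   for _ in range(n_per_arm)]
        se = sqrt(stdev(placebo) ** 2 / n_per_arm
                  + stdev(treated) ** 2 / n_per_arm)
        if abs(mean(treated) - mean(placebo)) / se > 1.96:
            hits += 1
    return hits / n_sims

power_70 = simulated_power(70)
```

Sweeping `n_per_arm` over a grid of sample sizes produces a power curve; in the full analysis, each simulated trial would instead be generated from, and refit with, the mixed effects model.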

To investigate the importance of mood and time of day in explaining variability in ALSFRS-RSE total and subscale scores, we implemented mixed effects models similar to those described above, but with additional fixed effects for mood and time of day. Because data from the mood question on the symptom survey were not uniformly distributed over the range of possible scores (higher scores were underrepresented), we dichotomized the mood score into two categories: happy (scores ≤ 2) or sad (> 2). Time of day was treated as categorical with the following levels: 5-11am, 11am-6pm, 6pm-5am.
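The two covariate recodings described above amount to simple threshold rules; a minimal sketch:

```python
def mood_category(score: int) -> str:
    """Dichotomize the 7-point mood item: <= 2 is 'happy', otherwise 'sad'."""
    return "happy" if score <= 2 else "sad"

def time_of_day_bin(hour: int) -> str:
    """Map an assessment start hour (0-23) onto the three study levels."""
    if 5 <= hour < 11:
        return "5-11am"
    if 11 <= hour < 18:
        return "11am-6pm"
    return "6pm-5am"
```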

The study design did not enable full characterization of potential bias due to off-screen answer choices in the ALSFRS-RSE. To partially characterize potential off-screen bias, a binomial generalized linear mixed effects model was fit to both ALSFRS-R and ALSFRS-RSE data, with the number of answers selected from the off-screen range (in the ALSFRS-RSE) out of the 12 possible as the outcome, assumed to follow a Binomial(12, p) distribution. The fixed effect of interest was assessment type (self-entry versus traditional), and the model was additionally adjusted for total ALSFRS-R score and time in the study as fixed effects, along with a participant-level random intercept.
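Constructing the binomial outcome amounts to counting, for each completed assessment, how many of the 12 item responses fell in the initially off-screen range. The paper states only that lower-score options required scrolling, so the cutoff below is a hypothetical assumption for illustration:

```python
def offscreen_count(item_scores, cutoff=2):
    """Count responses (each item scored 0-4) at or below a hypothetical
    cutoff marking the options that required scrolling to see."""
    if len(item_scores) != 12:
        raise ValueError("expected 12 ALSFRS-R item scores")
    return sum(1 for s in item_scores if s <= cutoff)

# A hypothetical assessment: six items scored 4, six items scored 1
k = offscreen_count([4, 4, 4, 4, 4, 4, 1, 1, 1, 1, 1, 1])  # 6 of 12
```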

Results

Participant demographics, clinical characteristics and adherence

The present analysis is focused on the ALS cohort only. Nineteen participants with ALS were enrolled and all were included in the analysis. Demographic and clinical characteristics of the cohort are summarized in Table 2. The mean age at time of enrollment was 60.6 years (SD = 5.6, range 41–75 years), with an average disease duration from symptom onset of 51.9 months (SD = 34.7, range 14.65–127.02 months). The cohort consisted of participants with a wide range of clinical disease severity, with a mean total ALSFRS-R score at baseline of 31.8 (SD = 7.9, range 8–41) and a mean pre-enrollment rate of decline of −0.41 points per month.

Table 2. Participant characteristics.

Digital assessments were well accepted, and adherence was robust in the ALS cohort (Suppl. Figure 1). Completion rates for the digital ALSFRS-RSE were 86.1% (409 of the expected 475 assessments were completed), with the remainder being skipped within the battery (<1%), incomplete despite initiating the battery (1.1%), or associated with a fully missed assessment battery (12.6%). All participants completed the three scheduled study visits at baseline, 3, and 6 months, with scores recorded for the ALSFRS-R. There was variability in participant-level and weekly adherence rates throughout the study (Suppl. Figure 1).

Agreement between the traditional ALSFRS-R and digital ALSFRS-RSE

Over the 6-month follow-up period, a total of 46 paired assessments from all 19 ALS participants were evaluated (Suppl. Figure 2). We computed multiple ICC forms and found excellent agreement between the traditional ALSFRS-R and digital ALSFRS-RSE scores (ICC range = 0.925 to 0.961) (Suppl. Table 2). There was a constant bias of +1.3 points (LoA: −4.25 to 6.86), with ALSFRS-RSE scores being higher (Figure 2). Using the full dataset to estimate bias associated with the digital ALSFRS-RSE relative to the traditional ALSFRS-R yielded a similar result: an average bias estimate (95% CI) of +1.00 (0.59, 1.41) points for the total score. Contributions to this bias were not uniform across subdomains, with mean bias estimates (95% CI) of +0.55 (0.36, 0.74) for fine motor, +0.34 (0.17, 0.50) for gross motor, +0.09 (−0.04, 0.22) for bulbar, and +0.04 (−0.13, 0.22) for respiratory subscores (Suppl. Figure 3).

Figure 2. (a) Agreement between 46 paired ALSFRS-RSE and ALSFRS-R assessments from all 19 ALS participants was high (ICC Range = 0.925 to 0.961). (b) Bland-Altman analysis revealed a constant bias of +1.3 points (LoA: -4.25 to 6.86) with ALSFRS-RSE scores being higher.


Figure 3. Total scores over time in each participant from the traditional and self-entry versions of the ALSFRS-R. Data from all 19 ALS participants are represented in each panel.


Bias toward higher scores in the digital assessment did not appear to be due to the off-screen effect. Holding total ALSFRS-R score and time in study constant, the odds of a participant selecting off-screen answers associated with more severe symptoms were 11.3% greater in the digital assessment than in the traditional assessment (95% CI: −7.3%, 33.8%).

Longitudinal change and the impact of assessment frequency on responsiveness

Both traditional and self-reported scores decreased over time (Figure 3). With assessments administered every three months, the global slope estimates (95% CI) for monthly change were similar between modalities: −0.55 (−0.93, −0.16) points per month for the ALSFRS-R, and −0.66 (−1.08, −0.23) points per month for the ALSFRS-RSE. In both the ALSFRS-R and ALSFRS-RSE, this decline in total score was distributed across the four subdomains (bulbar, fine motor, gross motor, respiratory), with all subdomains showing decline. At this three-month sampling rate, confidence interval width was greater for the ALSFRS-RSE than for the ALSFRS-R, indicating higher natural variability in the digital assessments. However, more frequent administration of the digital assessments led to increased within-subject precision, narrowing the resulting CIs for the rate of decline (Figure 4). This effect was observed up through weekly assessments, though the narrowing in the 95% CI from fortnightly to weekly sampling was minimal. Slope estimates for monthly ALSFRS-RSE change were similar across sampling frequencies. Precision in end-of-study mean change estimates was higher for all slope-based linear mixed effects models than for the change score ANCOVA models, as reflected by higher standardized response mean estimates (Figure 5).

Figure 4. Mixed effects models were fit to both the traditional ALSFRS-R (administered every 3 months) and weekly ALSFRS-RSE data as well as to datasets that were down sampled at 2-, 4-, and 12-week intervals to estimate monthly change in total and subscale scores. Monthly change (95% CI) was similar between the 12-week interval ALSFRS-R and ALSFRS-RSE: -0.55 (-0.93, -0.16) and -0.66 (-1.08, -0.23) points per month, respectively. Decline was observed in each of the four subdomains of the ALSFRS-RSE: bulbar -0.12 (-0.27, 0.04), fine motor -0.26 (-0.40, -0.12), gross motor -0.21 (-0.35, -0.06), and respiratory -0.08 (-0.20, 0.05). Narrower confidence intervals are observed with more frequent sampling.


Figure 5. Linear mixed effects models were used to compare estimated end-of-study (EOS) change (at approximately 6 months) between the ALSFRS-R and ALSFRS-RSE. EOS change was estimated either from an ANCOVA model applied to 6-month change scores or from the linear slope mixed model results on the digital ALSFRS-RSE scores. All models were adjusted for disease duration at baseline and medication use. The change ANCOVA model is additionally adjusted for the baseline score.


Power curves for detecting the hypothetical treatment effect (halving of the rate of decline) based on the simulated trial scenarios are presented in Figure 6. Modest but important gains in power are observed. For example, the per-arm sample size required for 80% power is 82 when the ALSFRS-RSE is sampled quarterly but only 67 when sampled weekly, an 18% decrease. As in the real data analysis, in this hypothetical trial scenario we also begin to see diminishing returns for weekly sampling versus fortnightly.

Figure 6. Power curves for the treatment effect (slowing of the rate of decline) based on a hypothetical two-arm trial scenario with a duration of 6 months, using model parameters fitted to the data from this observational study, with an assumed treatment effect size equivalent to halving the observed linear rate of decline over the study period.


After accounting for time in study and between-individual variability, effects of patient-reported mood and assessment time of day on ALSFRS-RSE total score were neither clinically meaningful nor statistically significant (Suppl. Figure 4). Mood > 2 versus ≤ 2, 5-11am versus 11am-6pm, and 6pm-5am versus 11am-6pm were each associated with only slightly higher total scores (less than 0.25 points on average), and all confidence intervals overlapped zero.

Discussion

The use of mobile and wearable technology, including smartphones, to gather biometric and patient reported outcome data is transforming both clinical practice and clinical trials in ALS (Citation23–26).

The ALSFRS-R was developed as a staff-administered scale, which limits the frequency at which it can reasonably be administered. Remote collection of a digital ALSFRS-R for Self-Entry (ALSFRS-RSE) has been shown to be feasible in prior studies (Citation5, Citation6, Citation15, Citation17, Citation27). In our current cohort, we achieved very high completion rates of 86% amongst participants with ALS. In this study, a great deal of thought went into the user interface and study design, and there was careful monitoring of data completion in real-time by the study team, suggesting that these are modifiable factors that can influence study retention and compliance with mobile digital endpoints. Despite these precautions, one participant appears to have responded incorrectly on the ALSFRS-RSE during their first two weeks and seemed to self-correct by week three. Because digital literacy, technical malfunction, and misunderstandings pose real-world challenges to the implementation of digital health technologies in clinical practice, we elected to present all data in the figure and analysis.

Moreover, we found excellent agreement between the traditional staff-administered ALSFRS-R and the digital ALSFRS-RSE. Our finding that participants’ ALSFRS-RSE scores were higher than ALSFRS-R is consistent with other studies (Citation5, Citation17, Citation27) suggesting that the ALSFRS-R and ALSFRS-RSE are not fully interchangeable.

Importantly, increasing ALSFRS-RSE assessment frequency decreased the uncertainty around the global slope estimate to the point where it was lower than that of the traditional ALSFRS-R. The reduction in uncertainty was achieved by improving within-subject precision; higher sampling frequency cannot affect between-subject precision, which can only be improved by increasing the number of subjects. However, as illustrated in our power analysis, this increase in within-subject precision may translate into important gains in power and potentially to smaller sample sizes for ALS trials. Different trial durations, assumed effect sizes, and statistical analysis approaches may change what frequency is optimal.

Additional PROs accompanying the ALSFRS-RSE indicated that mood and time of day did not significantly influence ALSFRS-RSE score. Finally, screen layout did not appear to substantially influence participants’ answer choices, with a small and statistically nonsignificant effect, though the study design did not enable a full characterization.

This study has several strengths. The design was prospective, and the assessments were formally scheduled for weekly administration. Follow-up was six months, and the agreement analysis was based on a high number of paired assessments. Limitations of the study include the relatively small sample size, as well as the potential for self-selection of participants with slowly progressing disease, which has been seen in other observational studies (Citation28–30). Moreover, while we did consider mood as a potential confounding factor, the accuracy of self-reported scores may be compromised by cognitive deficits, which were not assessed in our study.

Overall, our findings suggest that remote collection of digital PROs like the ALSFRS-RSE is feasible and that attention to user interface and frequent engagement can ensure good participant adherence and retention. The ALSFRS-RSE yields accurate measures of disability relative to the ALSFRS-R, but the data are not interchangeable, which is consistent with previous studies. While there have been high correlations between the ALSFRS-R and ALSFRS-RSE total scores and slopes of decline, the ALSFRS-RSE scores are consistently higher than ALSFRS-R (Citation5, Citation17, Citation27, Citation31). Frequent sampling with the ALSFRS-RSE provided more precise estimates of functional change over time and could lead to more efficient clinical trials. While the data in this study may suggest that fortnightly data collection could strike an optimal balance between participant burden and statistical power, this balance could vary depending upon different study designs and participant characteristics.

Supplemental material


Acknowledgements

We would like to thank our participants and their families for their kind contribution to research on amyotrophic lateral sclerosis.

Data availability statement

Data may be made available upon reasonable request.

Declaration of interest

MKE: at the time of this work, employee of and held stock/stock options in Biogen. NC: no conflicts of interest. RB: at the time of this work, employee of and held stock/stock options in Biogen. KMB: has received consulting fees from Cytokinetics, Inc. ZS: no conflicts of interest. AI: no conflicts of interest. AC: no conflicts of interest. MPH: no conflicts of interest. MK: no conflicts of interest. ASG: prior consultant for and received research funding from Biogen. SAJ: has received research support from the ALS Association. SC: at the time of this work, employee of and held stock/stock options in Biogen. JDB: reports research support from Biogen, MT Pharma Holdings of America, Alexion, Rapa Therapeutics, ALS Association, Muscular Dystrophy Association, ALS One, Tambourine, ALS Finding a Cure. He has been a paid member of an advisory panel for Regeneron, Biogen, Clene Nanomedicine, Mitsubishi Tanabe Pharma Holdings America, Inc., Janssen, RRT. He received an honorarium for educational events for Projects in Knowledge, Kaplan, and the Muscular Dystrophy Association. He has unpaid roles on the advisory boards for the non-profits Everything ALS and ALS One.

Additional information

Funding

This work was supported by Biogen (Cambridge, MA, USA). Biogen participated in the study design and the analysis of results presented herein.

References

  • Cedarbaum JM, Stambler N, Malta E, Fuller C, Hilt D, Thurmond B, et al. The ALSFRS-R: a revised ALS functional rating scale that incorporates assessments of respiratory function. BDNF ALS Study Group (Phase III). J Neurol Sci. 1999;169:13–21.
  • Kasarskis EJ, Dempsey-Hall L, Thompson MM, Luu LC, Mendiondo M, Kryscio R. Rating the severity of ALS by caregivers over the telephone using the ALSFRS-R. Amyotroph Lateral Scler Other Motor Neuron Disord. 2005;6:50–4.
  • Kaufmann P, Levy G, Montes J, Buchsbaum R, Barsdorf AI, Battista V, et al. Excellent inter-rater, intra-rater, and telephone-administered reliability of the ALSFRS-R in a multicenter clinical trial. Amyotroph Lateral Scler. 2007;8:42–6.
  • McElhiney M, Rabkin JG, Goetz R, Katz J, Miller RG, Forshew DA, et al. Seeking a measure of clinically meaningful change in ALS. Amyotroph Lateral Scler Frontotemporal Degener. 2014;15:398–405.
  • Berry JD, Paganoni S, Carlson K, Burke K, Weber H, Staples P, et al. Design and results of a smartphone-based digital phenotyping study to quantify ALS progression. Ann Clin Transl Neurol. 2019;6:873–81.
  • Johnson SA, Burke KM, Scheier ZA, Keegan MA, Clark AP, Chan J, et al. Longitudinal comparison of the self-entry Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-RSE) and Rasch-Built Overall Amyotrophic Lateral Sclerosis Disability Scale (ROADS) as outcome measures in people with amyotrophic lateral sclerosis. Muscle Nerve. 2022;66:495–502.
  • Maier A, Boentert M, Reilich P, Witzel S, Petri S, Großkreutz J, et al. ALSFRS-R-SE: an adapted, annotated, and self-explanatory version of the revised amyotrophic lateral sclerosis functional rating scale. Neurol Res Pract. 2022;4:60.
  • Rutkove SB, Narayanaswami P, Berisha V, Liss J, Hahn S, Shelton K, et al. Improved ALS clinical trials through frequent at-home self-assessment: a proof of concept study. Ann Clin Transl Neurol. 2020;7:1148–57.
  • Atassi N, Yerramilli-Rao P, Szymonifka J, Yu H, Kearney M, Grasso D, et al. Analysis of start-up, retention, and adherence in ALS clinical trials. Neurology. 2013;81:1350–5.
  • Iakovakis D, Hadjidimitriou S, Charisis V, Bostantzopoulou S, Katsarou Z, Hadjileontiadis LJ. Touchscreen typing-pattern analysis for detecting fine motor skills decline in early-stage Parkinson’s disease. Sci Rep. 2018;8:7663.
  • Van Eijk RPA, Beelen A, Kruitwagen ET, Murray D, Radakovic R, Hobson E, et al. A road map for remote digital health technology for motor neuron disease. J Med Internet Res. 2021;23:e28766.
  • Schwartz AL, Alsan M, Morris AA, Halpern SD. Why diverse clinical trial participation matters. N Engl J Med. 2023;388:1252–4.
  • Horton DK, Graham S, Punjani R, Wilt G, Kaye W, Maginnis K, et al. A spatial analysis of amyotrophic lateral sclerosis (ALS) cases in the United States and their proximity to multidisciplinary ALS clinics, 2013. Amyotroph Lateral Scler Frontotemporal Degener. 2018;19:126–33.
  • Bedlack RS, Pastula DM, Welsh E, Pulley D, Cudkowicz ME. Scrutinizing enrollment in ALS clinical trials: room for improvement? Amyotroph Lateral Scler. 2008;9:257–65.
  • Montes J, Levy G, Albert S, Kaufmann P, Buchsbaum R, Gordon PH, et al. Development and evaluation of a self-administered version of the ALSFRS-R. Neurology. 2006;67:1294–6.
  • Maier A, Holm T, Wicks P, Steinfurth L, Linke P, Münch C, et al. Online assessment of ALS functional rating scale compares well to in-clinic evaluation: a prospective trial. Amyotroph Lateral Scler. 2012;13:210–6.
  • Johnson SA, Karas M, Burke KM, Straczkiewicz M, Scheier ZA, Clark AP, et al. Wearable device and smartphone data quantify ALS progression and may provide novel outcome measures. NPJ Digit Med. 2023;6:34.
  • Govindarajan R, Berry JD, Paganoni S, Pulley MT, Simmons Z. Optimizing telemedicine to facilitate amyotrophic lateral sclerosis clinical trials. Muscle Nerve. 2020;62:321–6.
  • Shefner JM, Bedlack R, Andrews JA, Berry JD, Bowser R, Brown R, et al. Amyotrophic lateral sclerosis clinical trials and interpretation of functional end points and fluid biomarkers: a review. JAMA Neurol. 2022;79:1312–8.
  • Gibbons CJ, Mills RJ, Thornton EW, Ealing J, Mitchell JD, Shaw PJ, et al. Development of a patient reported outcome measure for fatigue in motor neurone disease: the Neurological Fatigue Index (NFI-MND). Health Qual Life Outcomes. 2011;9:101.
  • Felgoise SH, Feinberg R, Stephens HE, Barkhaus P, Boylan K, Caress J, et al. Amyotrophic lateral sclerosis-specific quality of life-short form (ALSSQOL-SF): a brief, reliable, and valid version of the ALSSQOL-R. Muscle Nerve. 2018;58:646–54.
  • Gajos KZ, Reinecke K, Donovan M, Stephen CD, Hung AY, Schmahmann JD, et al. Computer mouse use captures ataxia and parkinsonism, enabling accurate measurement and detection. Mov Disord. 2020;35:354–8.
  • Biogen. A Phase 3 Randomized, Placebo-Controlled Trial With a Longitudinal Natural History Run-In and Open-Label Extension to Evaluate BIIB067 Initiated in Clinically Presymptomatic Adults With a Confirmed Superoxide Dismutase 1 Mutation [Internet]. clinicaltrials.gov; 2023 Jul [cited 2022 Dec 31]. Report No.: NCT04856982. Available from: https://clinicaltrials.gov/study/NCT04856982
  • Cudkowicz ME. HEALEY ALS Platform Trial [Internet]. clinicaltrials.gov; 2023 Oct [cited 2022 Dec 31]. Report No.: NCT04297683. Available from: https://clinicaltrials.gov/study/NCT04297683
  • Shefner JM, Al-Chalabi A, Andrews JA, Chio A, De Carvalho M, Cockroft BM, et al. COURAGE-ALS: a randomized, double-blind phase 3 study designed to improve participant experience and increase the probability of success. Amyotroph Lateral Scler Frontotemporal Degener. 2023;24:523–34.
  • Walk D, Nicholson K, Locatelli E, Chan J, Macklin EA, Ferment V, et al. Randomized trial of inosine for urate elevation in amyotrophic lateral sclerosis. Muscle Nerve. 2023;67:378–86.
  • Chew S, Burke KM, Collins E, Church R, Paganoni S, Nicholson K, et al. Patient reported outcomes in ALS: characteristics of the self-entry ALS Functional Rating Scale-revised and the Activities-specific Balance Confidence Scale. Amyotroph Lateral Scler Frontotemporal Degener. 2021;22:467–77.
  • Baxi EG, Thompson T, Li J, Kaye JA, Lim RG, Wu J, et al. Answer ALS, a large-scale resource for sporadic and familial ALS combining clinical and multi-omics data from induced pluripotent cell lines. Nat Neurosci. 2022;25:226–37.
  • Benatar M, Zhang L, Wang L, Granit V, Statland J, Barohn R, et al. Validation of serum neurofilaments as prognostic and potential pharmacodynamic biomarkers for ALS. Neurology. 2020;95:e59–e69.
  • Meyer R, Spittel S, Steinfurth L, Funke A, Kettemann D, Münch C, et al. Patient-reported outcome of physical therapy in amyotrophic lateral sclerosis: Observational online study. JMIR Rehabil Assist Technol. 2018;5:e10099.
  • Bakker LA, Schröder CD, Tan HHG, Vugts SMAG, van Eijk RPA, van Es MA, et al. Development and assessment of the inter-rater and intra-rater reproducibility of a self-administration version of the ALSFRS-R. J Neurol Neurosurg Psychiatry. 2020;91:75–81.