
From concept to practice: a scoping review of the application of AI to aphasia diagnosis and management

Pages 1288-1297 | Received 13 Nov 2022, Accepted 30 Mar 2023, Published online: 12 May 2023

Abstract

Purpose

Aphasia is an acquired communication disability resulting from impairments in language processing following brain injury, most commonly stroke. People with aphasia experience difficulties across all modalities of language, which impact their quality of life. Therefore, researchers have investigated the use of Artificial Intelligence (AI) to deliver innovative solutions in aphasia management and rehabilitation.

Materials and methods

We conducted a scoping review of the use of AI in aphasia research and rehabilitation to explore the evolution of AI applications to aphasia and the progression of technologies over time. Furthermore, we aimed to identify gaps in the use of AI in aphasia to highlight potential areas where AI might add value. We analysed 77 studies to determine their research objectives, the history of AI techniques in aphasia, and their progression over time.

Results

Most studies focused on automated assessment using AI, with recent studies turning to AI for therapy and personalised assistive systems. Starting from prototypes and simulations, the use of AI has progressed to include supervised machine learning, unsupervised machine learning, natural language processing, fuzzy rules, and genetic programming.

Conclusion

Considerable scope remains to align AI technology with aphasia rehabilitation to empower patient-centred, customised rehabilitation and enhanced self-management.

IMPLICATIONS FOR REHABILITATION

  • Aphasia is an acquired communication disorder that impacts everyday functioning due to impairments in speech, auditory comprehension, reading, and writing.

  • Given this communication burden, researchers have focused on utilising artificial intelligence (AI) methods for assessment, therapy and self-management.

  • From a conceptualisation era in the early 1940s, the application of AI has evolved with significant developments in AI applications at different points in time.

  • Despite these developments, there are ample opportunities to exploit the use of AI to deliver more advanced applications in self-management and personalising care.

Introduction

Approximately fifteen million strokes occur annually, with up to 40% of stroke survivors diagnosed with aphasia [Citation1,Citation2]. Aphasia is a chronic acquired communication disability most commonly caused by damage to the brain after stroke, but it can also result from head injury, brain tumours or neurodegeneration. Aphasia impacts individuals differently across spoken language, auditory comprehension, reading, and writing.

People with aphasia (PWA) experience life-altering psychosocial consequences. They experience changes in relationships, social isolation, and difficulty reintegrating into community life [Citation3–6]. Consequently, in comparison with non-aphasic stroke survivors, they are more likely to suffer from reduced health-related quality of life, reduced rate of functional recovery, and increased incidence of post-stroke depression [Citation7–9]. Due to the complexity of living with chronic aphasia, a comprehensive approach to rehabilitation is needed that addresses both the language impairment and the broader impacts on people’s lives, but there are many challenges to achieving this.

Aphasia can manifest in highly variable ways, with individuals demonstrating varying degrees of impairment in different areas of communication. Therefore, it is crucial for rehabilitative interventions and support programs to provide personalised modifications to tackle the large variability in patient presentations. However, these highly personalised and frequently interdisciplinary interventions can place a financial burden on families and the economy, with healthcare costs post-stroke considerably higher in PWA in comparison to patients without aphasia [Citation10,Citation11]. Speech-language pathologists (SLPs) play an important role in the evaluation and classification of aphasia subtypes and in establishing interventions with personalised goals that will meet an individual’s communication and well-being needs. However, SLPs have identified barriers to the provision of comprehensive, personalised care. Barriers include staffing shortages and limited availability of community support programmes, which impact discharge planning [Citation12]. In part, this is due to the current model of service delivery that emphasises acute and sub-acute health care, with little funding focused on long-term and community services for people with chronic aphasia [Citation11].

Technology has an increasingly core role in aphasia management. Computer programmes and applications have been widely integrated into therapy and as a tool for supplemental home practice, and assistive devices have benefited from advancing technology [Citation13]. Telepractice has proven to be an additional effective method of service delivery at the individual and group level of intervention [Citation14]. A recent scoping review identified technology as a key approach in enabling the self-management of aphasia [Citation15]. PWA credited technology with augmenting their communication in activities of daily living, increasing access to information, and providing entertainment [Citation16]. However, the current research on the management of aphasia has not been universally translated into accessible and sustainable models of care. Further, the prediction of recovery from aphasia and response to treatment continues to remain elusive [Citation17–19].

Artificial intelligence (AI) is a form of advanced technology that has the potential to enable individually tailored, sustainable services that can independently adapt to the heterogeneity of aphasia and the changing needs of people with aphasia over time. AI advancements in recovery and self-management may have a critical role in narrowing the evidence-practice gap caused by healthcare funding shortfalls and financial limitations of families. However, the current application of AI to aphasia management is limited and has likely not been adequately utilised to improve service quality, access, and reach.

Therefore, to systematically explore research in this area, we conducted a scoping review with the aim of describing the use of AI in aphasia to date in terms of technology approaches, their application, and trends over time, and of identifying potential areas where AI can add value, to outline future avenues for research and evidence implementation.

Methods

Research design

This scoping review followed the process outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) statement for scoping reviews (Supplementary Appendix A) [Citation20].

Search strategy

Potential publications were identified by conducting a comprehensive search on PubMed and Google Scholar. Key terms used for the search were related to AI [“AI”, “artificial intelligence”, “machine learning”, “supervised learning”, “unsupervised learning”, “classification”, “mobile application”, “virtual reality”] and aphasia [“aphasia”, “dysphasia”, “anomia”]. Since certain mobile and virtual reality applications may incorporate AI features, the terms “mobile application” and “virtual reality” were included in the search; such publications were assessed for applicability in the screening stage.

These identified key terms for AI and aphasia were combined using AND. The search was carried out to gather studies from 1980 to early 2022. This process was conducted on 5 May 2022.
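Combining the AI and aphasia term sets as described, the overall query can be sketched as the following Boolean expression (a reconstruction from the listed terms, not necessarily the exact string submitted to each database):

```
("AI" OR "artificial intelligence" OR "machine learning"
 OR "supervised learning" OR "unsupervised learning" OR "classification"
 OR "mobile application" OR "virtual reality")
AND
("aphasia" OR "dysphasia" OR "anomia")
```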

Screening

Studies were included if they met the following eligibility criteria:

  1. directly related to aphasia (any aetiology: stroke, cancer, progressive, etc.)

  2. related to AI, where the technology applications incorporate AI functions

  3. full text available in English

  4. peer-reviewed, either research papers or conference abstracts

The screening was conducted by the first author (AA) and cross-checked by two further authors (NH and JEP) to confirm inclusion.

Data extraction and analysis

The data extraction process collected article characteristics such as published date, authors, title and the full text. Next, the full text was reviewed to determine the AI technology and techniques used and the objectives of the research. Data extraction for the objectives was conducted by two authors (NH and JEP) independently and discrepancies were resolved through discussion.

To explore the data at a granular level, we employed a data analysis technique using an interactive dashboard tool [Citation21]. The dashboard was created using the extracted dataset as the input, containing several attributes: date, authors, title, full text, and identified characteristics of the paper (objective, AI techniques and management stage). It allowed interactive visualisations that combined multiple attributes of the dataset, thereby highlighting associations, insights, and patterns. The developed dashboard was also used for the historical analysis of publications over time, to evaluate the evolution of different AI techniques and the progression of research objectives in aphasia. The dashboard has been published and can be accessed by readers (see the Data availability statement).
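As a minimal illustration of the kind of aggregation behind such a dashboard, the sketch below tallies a hypothetical extracted dataset by attribute; the field names and rows are invented for illustration and are not the review's actual data.

```python
from collections import defaultdict

# Hypothetical rows of the extracted dataset; the attribute names
# (year, technique, stage) mirror those described in the text.
papers = [
    {"year": 1994, "technique": "supervised learning", "stage": "Assessment"},
    {"year": 2016, "technique": "NLP", "stage": "Therapy"},
    {"year": 2018, "technique": "supervised learning", "stage": "Assessment"},
    {"year": 2021, "technique": "unsupervised learning", "stage": "Discovery"},
]

def tally(rows, key):
    """Count publications per value of a given attribute."""
    counts = defaultdict(int)
    for row in rows:
        counts[row[key]] += 1
    return dict(counts)

by_technique = tally(papers, "technique")
by_stage = tally(papers, "stage")
```

A dashboard tool such as Power BI performs the same kind of grouping interactively, cross-filtering one tally (e.g. by technique) against another (e.g. by management stage).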

Figure 1 demonstrates screen captures of the developed dashboard.

Figure 1. Screen capture of the data analysis dashboard.

A snapshot of an interactive dashboard created to visualise publication categories, AI techniques, keywords, research objectives and trends over time.

Results

Search results and selection

Figure 2 illustrates the search yield and study selection process. The search retrieved 194 publications: 107 from PubMed and 87 from Google Scholar. Once duplicate records were removed, 180 publications were screened. In the first phase of screening, 43 publications were excluded as they did not match the primary inclusion criteria: 24 studies were not related to aphasia and 19 studies were clinical trials without any use of AI. This resulted in 137 studies for which full texts were sought. Full texts were available for 127 studies. These were further screened by the authors to confirm eligibility; ultimately, 77 publications aligned with the defined inclusion criteria of the review. The reasons for rejection are shown in Figure 2.

Figure 2. PRISMA flow chart used for the scoping review.

The PRISMA-Scoping Review flow chart showing the process followed to select studies for the review.

The agreement between the two authors during extraction of the research objectives was 58/77 (75%) after the initial round (κ = 0.73, 95% CI 0.62–0.85). This level of agreement is considered moderate; it was challenging to achieve because determining the objectives of the research as they relate to aphasia required some inference. For example, papers that aimed to improve automatic speech recognition may have done so for the purpose of assessment and diagnosis or for therapy feedback, but more often the long-term aim was not explicitly stated. Following a consensus discussion, agreement was 100%.
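Cohen's kappa corrects raw agreement for the agreement two raters would reach by chance given their label frequencies. A minimal sketch of the calculation follows; the labels below are illustrative, not the review's actual rating data.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater labelled independently
    # according to their own marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative management-stage labels from two hypothetical raters.
a = ["Assessment", "Assessment", "Therapy", "Discovery", "Assessment", "Therapy"]
b = ["Assessment", "Therapy",    "Therapy", "Discovery", "Assessment", "Therapy"]
kappa = cohen_kappa(a, b)
```

Perfect agreement yields κ = 1, while agreement no better than chance yields κ = 0.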

Overview of results

We present the findings of this review first chronologically, outlining the thematic and technical progression of research over time, and then summarise the distribution of AI technology type and the “stages” of aphasia management according to the research objectives.

The evolution of AI applications to aphasia

We identified four time “epochs” of the application of AI to aphasia, illustrated in Figure 3. The earliest retrieved studies (1984–1993) discussed automated therapy software, focusing on the possibility of the software making clinical decisions about therapy tasks and stimuli, and then updating decisions according to patient performance. Some early software prototypes were described, but these were limited to text displays and written input. The practical application of AI technology for aphasia commenced around 1994. From 1994 to 2005, several research studies applied AI techniques to aphasia identification and subtype diagnosis, recovery prediction, early prototypes of assistive software, and computational models of language. Between 2006 and 2014, a broader range of AI approaches were applied to the assessment and differential diagnosis of aphasia and its subtypes. Speech analysis for the purposes of automated treatment feedback was first attempted in this epoch, as was the use of machine learning for lesion-symptom mapping. Most recently (2015–2022), research has moved towards automation and advanced AI, with a wider range of applications emerging, such as conversational agents (chatbots), rehabilitation software, and affect recognition. Advances have been made in deep learning and language models, and these are being more extensively applied within aphasia.

Figure 3. The evolution of AI in Aphasia research.

A diagram to show the evolution of AI in Aphasia over time with significant time periods/ research objectives and AI techniques used.

AI technology approaches in aphasia

Figure 4 shows a timeline of publications, with labels where specific AI techniques first emerged. The growth in publications since the earliest study retrieved (1984) is approximately exponential and mirrors the progression of AI technology over time. For example, with the rise of deep learning and language models after 2015, aphasia researchers adopted deep neural networks such as recurrent neural networks and convolutional neural networks [Citation22–25]. These networks can model sequential data (such as speech) and remember previous outputs as inputs for the next step. They have been used specifically to improve speech recognition and automated speech assessment. Below, the results are summarised in relation to the main branches of AI: supervised learning, unsupervised learning, natural language processing (NLP), fuzzy rules and optimisation (Figure 5).

Figure 4. The evolution of AI techniques.

The trend chart to show the evolution of different AI techniques over time.

Figure 5. The AI landscape in Aphasia research.

A mindmap to show the AI landscape in Aphasia research in terms of different techniques.

Supervised learning

The majority of studies in this review (69%) used supervised machine learning models for predictions and classifications. Since 2015, there has been a significant rise in deep learning models, comprising deep, recurrent, and convolutional neural networks that can handle large volumes of high-dimensional data. However, supervised machine learning models depend on labelled datasets, which can be challenging to acquire in real-world settings, including in aphasia.
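As a toy illustration of the supervised paradigm, the sketch below classifies invented two-feature profiles with a k-nearest-neighbour rule trained on labelled examples. The features (a fluency score and a comprehension score) and the labelled training points are assumptions for illustration only, not data or a method from any reviewed study.

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def knn_predict(train, query, k=3):
    """Predict the majority label among the k nearest labelled examples."""
    neighbours = sorted(train, key=lambda pair: dist2(pair[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Invented (fluency, comprehension) feature pairs with subtype labels.
train = [
    ((0.2, 0.8), "Broca"), ((0.3, 0.7), "Broca"), ((0.1, 0.9), "Broca"),
    ((0.9, 0.3), "Wernicke"), ((0.8, 0.2), "Wernicke"), ((0.7, 0.4), "Wernicke"),
]

prediction = knn_predict(train, (0.25, 0.75))
```

The dependence on the labelled `train` set is exactly the constraint noted above: without reliable annotations, a supervised model has nothing to learn from.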

Natural language processing

NLP was a smaller, but growing, AI approach within retrieved studies (18/77), with the majority of these studies published since 2014 (12/18). From lexical analysis and basic linguistic analysis techniques, the application of NLP has recently progressed towards language models, sentiment analysis, and chatbot technology that uses deep learning and novel natural language understanding techniques.

Unsupervised learning

Unsupervised machine learning techniques do not rely on labelled datasets; instead, the algorithms can self-learn and adapt based on the characteristics of the data. Although this provides abundant research opportunities, the use of unsupervised machine learning in aphasia research has been limited to date: just 12% of papers in this review. Until the last decade, unsupervised learning was mostly used for data exploration, clustering, and dimensionality reduction. However, recent research has exploited the ability of unsupervised learning to simulate semantic and phonological learning representations using self-organising maps (SOMs) [Citation26], highlighting novel opportunities for these approaches in aphasia research.

Fuzzy rules and optimisation

Fuzzy rules and optimisation techniques were prominent before 2015 but are less used in present-day research.

The use of AI relating to stages of aphasia management

The objectives of each study were mapped onto stages of aphasia management during data extraction. We defined three stages as Assessment, Therapy, and Self-Management to enable an overview of how AI has been used. An additional stage, Discovery, was defined for early or theoretical work that did not nominate an explicit real-world application. Table 1 shows the publication counts based on the categorisation derived in this study; counts for each stage are shown in bold italics.

Table 1. Publication counts by categorisation of research objectives.

The majority of the studies (35/77) in the review yield focused on using AI for diagnostic purposes, with the goals of automating the detection of aphasia, the diagnosis of a subtype, or the classification of aphasia severity. Data used for diagnosis were typically audio recordings, assessment results or language transcriptions. Seven studies explored the use of AI to enhance speech analysis within aphasia – a challenging task for standard speech recognition engines [Citation27]. In three, the purpose was to improve the accuracy of feature recognition from acoustic speech samples in order to improve the identification or classification of aphasia [Citation25,Citation28,Citation29]. The remaining four aimed to provide automated feedback within therapy software, including automated judgements of utterance quality [Citation25] or within immersive VR [Citation30].

Three papers explored the use of AI to predict individual responses to aphasia therapy from data sources including demographic, behavioural and imaging data [Citation26,Citation31]. In one study, a computational model of bilingual language representations was created based on self-organising maps, a lesion was simulated, and predictions of therapy response were tested against clinical data, with promising results [Citation26].

Seven retrieved studies simulated aspects of normal or disrupted language processing through computational models, in order to understand the neural foundations of language representations and aphasic errors [Citation32–38]. Another group of studies examined the neurological correlates of aphasia symptoms through analysis of combined neuroimaging modalities (structural or functional) and pathology data compared to clinical presentations [Citation39–42]; others went further and explored machine-derived aphasia subtypes from the same data – these three studies focused on Primary Progressive Aphasia subtypes [Citation43–45].

Discussion

To our knowledge, there have been no previous explorations of the application of AI to aphasia rehabilitation. Therefore, we undertook a scoping review of studies related to the use of AI in aphasia rehabilitation to describe current applications and highlight future directions for research. The total yield of 77 studies is relatively low given the potential applications of AI in aphasia. A limitation of this scoping review is that it excludes applications of AI within industry that are not reported in the literature. It is probable that AI-based assistive technology developed for the general population is being utilised by PWA and not reported in research; for example, voice assistants/smart speakers use NLP to understand the requests of users, and these have been co-opted by people with other communication disorders for various purposes [Citation46].

Assessment and diagnosis of aphasia was, by far, the most researched use of AI. This might be explained by the fact that categorisation of data, especially into categories developed by humans, is a relatively “clean” problem compared to the complexities of aphasia treatment and real-life communication. Nonetheless, it seems that no assessment systems are yet implemented in clinical practice. One potential barrier could be that publicly available aphasia data (speech and videos) are often not comprehensively annotated given the complexities associated with impaired language as well as the practical difficulties with managing large volumes of data that are being collected. Although there are several features (acoustics, linguistics, facial expressions, gestures) that can be derived from these data, the lack of annotated data poses challenges in developing supervised machine learning models. Efficiency and cost-effectiveness would need to be demonstrated before these systems are implemented.

A small number of studies used AI to explore prognosis, either predicting general recovery of aphasia severity or response to therapy. Several projects are analysing large-scale datasets to improve the accuracy of prognostic algorithms, combining neuroimaging, clinical data and demographic factors [Citation47]. Machine learning is one method of analysing these highly complex, interacting data and applying them to new cases of aphasia.

While many studies in this review trained machine learning models for differentiating traditional aphasia subtypes, an alternative approach explored in a smaller number of studies was unsupervised learning of subtypes. In this approach, no labels were provided, and the machine identified groupings of aphasia cases using patterns in any relevant dimensions – these are not necessarily intuitive to humans. Landrigan et al. (2021) found three profiles in post-stroke aphasia that were primarily distinguished by semantic and phonological processing, with traditional features such as fluency or expressive/receptive abilities distributed across clusters [Citation45]. Within primary progressive aphasia (PPA), initial results suggest there could be five or six distinct PPA subtypes, as opposed to the three clinically derived subtypes currently in use [Citation43]. This data-driven approach may ultimately help explain the heterogeneity of treatment response in aphasia by linking clinical presentation more closely to neuropathology.
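A minimal sketch of this data-driven clustering idea follows, using a naive k-means over invented two-dimensional profiles (e.g. a semantic score and a phonological score). The data, the deterministic initialisation, and the features are simplifying assumptions for illustration; the reviewed studies used richer behavioural and imaging data and more robust algorithms.

```python
def dist2(a, b):
    """Squared Euclidean distance between two profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Naive k-means: no labels, groups emerge from the data alone."""
    # Simplistic deterministic initialisation: the first k points.
    centroids = [points[i] for i in range(k)]
    for _ in range(iters):
        # Assign each profile to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned profiles.
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = tuple(sum(c) / len(members) for c in zip(*members))
    return centroids, clusters

# Two invented groups: low semantic/phonological scores vs high scores.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0),
          (10.0, 10.0), (11.0, 10.0), (10.0, 11.0), (11.0, 11.0)]
centroids, clusters = kmeans(points, k=2)
```

The key property mirrored here is that no subtype labels are supplied: the groupings, like the machine-derived profiles described above, are induced purely from structure in the data.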

Interestingly, while the earliest papers conceptualised therapy software that would emulate clinical decision-making, few retrieved studies focused on applications to aphasia treatment. There is a shortage of funding for aphasia intervention relative to the number of people living with aphasia [Citation48], and intervention is an important area where AI could produce adaptive, personalised therapy software that requires less direct clinician input [Citation49]. For example, while traditional therapy software can use fixed rules to alter the difficulty of the task according to the accuracy and response time of the user [Citation50], as initially envisaged by Katz (1990) [Citation51], AI can allow nuanced adjustments based on a deep understanding of the tasks, stimuli, patient profiles and other dimensions [Citation50]. Constant Therapy is one example where such a system has been implemented in broadly used commercial therapy software [Citation49]. The ability to make ongoing adjustments as aphasia symptoms change over time is important for self-managed therapy or maintenance tasks, particularly when clinician input is not available. Such self-managed maintenance after discharge from therapy services is now recognised as crucial to maintaining gains made in therapy [Citation52].

Recently, several AI technologies have advanced considerably and could have applications for therapy software, immediately and into the future as the technologies progress. NLP is now able to both process and generate complex language of considerable length. Language models built using transformers, such as GPT-3 [Citation53], allow the AI to attend to meaning beyond the phrase or sentence levels, even across paragraphs, allowing a greater understanding of context and referential language [Citation54]. Currently, most therapy software is focused on the word level, or where sentences are used, stimuli and correct responses are pre-programmed. NLP advancements open the door to AI software generating sensible sentence- or paragraph-level stimuli and accurately checking the patient’s comprehension against its own parsing.

NLP also has the potential to enhance virtual therapists. Virtual therapists have been developed and trialled within aphasia [Citation37,Citation38], and NLP could allow more natural, open-ended, chatbot-style interaction with users. In time, AI-enabled virtual therapists could provide training and correction of conversational language in real time. While virtual reality and virtual therapists are in use within aphasia, they are currently being used independently. The integration of these components with appropriate data fusion techniques could be used to maximise authentic practice opportunities [Citation29]. The integration of chatbot interaction with language models, the most notable example currently being ChatGPT [Citation53], also points to future opportunities for clinicians and those with less severe aphasia to request specific and personalised materials for rehabilitation and practice. For example, GPT-3 and ChatGPT are both capable of generating a list of common sentences using the word “o’clock”. These language models are trained on a broad range of texts but can be fine-tuned with more specific training. Similarly, text-to-image generation models may become increasingly accurate at generating visual stimuli suitable for assessment, confrontation naming or communication purposes (Note 2). Both text and images generated by AI have the advantage of being copyright free in most cases. In the near future, multimodal models that can generate material across images, text, video and audio are likely to become available and could be used to create richer and more stimulating aphasia treatment materials. As the sophistication, accuracy and usability of AI tools expand, more users will be able to access the benefits of AI without requiring programming or technical skills.

In addition to the small number of therapy-focused papers in this review, few studies explored self-management or assistive options, and those that were explored do not appear to have progressed to real-world implementation. The barriers are not clear from the literature, but PWA have identified that existing technology offers a range of options for self-management [Citation55]. AI could potentially further assist in communicatively complex situations. Using NLP, language models can process complex texts and generate summaries with high readability; with training, this process could automate aphasia-friendly text production. Language models can also recognise and correct grammatical errors and, with a greater understanding of topic and context, may be able to accurately identify and correct paraphasias and other language-based errors. Future applications in aphasia could leverage advanced NLP techniques such as transformers, language models and conversational AI to build more robust, customisable alternative and augmentative communication applications to assist with communication difficulties. This would allow the software to learn patterns at an individual level and compensate accordingly.

Another largely unexplored application of AI in aphasia is the emotional domain. Recognising emotional health is crucial in PWA, who are at high risk of depression and anxiety [Citation56]. However, in our review, only one AI study explored the assessment of the emotions of PWA [Citation57]. AI has been used successfully to recognise a variety of mental health disorders in non-aphasic speakers [Citation58] and could allow early flagging of mood disorders in PWA based on combined analysis of physical and social activity, facial recognition, eye tracking [Citation59] and linguistic data. Chatbots and conversational agents could then be used to escalate emerging mental health concerns to relevant healthcare professionals for management.

Challenges remain in applying AI to aphasia. For example, speech recognition is complex, with additional challenges within aphasia being the need to recognise and appropriately take account of paraphasic errors, neologisms, revisions, greater pause times and agrammatism. However, as the sophistication of AI technology advances, it will become more suited to the complexity of communication in PWA. One avenue to improve accuracy is the fusion of data from multiple modalities. For example, while analysis of auditory data alone may provide imperfect word recognition, the addition of facial and gesture recognition, emotion capture and understanding of the visual surrounding of the person may provide enough additional context to improve accuracy. This will lead to applications with a better understanding of individual users, employing concepts such as human-centric AI [Citation60] and digital twins [Citation61].

Digital inclusion is a key factor that needs to be considered for PWA to ensure accessibility of AI-enabled solutions [Citation62]. Co-design of solutions alongside PWA is one way to ensure the usability and accessibility of the final products [Citation63,Citation64] thereby providing personalised rehabilitation, management and care options for PWA.

Conclusion

Over time, the use of AI in aphasia research has broadened in terms of the technology used as well as how it has been applied to aphasia. Although AI has progressed exponentially over time, the implementation of AI into aphasia management is relatively slow-paced. Many PWA see technology ‘as an enabler of self-management, autonomy and life participation’ [Citation16], and the ongoing enhancement of AI within aphasia could facilitate access to personalised care and expand opportunities for self-management.

Supplemental material


Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available via an interactive dashboard hosted at https://bit.ly/3hzLZL3

Additional information

Funding

This work was supported by the National Health and Medical Research Council Ideas [Grant APP2003210].

Notes

2 Current prominent examples include DALL-E 2, Stable Diffusion and Imagen.

References

  • Pedersen PM, Jørgensen HS, Nakayama H, et al. Aphasia in acute stroke: incidence, determinants, and recovery. Ann Neurol. 1995;38(4):659–666.
  • Simmons-Mackie N, Cherney LR. Aphasia in North America: highlights of a white paper. Arch Phys Med Rehabil. 2018;99(10):e117.
  • Code C, Herrmann M. The relevance of emotional and psychosocial factors in aphasia to rehabilitation. Neuropsychol Rehabil. 2003;13(1-2):109–132.
  • Ebrahim S, Barer D, Nouri F. Affective illness after stroke. Br J Psychiatry. 1987;151:52–56.
  • Engelter ST, Gostynski M, Papa S, et al. Epidemiology of aphasia attributable to first ischemic stroke: incidence, severity, fluency, etiology, and thrombolysis. Stroke. 2006;37(6):1379–1384.
  • Gainotti G. Emotional, psychological and psychosocial problems of aphasic patients: an introduction. Aphasiology. 1997;11(7):635–650.
  • Laska AC, Hellblom A, Murray V, et al. Aphasia in acute stroke and relation to outcome. J Intern Med. 2001;249(5):413–422.
  • Paolucci S, Antonucci G, Pratesi L, et al. Functional outcome in stroke inpatient rehabilitation: predicting no, low and high response patients. Cerebrovasc Dis. 1998;8(4):228–234.
  • Tilling K, Sterne JAC, Rudd AG, et al. A new method for predicting recovery after stroke. Stroke. 2001;32(12):2867–2873.
  • Ellis C, Simpson AN, Bonilha H, et al. The one-year attributable cost of post stroke aphasia. Stroke. 2012;43(5):1429–1431.
  • van der Gaag A, Brooks R. Economic aspects of a therapy and support service for people with long-term stroke and aphasia. Int J Lang Commun Disord. 2008;43(3):233–244.
  • Rose M, Ferguson A, Power E, et al. Aphasia rehabilitation in Australia: current practices, challenges and future directions. Int J Speech Lang Pathol. 2014;16(2):169–180.
  • Repetto C, Paolillo MP, Tuena C, et al. Innovative technology-based interventions in aphasia rehabilitation: a systematic review. Aphasiology. 2021;35(12):1623–1646.
  • Weidner K, Lowman J. Telepractice for adult speech-language pathology services: a systematic review. Perspect ASHA SIGs. 2020;5(1):326–338.
  • Nichol L, Hill AJ, Wallace SJ, et al. Self-management of aphasia: a scoping review. Aphasiology. 2019;33(8):903–942.
  • Nichol L, Pitt R, Wallace SJ, et al. “There are endless areas that they can use it for”: speech-language pathologist perspectives of technology support for aphasia self-management. Disabil Rehabil Assist Technol. 2022;0:1–16.
  • Jordan N, Deutsch A. Why and how to demonstrate the value of rehabilitation services. Arch Phys Med Rehabil. 2022;103(7S):S172–S177.
  • Harvey SR, Carragher M, Dickey MW, et al. Treatment dose in post-stroke aphasia: a systematic scoping review. Neuropsychol Rehabil. 2021;31(10):1629–1660.
  • Watila MM, Balarabe B. Factors predicting post-stroke aphasia recovery. J Neurol Sci. 2015;352:12–18.
  • Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–473.
  • What is Power BI? Microsoft Power BI; 2022. Available from: https://powerbi.microsoft.com/en-us/what-is-power-bi/.
  • An efficient deep learning based method for speech assessment of Mandarin-speaking aphasic patients. IEEE Xplore; 2022. Available from: https://ieeexplore.ieee.org/document/9146149.
  • Qin Y, Lee T, Feng S, et al. Automatic speech assessment for people with aphasia using TDNN-BLSTM with multi-task learning. Interspeech. 2018. DOI:10.21437/Interspeech.2018-1630
  • Fraser KC, Meltzer JA, Graham NL, et al. Automated classification of primary progressive aphasia subtypes from narrative speech transcripts. Cortex. 2014;55:43–60.
  • Qin Y, Wu Y, Lee T, et al. An end-to-End approach to automatic speech assessment for Cantonese-speaking people with aphasia. J Sign Process Syst. 2020;92(8):819–830.
  • Grasemann U, Peñaloza C, Dekhtyar M, et al. Predicting language treatment response in bilingual aphasia using neural network-based patient models. Sci Rep. 2021;11(1):10497.
  • Barbera DS, Huckvale M, Fleming V, et al. NUVA: a naming utterance verifier for aphasia treatment. Comput Speech Lang. 2021;69:101221.
  • Aishwarya J, Kundapur PP, Kumar S, et al. Kannada speech recognition system for aphasic people. In: 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Bangalore, India. 2018. p. 1753–1756.
  • Le D, Licata K, Mercado E, et al. Automatic analysis of speech quality for aphasia treatment. In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy. 2014. p. 4853–4857.
  • Egaji O, Asghar I, Griffiths M, et al. Digital speech therapy for the aphasia patients: challenges, opportunities and solutions. 2019. p. 85–88.
  • Gu Y, Bahrani M, Billot A, et al. A machine learning approach for predicting post-stroke aphasia recovery: a pilot study. In: Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments [Internet]. New York, NY, USA: Association for Computing Machinery; 2020. p. 1–9.
  • Gigley HM. Studies in artificial aphasia: experiments in processing change. Comput Methods Programs Biomed. 1986;22(1):43–50.
  • Wright JF, Ahmad K. The connectionist simulation of aphasic naming. Brain Lang. 1997;59(2):367–389.
  • Monaghan P, Shillcock R. Connectionist modelling of the separable processing of consonants and vowels. Brain Lang. 2003;86(1):83–98.
  • Järvelin A, Juhola M, Laine M. Applying machine learning methods to aphasic data [academic dissertation]. Tampere: Department of Computer Sciences, University of Tampere; 2008.
  • Westermann G, Ruh N. A neuroconstructivist model of past tense development and processing. Psychol Rev. 2012;119(3):649–667.
  • Järvelin A, Juhola M, Laine M. A neural network model for the simulation of word production errors of Finnish nouns. Int J Neural Syst. 2006;16(4):241–254.
  • Gigley HM, Duffy JR. The contribution of clinical intelligence and artificial aphasiology to clinical aphasiology and artificial intelligence. In: Clinical Aphasiology: Proceedings of the Conference, BRK Publishers; 1982. p. 170–177.
  • Zhang Y, Kimberg DY, Coslett HB, et al. Multivariate lesion-symptom mapping using support vector regression. Hum Brain Mapp. 2014;35(12):5861–5876.
  • Spinelli EG, Mandelli ML, Miller ZA, et al. Typical and atypical pathology in primary progressive aphasia variants. Ann Neurol. 2017;81(3):430–443.
  • Yuan B, Zhang N, Yan J, et al. Resting-state functional connectivity predicts individual language impairment of patients with left hemispheric gliomas involving language network. Neuroimage Clin. 2019;24:102023.
  • Kristinsson S, Zhang W, Rorden C, et al. Machine learning-based multimodal prediction of language outcomes in chronic aphasia. Hum Brain Mapp. 2021;42(6):1682–1698.
  • Matias-Guiu JA, Díaz-Álvarez J, Cuetos F, et al. Machine learning in the clinical and language characterisation of primary progressive aphasia variants. Cortex. 2019;119:312–323.
  • Álvarez JD, Matias-Guiu JA, Cabrera-Martín MN, et al. An application of machine learning with feature selection to improve diagnosis and classification of neurodegenerative disorders. BMC Bioinformatics. 2019;20(1):491.
  • Landrigan J-F, Zhang F, Mirman D. A data-driven approach to post-stroke aphasia classification and lesion-based prediction. Brain. 2021;144(5):1372–1383.
  • Kulkarni P, Duffy O, Synnott J, et al. Speech and language practitioners’ experiences of commercially available voice-assisted technology: web-based survey study. JMIR Rehabil Assist Technol. 2022;9(1):e29249.
  • Seghier ML, Patel E, Prejawa S, et al. The PLORAS database: a data repository for predicting language outcome and recovery after stroke. NeuroImage. 2016;124(Pt B):1208–1212.
  • Code C, Petheram B. Delivering for aphasia. Int J Speech Lang Pathol. 2011;13(1):3–10.
  • Constant Therapy’s neuroperformance engine. Constant Therapy Health; 2022. Available from: https://constanttherapyhealth.com/science/technology/.
  • Dadgar-Kiani E, Anantha V. Continuously-encoded deep recurrent networks for interpretable knowledge tracing in Speech-Language and cognitive therapy. medRxiv; 2020;2020-11. DOI:10.1101/2020.11.08.20206755
  • Katz RC. Intelligent computerized treatment or artificial aphasia therapy? Aphasiology. 1990;4(6):621–624.
  • Menahemi-Falkov M, Breitenstein C, Pierce JE, et al. A systematic review of maintenance following intensive therapy programs in chronic post-stroke aphasia: importance of individual response analysis. Disabil Rehabil. 2022;44(20):5811–5826.
  • Introducing ChatGPT [Internet]; 2023. Available from: https://openai.com/blog/chatgpt.
  • Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. arXiv; 2017. http://arxiv.org/abs/1706.03762.
  • Nichol L, Wallace SJ, Pitt R, et al. People with aphasia share their views on self-management and the role of technology to support self-management of aphasia. Disabil Rehabil. 2021;0:1–14.
  • Kauhanen ML, Korpelainen JT, Hiltunen P, et al. Aphasia, depression, and non-verbal cognitive impairment in ischaemic stroke. Cerebrovasc Dis. 2000;10(6):455–461.
  • Gillespie S, Laures-Gore J, Moore E, et al. Identification of affective state change in adults with aphasia using speech acoustics. J Speech Lang Hear Res. 2018;61(12):2906–2916.
  • Adikari A, Gamage G, de Silva D, et al. A self-structuring artificial intelligence framework for deep emotions modeling and analysis on the social web. Future Gener Comput Syst. 2021;116:302–315.
  • Ashaie SA, Cherney LR. Eye tracking as a tool to identify mood in aphasia: a feasibility study. Neurorehabil Neural Repair. 2020;34(5):463–471.
  • Rožanec JM, Novalija I, Zajec P, et al. Human-centric artificial intelligence architecture for industry 5.0 applications. arXiv; 2022. http://arxiv.org/abs/2203.10794.
  • Hassani H, Huang X, MacFeely S. Impactful digital twin in the healthcare revolution. BDCC. 2022;6(3):83.
  • Menger F, Morris J, Salis C. Aphasia in an internet age: wider perspectives on digital inclusion. Aphasiology. 2016;30:112–132.
  • Galliers J, Wilson S, Roper A, et al. Words are not enough: empowering people with aphasia in the design process. In: Proceedings of the 12th Participatory Design Conference Research Papers, Vol 1. 2012; New York, NY, USA: Association for Computing Machinery. p. 51–60.
  • Wilson S, Roper A, Marshall J, et al. Codesign for people with aphasia through tangible design languages. CoDesign. 2015;11(1):21–34.