Original Articles

A multimodal analysis of enactment in everyday interaction in people with aphasia

Pages 1441-1461 | Received 12 Feb 2019, Accepted 15 Jul 2019, Published online: 31 Jul 2019

ABSTRACT

Background: “Multimodal communication” is a relatively common term in aphasia research. However, the scope of studies on multimodal interaction in aphasia is generally restricted to one or two multimodal resources, and the type of discourse analysed is often not representative of authentic interaction. Finally, the interpersonal (versus referential) functions of multimodal resources are frequently overlooked.

Aims: The purpose of this study was to explore the multimodal realisation of enactments by people with aphasia in everyday interaction.

Methods & Procedures: Authentic interactions of six people with aphasia interacting with communication partners of their choice were systematically analysed. Frameworks originating from studies of non-brain-damaged interaction were applied to examine the characteristics and functions of the linguistic, multimodal, and stance-taking resources used to realise enactments.

Outcomes & Results: Even though the participants used the same multimodal resources as non-brain-damaged communicators, the frequencies and characteristics were different. The relationship between multimodal resources and interpersonal functions was different as well.

Conclusions: People with aphasia use the same multimodal resources as non-brain-damaged communicators, indicating their retained strengths. However, their higher use of intonation, gesture, and – to a lesser extent – facial expression indicates that these may be important “meaning making” resources for them, which could be utilised more in therapeutic endeavours.

Introduction

The use of “non-verbal” resources has received considerable attention in aphasia research. Previous studies have examined the forms, potential, and/or utility of gestures (e.g., Hogrefe, Ziegler, Weidinger, & Goldenberg, Citation2017; Pritchard, Dipper, Morgan, & Cocks, Citation2015; Rose, Citation2006; van Nispen, van de Sandt-Koenderman, Sekine, Krahmer, & Rose, Citation2017), pantomime (e.g., Nispen, Sandt-Koenderman, Mol, & Krahmer, Citation2014), and pointing (e.g., Klippi, Citation2015). Even though these studies have provided invaluable insights into communication abilities other than spoken language in aphasia, many other modalities, such as gaze, facial expression, and posture, have often been overlooked. Studies grounded in Conversation Analysis principles, on the other hand, do acknowledge the importance of, for example, intonation, facial expression (e.g., Fromm et al., Citation2011; Laakso, Citation2014; Lind, Citation2002), and “body language” (Fromm et al., Citation2011, p. 1434). However, the frameworks applied in these studies tend to be of a more descriptive nature because of their qualitative, data-driven approach, in which research questions originate from the data itself rather than being posed prior to the analysis. In aphasia therapy, “multi-modal therapy” – involving either compensation techniques when spoken communication fails to be restored, or facilitation techniques to re-establish language and speech – usually refers to the use of drawing, gesture, reading, and writing (e.g., Rose, Attard, Mok, Lanyon, & Foster, Citation2013; Rose, Mok, Carragher, Katthagen, & Attard, Citation2016). However, even though the term is used frequently in the aphasia literature, there is no consensus regarding its definition (Pierce, O’Halloran, Togher, & Rose, Citation2018).

Another striking characteristic of the studies of multimodal communication in aphasia carried out so far is their approach to function. All communication is, and has always been, multimodal (Kress & Van Leeuwen, Citation1996). Therefore, analyses focused solely or primarily on language cannot adequately account for meaning in interaction (Jewitt, Citation2014). However, the traditional opposition of “verbal” and “non-verbal” communication in aphasia research presumes that the verbal is primary and therefore the “most important” (Jewitt, Bezemer, & O’Halloran, Citation2016). Just as in non-brain-damaged (NBD) research, the point of reference and focus of analysis in aphasia research has traditionally hinged on speech, its central units usually being linguistic units (e.g., “intonation units”) or units defined in linguistic terms (e.g., a “turn” is defined in terms of “who is speaking”) (Bezemer & Jewitt, Citation2010). As such, modes of communication other than language are considered accompaniments, supports, or even substitutes for language rather than modes that are crucial to understanding communication.

This is clearly illustrated by Kong, Law, and Chak’s (Citation2017) discussion of the “functional roles” (p. 2032) of gestures in communication. Kong et al. (Citation2017) distinguish eight functions of gestures, all of which reflect their contribution to the semantic content of communication (e.g., providing “additional information to the message being conveyed”, “enhancing speech content”, “providing alternative means of communication” (p. 2032, our italics)); gestures are considered helpers to speech. Interpersonal functions – referring to the fact that speakers not only talk about something but are always talking to and with others with a particular perspective or stance in mind – are considered “nonspecific” or “noncommunicative” (p. 2032). This overview of potential functions of gestures in interaction involving people with aphasia (PWA) demonstrates how little is known about the role multimodal communication plays in aphasia. As stressed by Adami (Citation2016) and Jewitt (Citation2014), the examination of relations among modes is considered key to understanding any instance of communication.

The current study

The current study was designed to explore the relations among interactional modes in everyday interactions involving PWA. To do so, a discourse phenomenon in which multimodality has been argued to play a crucial role was assessed. Various terms have been introduced to refer to this phenomenon, such as demonstration (Clark & Gerrig, Citation1990), constructed dialogue (Debras, Citation2015), fictive interaction (Stec, Citation2016), and enactment (Groenewold & Armstrong, Citation2018; Kindell, Sage, Keady, & Wilkinson, Citation2013; Wilkinson, Beeke, & Maxim, Citation2010). Since enactment best reflects the multimodal nature of the phenomenon under study, this term will be applied here.

Enactment

When enacting, communicators depict to recipients aspects of a reported scene or event by employing direct reported speech and/or other behaviour such as gesture, body movement and/or prosody (Goodwin, Citation1990; Streeck & Knapp, Citation1992; Wilkinson et al., Citation2010). As such, enactment provides clear opportunities for multimodal communication. Previous research has shown that in PWA the use of enactment is not only preserved (Hengst, Frame, Neuman-Stritzel, & Gannaway, Citation2005; Ulatowska & Olness, Citation2003; Ulatowska, Reyes, Santos, & Worle, Citation2011; Wilkinson et al., Citation2010) but also increased (Berko Gleason et al., Citation1980; Groenewold, Bastiaanse, & Huiskes, Citation2013). One of the candidate explanations for this increase is that enactment is usually heavily marked with paralinguistic and non-linguistic behaviours, allowing the PWA to add information to talk that would otherwise be too complex to put into words (Groenewold & Armstrong, Citation2018; Günthner, Citation1999; Hengst et al., Citation2005). In other words: enactment could be a complementary or even compensatory device for PWA (Groenewold, Citation2015; Wilkinson et al., Citation2010). The current study will further explore the characteristics of this interactional resource for people with aphasia.

In Transcript 1, an example is provided of a PWA (H, all initials are pseudonyms) using an enactment to explain to his friend (F) how other members of an association used to look at him when he struggled with his language.

Transcript 1:

1.   H:     and then eh

2.          and then eh

3.          well eh ((mimics facial expression))

4.  F:      they look [at you]

5.  H:     [yes yes yes]

Enactments are, by their very nature, interpersonal: there is a point to a person presenting information in a certain way, and that way of presenting conveys the enacting communicator’s stance on the information. By conveying evaluative, modalising meanings rather than referential content (Olness & Englebretson, Citation2011), enactment goes beyond the use of multimodal resources for concrete, compensatory information purposes only.

Since the importance of multimodal resources – and the way in which they are mutually combined – has been described more extensively in the literature on typical interaction, frameworks originating from studies of non-brain-damaged (NBD) interaction will be applied. Below, the characteristics that have been shown to be relevant to the study of enactment in aphasic and/or NBD interaction are discussed. Attention will be paid to the linguistic characteristics, the multimodal characteristics, and the relationship between the multimodal realisation and the interpersonal functions of enactments.

Linguistic characteristics of enactment

Previous research showed that enactments produced by PWA are often notable in terms of the distinctive grammatical practices within which they are produced (Wilkinson et al., Citation2010). More specifically, people with agrammatic aphasia exhibit a preference for enactments without a reporting verb and/or a person reference (“bare” enactments, e.g., Transcript 1). Such enactments are grammatically relatively easy to produce (Groenewold et al., Citation2013). Person references (i.e., a personal pronoun or a name), and reporting verbs (e.g., say, be like, whisper, shout, etc.) can be present or absent (independently of each other). Another characteristic is the question of who is being enacted (e.g., self, addressee, absent third party, prototypical person, etc.).

Multimodal characteristics of enactment

Stec (Citation2016) identified five multimodal articulators (i.e., intonation, gesture, facial expression, speaker gaze, and body posture) that play a role in the realisation of enactments by NBD communicators. Relevant literature on these articulators is discussed below. The categorisation system will be further discussed in the Methods section.

Intonation

Even though intonation has not yet been addressed systematically in the aphasia literature, some studies have acknowledged its importance. Disturbed prosody, for example, has been identified as one of the characteristics of agrammatism (e.g., Seddoh, Citation2004). Furthermore, Goodwin (Citation2010) described a man with a vocabulary of only three words who could combine lexico-syntactic structure with prosody in such a way as to create a whole greater than any of its parts. In a Norwegian case study, Lind (Citation2002) showed that prosody can play an important role in the pragmatic contextualisation of direct reported speech. Altogether, these studies demonstrate that prosody can play an important role in meaning-making in communication in PWA.

Gesture

Despite the broad potential relevance of gesture in aphasia, the topic is often claimed to be understudied (see Linnik, Bastiaanse, & Höhle, Citation2016 for an overview). Whereas most researchers agree that the processes underlying gesture and language production are shared or closely related (Dipper, Cocks, Rowe, & Morgan, Citation2011; Goodwin, Citation2000; Mol, Krahmer, & van de Sandt-Koenderman, Citation2013), others suggest that the gesture system can remain functional even when language production is severely impaired (e.g., Akhavan, Göksun, & Nozari, Citation2018). It has been shown that a large proportion of the gestures produced by most PWA are crucial for understanding their communication (van Nispen et al., Citation2017). Strikingly, even though gestures play an important role in everyday communication (Kendon, Citation1997), most studies of the role of gestures in communication involving PWA have relied on discourse elicitation tasks such as semi-structured conversations (e.g., van Nispen et al., Citation2017), procedural discourses (e.g., Pritchard et al., Citation2015), responses to a communicative scenario (e.g., Mol et al., Citation2013), video clip retellings (e.g., Hogrefe, Ziegler, Wiesmayer, Weidinger, & Goldenberg, Citation2013), story retell samples (e.g., Sekine & Rose, Citation2013), cartoon descriptions (e.g., Dipper et al., Citation2011), or personal narratives (e.g., Sekine, Rose, Foster, Attard, & Lanyon, Citation2013) rather than authentic interactions. As a consequence, the meaning-making potential of gestures in conversations involving PWA remains understudied. These studies nonetheless suggest that gestures play an important role in meaning-making in individuals with aphasia.

Facial expression

Little is known about facial expression in aphasia, since only a few studies on this topic have been carried out. Furthermore, when facial expression in aphasia has been assessed, the main focus has been on perception (e.g., Duffy & Watkins, Citation1984; Feyereisen & Seron, Citation1982) rather than production (but see, e.g., Buck & Duffy, Citation1980; Duffy & Buck, Citation1979). In addition, in the production studies, “spontaneous” nonverbal expressiveness was elicited through responses to different types of affective slides (i.e., familiar people, pleasant landscapes, unpleasant scenes, and strange photographic effects). These studies suggested that PWA are equally or more expressive than control participants (Buck & Duffy, Citation1980; Duffy & Buck, Citation1979).

Speaker gaze

The role of gaze in interaction involving PWA has received some attention, for example in studies of word-searching behaviour (Laakso & Klippi, Citation1999). Through applying Conversation Analysis, research has demonstrated that PWA use gaze for interaction management, such as turning their gaze to the recipient to solicit assistance (Damico, Oelschlaeger, & Simmons-Mackie, Citation1999; Goodwin & Goodwin, Citation1986; Wilkinson, Citation2007). Furthermore, PWA have been argued to use shifts in gaze to hold or yield a turn in conversation (Laakso, Citation2014). Such gaze practices, including withdrawing gaze to display a word search as one’s own, self-directed activity, are similar to those used by NBD communicators (e.g., Laakso, Citation1997).

In the organisation of enactments in NBD communicators, gaze is argued to play a crucial role. At the start of enactments, NBD communicators often direct their gaze away from the listeners, indicating that they are entering into a part of the story that will be enacted rather than narrated (Sidnell, Citation2006).

Body posture

Body posture is another topic that has received little attention in aphasia research. An exception to this is a study carried out by Laakso (Citation2014), who examined how PWA display affect in conversation. She argues that shifts in body postures are one of the most common affect displays and that PWA use affect displays in close coordination with shifts in body posture that reflect turn organisation (Laakso, Citation2014). This finding resembles an observation by Bloch and Beeke (Citation2008), who described how a PWA used body posture – in combination with gaze and gesture – to project something of the meaning of a turn not only before and during, but also after its verbal content.

Apart from the different forms of multimodality and their reciprocal relations, their relationship with stance-taking has been described in the NBD enactment literature (e.g., Debras, Citation2015). Stance is “a display of socially recognized point of view or attitude” (Ochs, Citation1993, p. 288). Debras (Citation2015) argues that, when enacting, NBD communicators use changes in voice pitch and pantomime when they distance themselves from a stance attributed to the enacted person. When communicators endorse a stance attributed to an absent third party by enactment, there is continuity in gesturing style and intonation (Debras, Citation2015).

The current study aims to address the issues raised above in relation to linguistic and multimodal patterns of enactment in PWA through analysis of authentic interactions, asking the following questions:

  1. What are the linguistic and multimodal characteristics of enactments produced by PWA in everyday interaction?

  2. Is there a relationship between the amount of linguistic information and the number of multimodal articulators (intonation, gesture, facial expression, gaze, posture change)?

  3. To what extent do the characteristics of these enactments resemble findings for NBD communicators?

    1. To what extent are the qualitative characteristics similar?

    2. To what extent are the quantitative characteristics similar?

    3. To what extent is the relationship between stance and shifts in intonation and/or gesturing style similar?

Methods

Enactments occurring in a corpus of authentic, everyday interactions collected by the first author in The Netherlands in 2012 were analysed. In this section, an overview of the corpus collection and annotation procedures is provided.

Corpus

The corpus consists of approximately 8 h of data and comprises 18 videos ranging in length from 6 min to 1 h and 22 min (M = 26 min, SD = 22 min) and 112 enactments (M = 18.7, SD = 9.4) produced by PWA. Six people with chronic aphasia (>6 months post-onset) were recorded talking to conversation partners of their choice (see Table 1).

Table 1. Participant characteristics.

Their scores for the Boston Diagnostic Aphasia Examination (third edition, BDAE-3, Goodglass, Kaplan, & Barresi, Citation2001) aphasia severity rating scale ranged from 1 (“All communication is through fragmentary expression; great need for inference, questioning, and guessing by the listener. The range of information that can be exchanged is limited, and the listener carries the burden of communication”) to 3 (“The patient can discuss almost all everyday problems with little or no assistance. However, reduction of speech and/or comprehension make conversation about certain material difficult or impossible”) (M = 1.67, SD = 0.82). In order to provide more insight into the PWA’s speech characteristics, the aspects of the BDAE-3 rating scale profile of speech characteristics (Goodglass et al., Citation2001) that could be assessed based on the interactional data (i.e., all but repetition and auditory comprehension) were rated. These ratings are presented in Table 2.

Table 2. PWA’s speech characteristic scores for the BDAE-3 rating scale profile of speech characteristics (Goodglass et al., Citation2001) that could be assessed based on the interactional data. P: participant. Note that Paraphasia in running speech is only rated if phrase length is 4 or more (Goodglass et al., Citation2001).

All participants completed an informed consent procedure, involving an oral and written aphasia-friendly explanation of the goal and nature of the research allowing for an informed decision about participation, and several options regarding the use of the collected materials. All participants agreed to participate voluntarily and granted permission to use written transcripts of the recordings for scientific publications, and video-recordings for scientific conferences and education purposes.

Participants were asked to make video recordings of conversational activities, representative of the types of activities that would occur if the video equipment were absent or if requests for data had not been made. The first author visited the participants at home to provide the camera, explain and complete the informed consent procedure, and explain how to position and operate the camera. No schedules or topics were predetermined, and video recording occurred at the participants’ discretion. Although the participants were invited to record dialogues, multi-party interactions (n = 5) were also included for analysis, since the purpose of the study was to examine enactment as it occurs in authentic everyday interaction. The remaining 13 conversations were two-party interactions, representing casual conversations between the PWA and a friend (n = 5), child (n = 4), spouse (n = 3), and sister-in-law (n = 1).

Analysis

The first author identified, transcribed, and coded all enactments. Around 20% of the identified enactments were re-coded by an independent rater, who was familiar with the notions of enactment and its multimodal characteristics. Enactments were identified based on the presence of quoting predicates such as say or be like, or in the case of bare quotes, a shift in indexicals (e.g., personal pronouns, demonstratives, deictics, time reference), or prosodic or non-verbal markers, such as the occurrence of pauses in speech, shifts in posture, gaze, facial expression, voice quality and pitch height (see also Groenewold, Bastiaanse, Nickels, Wieling, & Huiskes, Citation2014; Lind, Citation2002).

The scheme for analysis is based on annotation schemes developed by Stec (Citation2016) and Debras (Citation2015) (see Table 3). It contains variables for linguistic features pertaining to enactment, multimodal features which contribute to enactment (Stec, Citation2016), and labels for stance-taking (Debras, Citation2015).

Table 3. Scheme for analysis used for this study (adapted from Stec (Citation2016) and Debras (Citation2015)).

Linguistic realisation of enactment

First, the linguistic realisation of enactment was analysed. Based on the person reference (or local context in case of absence of a person reference), the enacted character was annotated. A communicator can enact him/herself, but also the addressee, or someone who is absent. Furthermore, a communicator can enact a generic or prototypical person (e.g., one), an animal or non-animate object (e.g., the “sound” of a needle in Transcript 2, line 6), or multiple persons (e.g., Mary and Pete in Transcript 3, line 10).

Transcript 2: ZZLLP

1.  K:  me (.) sewing (.) Mary?

2.  M:  well I (.) I can do it you’ve got [eh]

3.  K:   [yeah:]

4.  M:  I am good at (.) shortening and eh but it takes long (.) and nowadays

5.     [with those] (.) new machines

6.  K:  [zzlp]

7.     (1.0)

8.  M:  then you do double needles

Transcript 3

1.  N:    I said go there

2.  V:    yes of course, no problem

3.  N:    and

4.  V:    of course

5.  N:    yes but eh Mary was here

6.  V:    yes

7.  N:    and Pete

8.  V:    hmm

9.  N:    and eh no

10.  N:    eh said no then you said eh come then

11.  N:    well and well hm hm hm

12.  N:    eh I said go there

13.  V:    of course, no problem

Next, the absence or presence of an explicit person reference (i.e., name or pronoun) and reporting verb (e.g., say, be like) was coded.

Multimodal resources used to realise enactment

After coding the linguistic characteristics, the multimodal characteristics of the enactments were coded.

For character intonation, it was noted whether the communicator used special intonation which was noticeably different from the communicator’s narrative voice, e.g., a change in pitch height, duration, intensity, or voice quality, to indicate aspects of the enacted character’s speech. In some cases (e.g., onomatopoeia), there is no character whose intonation can be enacted (e.g., the needle “sound” in Transcript 2). Rather than annotating their character intonation as absent, we added a new category NA to mark these instances.

For gesture (Hands in Stec (Citation2016)) a distinction was made between character viewpoint gesture (communicator demonstrates a gesture performed by the enacted character), other gesture (gestures which do not reflect a gesture performed by the enacted character), and no gesture.

Character facial expression could be present (the communicator’s facial expression changes to demonstrate the enacted character), absent, or unclear. Just as for character intonation, NA was added to mark instances for which mimicking of facial expression is not possible (e.g., objects).

For gaze, the coding system applied by Stec (Citation2016) was used, complemented with the value late change towards addressee, indicating the communicator’s gaze moves towards the addressee after the enactment started (see Transcript 4).

Transcript 4: Hi-jack!

1.  H:   one shou– one should never eh ((gazes away from D))

2.  H:   say eh: ((shifts gaze towards D))

3.  H:   pilot eh hi Jack! ((gazes at D))

4.  H:    ((laughs)) ((gazes at D))

5.  D:   ((laughs))

The final multimodal resource described by Stec (Citation2016), posture change, was coded without modifications or additions. This variable indicates the direction of movement or shift in body orientation made by communicators during the enactment and may reflect movements made by the head, torso and/or hands. All movements which were not self-adaptors (e.g., re-adjusting seated position) were analysed. The values for posture change were horizontal, vertical, sagittal (from back to front or vice versa), unclear, and none.

For an illustration of gesture, posture change, and facial expression, consider Transcript 5. This is taken from one of the interactions between B, a person with aphasia, and A, B’s sister-in-law. In this excerpt, head movements, character viewpoint gestures, and facial expression are used to enact Hannah, one of the aphasia centre volunteers.

Transcript 5: Cleaning

1.  A:    so who does the vacuuming cleaning there?

2.  B:    eh cleaning lady

3.  A:    right

4.  B:    cleaning lady

       (2.7)

5.  B:    no eh Hannah eh ((raises hand))

6.  B:    ((raises eyebrows, tilts head back, moves

7.      horizontally, short waves))

8.  B:    ((laughs))

9.  A:    well I think she’s right in that she says

10.  A:    I’m not cleaning the toilets

Number of articulators used to realise enactment

Following Stec (Citation2016), based on the multimodal resources annotations, a variable called articulator count was created which counts the number of multimodal articulators (resources) pertaining to enactments. It ranges from 0 to 5 to indicate no articulators (0) through all articulators (5). For example, an enactment with character intonation (1), no gesture (0), character facial expression (1), gaze change towards addressee (1), and sagittal posture change (1) would be counted as 4.

‘Meaningful’ use of multimodal resources

For the assessment of meaningful use of multimodal resources the procedures and interpretations suggested by Stec (Citation2016) were followed. She suggested that meaningful use of multimodal resources is indicated by change. Thus, away from addressee, quick shift and late change are active uses of gaze while maintaining gaze with the addressee is not. Similarly, for character intonation, it was noted whether the communicator used special intonation which was noticeably different from the communicator’s normal or narrative voice (e.g., a change in pitch, volume, accent, rate). Cases labelled unclear or NA (e.g., a completely non-verbal enactment) were counted as not active.
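The articulator counting and the “active use” rules described above can be made concrete in a short sketch. The label strings below paraphrase the coding scheme for illustration; they are assumptions, not the study’s actual annotation values.

```python
# Values that count as "not active" under the rules above: absence,
# unclear cases, NA (e.g., a completely non-verbal enactment), and
# gaze maintained with the addressee (no change, hence not meaningful).
INACTIVE = {"absent", "none", "unclear", "NA", "maintained with addressee"}

# The five articulators from Stec (2016); the key names are illustrative.
ARTICULATORS = ("intonation", "gesture", "facial_expression", "gaze", "posture_change")

def articulator_count(annotation: dict) -> int:
    """Number of actively used multimodal articulators (0-5) in one enactment."""
    return sum(annotation.get(a, "absent") not in INACTIVE for a in ARTICULATORS)
```

Under this sketch, the worked example above (character intonation, no gesture, character facial expression, gaze change towards addressee, sagittal posture change) yields a count of 4.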

(Dis)affiliation

Finally, based on the analysis of the sequential context, the raters indicated whether the communicator agreed (affiliation) or disagreed (disaffiliation) with the enacted character (Debras, Citation2015). As in the case of character intonation and character facial expression, an extra value NA was added to the coding scheme to categorise those instances where the predetermined values are not possible (e.g., when a hypothetical person is enacted).
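Taken together, the nine annotation variables described in this section could be represented as one record per enactment. The following dataclass is a sketch with paraphrased field names and example values, not the study’s actual coding labels.

```python
from dataclasses import dataclass

@dataclass
class EnactmentAnnotation:
    # Linguistic variables
    enacted_character: str   # e.g., "self", "addressee", "absent third party", "object"
    person_reference: bool   # explicit name or pronoun present?
    reporting_verb: bool     # e.g., "say" or "be like" present?
    # Multimodal variables (with "unclear"/"NA" values where applicable)
    intonation: str          # e.g., "character intonation", "absent", "NA"
    gesture: str             # e.g., "character viewpoint", "other", "none"
    facial_expression: str   # e.g., "character", "absent", "unclear", "NA"
    gaze: str                # e.g., "away from addressee", "late change towards addressee"
    posture_change: str      # "horizontal", "vertical", "sagittal", "unclear", "none"
    # Stance variable
    affiliation: str         # "affiliation", "disaffiliation", "NA"
```

A coded corpus would then be a list of such records, over which the frequencies reported in the Results section can be tallied.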

Inter-rater reliability

The corpus consisted of 112 enactments in total. All enactments were coded by the first author. Eighteen percent (20 enactments) of the data was compared with annotations made by an independent rater. Cohen’s κ was computed to determine the agreement between the two raters on the nine variables. For two of the linguistic variables (enacted character and person reference), there was perfect agreement between the two raters, κ = 1.00 (p < 0.001). There was almost perfect agreement for the third linguistic variable, reporting verb, κ = 0.806 (95% CI, 0.547 to 1.00), p < 0.001. For character intonation, Cohen’s κ could not be calculated because one of the raters’ ratings showed no variation; the percentage agreement for this variable was 0.95. There was substantial agreement for gesture, κ = 0.751 (95% CI, 0.508 to 0.994), p < 0.001, character facial expression, κ = 0.645 (95% CI, 0.273 to 0.784), p = 0.01, and gaze, κ = 0.745 (95% CI, 0.525 to 0.965). The inter-rater agreement for posture change was almost perfect, κ = 0.851 (95% CI, 0.663 to 1.000), p < 0.001. Finally, the agreement for affiliation was almost perfect as well, κ = 0.847 (95% CI, 0.655 to 1.000), p < 0.001.
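Agreement statistics of this kind can be computed with a small standard-library sketch; the rating vectors in the test are illustrative, not the study’s data, and a real analysis would typically use a library implementation such as scikit-learn’s cohen_kappa_score.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items to which two raters assign the same label."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from the raters' marginal label frequencies.
    p_e = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / n ** 2
    if p_e == 1.0:  # both raters constant on the same label: kappa undefined
        raise ValueError("kappa undefined: report percentage agreement instead")
    return (p_o - p_e) / (1 - p_e)
```

Note that when one rater’s ratings show no variation, as reported for character intonation above, κ degenerates (observed and chance agreement coincide, so κ is 0 regardless of how often the raters agree), which is why percentage agreement is the more informative figure for such a variable.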

Results

In this section, the linguistic and multimodal characteristics for enactments produced by the PWA are reported. Second, the relationship between the linguistic characteristics and the number of multimodal articulators is presented. Finally, a comparison is drawn between the current findings and those previously reported for NBD communicators.

Linguistic and multimodal characteristics for PWA

The PWA most frequently enacted other people (39.3%) or themselves (38.4%) (Table 4). However, they also regularly enacted multiple characters (e.g., “and then they said, alright, let’s go then” (participant 5)), and generic or prototypical characters (e.g., “on a plane one should never say, Hi Jack! Hi-jack!” (participant 2)).

Table 4. Enacted characters.

A total of 56.3% of all produced enactments were so-called bare enactments (i.e., no person reference and no reporting verb, e.g., “and then eh, well eh ((mimics facial expression of group of people))”) (Table 5). Approximately a fifth of the enactments were marked by both a person reference and a reporting verb (e.g., I said, “buy ticket”). Nearly another fifth of the enactments were introduced by a person reference only (e.g., I, “sewing, Mary?!”). Finally, enactments introduced by a reporting verb only (e.g., then, said, “hey, man!”) occurred less frequently (n = 5, 4.5%).

Table 5. Person reference and reporting verbs in enactments.

The multimodal resources used to realise enactments are presented in Table 6. This overview contains the results for all enactments, regardless of whether or not they were linguistically marked. The participants mimicked the enacted characters’ intonation in almost 80% of all instances. In some cases (8%), enactment of character intonation was coded as NA because the type of enactment did not allow for it, such as instances consisting exclusively of movement and/or facial expression.

Table 6. Multimodal resources used to realise enactments.

Nearly half of the enactments (48.2%) were not accompanied by a gesture. PWA produced a higher percentage of character viewpoint gestures (37.5%) than gestures which do not reflect a gesture performed by the enacted character (14.3%).

Character facial expression was enacted in more than half of all enactments (51.8%). In some cases (10.7%) this variable was coded as unclear because the participant’s facial expression was not clearly visible or the face was turned away.

The most frequently occurring value for gaze during enactment was maintained with addressee (32.1%), followed by away from addressee (25.0%). In nearly 20% of the cases, gaze was coded as late change towards addressee, a value which did not occur in Stec’s (Citation2016) categorisation system.

In more than half of the cases, enactments were not accompanied by a posture change. In the case of posture changes, it was most frequently a sagittal movement (20.5%), followed by a horizontal movement (17.9%).

Relationship between linguistic characteristics and number of multimodal articulators

In Table 7, the relationship between the absence or presence of linguistic markers (person reference and/or reporting verb) and the number of multimodal articulators used to realise enactment is presented. Bare enactments were most often accompanied by three active multimodal articulators (33.3%). The same held for enactments that were preceded by only a person reference. Enactments that were preceded by only a reporting verb often co-occurred with three (40.0%) or four (40.0%) multimodal articulators. The most frequently occurring number of active multimodal articulators for enactments preceded by both a person reference and a reporting verb was four (30.4%). Interestingly, this type of enactment was the only one that was in some cases (8.7%) produced without any active multimodal articulators.

Table 7. Number of multimodal articulators used in enactments produced by PWA involving different linguistic markers.

In Table 8, the relationship between the presence of linguistic markers and the mean number of multimodal articulators is presented. Enactments preceded by both a person reference and a reporting verb were produced using the fewest multimodal articulators (M = 2.52), and enactments preceded by only a reporting verb were produced using the most (M = 3.20). Bare enactments had a higher mean number of active articulators (2.84) than enactments preceded by only a person reference (2.67).

Table 8. Relationship between linguistic markers of enactment and mean number of articulators used by PWA.

Of the 63 bare enactments (i.e., those that were not preceded by a name, pronoun, and/or reporting verb), 48 (76.2%) were preceded by a shift in intonation. Twenty-six (41.3%) were accompanied by a character viewpoint gesture, and 8 (12.7%) were accompanied by an “other” gesture. The facial expression of the enacted character was mimicked in the realisation of 26 (41.3%) of the bare enactments. Finally, 33 (52.4%) of the bare enactments were accompanied by a posture change (horizontal/vertical/sagittal). The absence of a reporting verb (n = 84, 75% of all enactments) co-occurred with the absence of an intonation shift in only nine cases (10.7%).

PWA vs. NBD communicators

In Table 9, the comparison of the use of multimodal articulators to realise enactment between the participants of the current study and the NBD participants of the study reported by Stec (2016) is presented. Compared to the NBD participants, the PWA used relatively more character intonation, gesture, and character facial expression, and fewer shifts in gaze and posture.

Table 9. Frequencies and percentages of enactments involving multimodal resources for PWA and NBD communicators (reported by Stec (Citation2016, p. 149)).

In Table 10, the mean number of multimodal articulators accompanying enactments produced by the PWA is compared to that of enactments produced by Stec’s (2016) NBD participants. The average for the PWA (2.76) is very similar to that for the NBD participants (2.80).

Table 10. Mean number of articulators used to realise enactment by PWA vs. NBD participants (reported by Stec, Citation2016, p. 150).

Multimodality and stance-taking

Finally, the occurrence of affiliation and disaffiliation, and their relationships with shifts in intonation and gesture (Debras, 2015), were assessed. In 58 instances of enactment (51.8%), the PWA affiliated with the enacted character. In 40 instances (35.7%), the PWA disaffiliated from the enacted character. In four cases (8.9%) the participant neither affiliated nor disaffiliated because the enacted character was fictive, inanimate, or an animal. In Tables 11 and 12, the relationships between stance and the use of intonation and gesture in the realisation of enactment by PWA are presented. Enactments representing disaffiliation and affiliation co-occurred with intonation shifts at the same rate (77.5% and 77.6%, respectively) (Table 11). However, the percentage of co-occurring character viewpoint gestures was higher for enactments representing disaffiliation (45.0%) than for those representing affiliation (34.5%) (Table 12).

Table 11. Relationship between intonation and stance.

Table 12. Relationship between gesture and stance.

Discussion

This study is the first to systematically examine the co-occurrence of a range of important multimodal resources in authentic everyday interactions in PWA, and to recognise the interpersonal, rather than purely referential, role these resources play. While the scope of previous multimodal aphasia research has generally been limited to the use of gesture, pantomime and/or pointing, this study has also explored the concurrent roles of intonation, gaze, facial expression and posture. In addition, rather than seeing these resources as simply a support or even substitute for language, this study examined their roles as independent but interrelated “meaning making” resources.

Linguistic and multimodal characteristics

In terms of our first research question, we found that most enactments were not preceded by a person reference and/or reporting verb (bare enactments). This finding is unsurprising given the nature of aphasia and is in line with previous studies (e.g., Groenewold et al., 2013). It would appear, then, that PWA can indeed often successfully convey reported events without explicitly marking these linguistically at all. The fact that the PWA used the same multimodal resources (gesture, facial expression, gaze, and body posture) to realise enactments as NBD communicators suggests that these are crucial, or at least important supplementary, components, and that these modalities can all be considered retained strengths for PWA.

Relationship between linguistic characteristics and number of multimodal articulators

In terms of our second research question, as expected, the mean number of multimodal articulators was lowest for enactments preceded by both a person reference and a reporting verb. Furthermore, the number of multimodal articulators for enactments preceded by only a person reference was lower than that for enactments preceded by only a reporting verb. This could be because the PWA needed to indicate who was being enacted in the latter situation, whereas enactments preceded by a person reference required less multimodal marking to “flag” enactment. However, the number of multimodal articulators used by the PWA to realise bare enactments was even lower than that for enactments preceded by a reporting verb. This somewhat surprising but significant finding speaks against the assumption that modalities other than speech are used in a simply compensatory manner. In other words, there is no clear relationship between the levels of “linguistic bareness” and “multimodal markedness” of enactments in PWA. It could be that reporting verbs are too abstract to simply be compensated for in a non-verbal form, and that this abstractness may challenge PWA. Degrees of abstraction, and the potential meanings involved, may be an area for future research.

PWA vs NBD communicators

In comparing PWA and NBD communicators, our third research question is answered by the fact that while overall use of multimodal articulators appears to be similar in terms of quantity, PWA clearly utilise the resources differently. Their higher use of intonation, gesture and, to a lesser extent, facial expression (in line with previous findings by Buck and Duffy (1980) and Duffy and Buck (1979)) demonstrates that these are indeed semantic resources for PWA, which could be utilised more in therapeutic endeavours.

PWA also used gaze and posture frequently to realise enactment, but to a lesser extent and differently from NBD speakers. For example, whereas NBD speakers direct their gaze away from the listener(s) to indicate that they are entering into a part of a story that will be enacted rather than narrated (Sidnell, 2006), the PWA instead often maintained their gaze with the addressee, or even shifted their gaze towards the addressee. This could be due to the PWA needing to ensure engagement with their communication partner while undertaking the relatively abstract and potentially “difficult” communicative act of enactment. It could also be due to the PWA ensuring that they hold the turn (see, e.g., Laakso, 2014). Finally, it could be due to the PWA needing to check on the partner’s comprehension during this act.

Research in NBD communication on the realisation of enactments has suggested that (shifts in) intonation and gesturing style are important markers for stance-taking: communicators use such shifts to distance themselves from a stance attributed to an absent third party by enactment (Debras, 2015). The findings of the current study indicate a similar pattern for gestures (i.e., relatively more shifts for enactments representing disaffiliation than for enactments representing affiliation), but a different one for intonation (i.e., no difference). This again raises further questions in terms of the use of intonation in meaning-making in PWA and the use of intonation for different functions.

Contribution of this research

Whereas the quantity of the multimodal articulators used by the PWA is similar to that reported for NBD communicators, the qualitative characteristics (i.e., the preference for particular types of multimodal articulators, and the interpersonal functions they fulfil) are different. It is the nature of the use of particular patterns of multimodal resources by PWA, and the difference in this usage compared with NBD speakers, that is of interest in this study. In addition, by showing how PWA “orchestrate” different modalities together to convey evaluative, modalising meanings rather than referential content alone, the current paper contributes to our understanding of interpersonal and stance-taking processes in PWA.

More generally, this research contributes to the literature by raising issues regarding the important notion of multimodality in aphasia research. As argued by Adami (2016) and Jewitt (2014), the examination of relations among modes is key to understanding communication. Applying frameworks that were developed for the examination of multimodal enactment in NBD communication, this study increased our understanding of the co-occurrence of communication modalities such as (shifts in) intonation, gaze, gesture, facial expression, and body posture in PWA. Moreover, unlike most studies assessing multimodality in aphasia, this study relied on authentic interactions, ensuring its ecological validity. Finally, it introduced a useful framework to systematically examine the multimodal characteristics of everyday interaction in aphasia, which could be applied by other researchers.

Limitations

Although this study raised important issues regarding the notion of multimodality and provided innovative insights into the roles multimodal resources can play in interpersonal communication, the findings should be interpreted with caution. Whereas the strength of the study lies in the systematic analysis of naturally occurring interactions between PWA with multiple communication partners in multiple contexts, the findings based on six PWA cannot be generalised. Furthermore, the fact that four of the participants had a (right-sided) hemiparesis may have affected their gesturing and posture movement skills. Whereas such an effect would only reinforce the pattern found for (increased use of) gesture by PWA, it is unclear whether and how it would affect the outcomes with regard to body posture. The relatively small number of enactments produced by each speaker also limits generalisation. While the authentic, spontaneous nature of the data used in this study is one of its strengths, future studies might well consider ways to optimise elicitation of enactments through perhaps guiding topics discussed by participants (e.g., reporting on conversations with others) or simply taking longer samples.

Further research on the multimodal articulators analysed here can reveal to what extent the findings of the current study also apply to different PWA, interactional contexts, and interactional phenomena. Such research would lead to new insights into the interplay of these potentially important “meaning making” resources, informing both aphasia research and intervention.

Acknowledgments

The authors would like to thank the participant volunteers who took part in this research.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work is part of the research programme The use of direct speech as a compensatory device in aphasic interaction with project number [446-16-008], which is financed by the Netherlands Organisation for Scientific Research.

References

  • Adami, E. (2016). Multimodality. In O. García, N. Flores, & M. Spotti (Eds.), The Oxford handbook of language and society. Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780190212896.013.23
  • Akhavan, N., Göksun, T., & Nozari, N. (2018). Integrity and function of gestures in aphasia. Aphasiology, 32, 1310–1335. doi:10.1080/02687038.2017.1396573
  • Berko Gleason, J., Goodglass, H., Obler, L., Green, E., Hyde, R., & Weintraub, S. (1980). Narrative strategies of aphasic and normal-speaking subjects. Journal of Speech, Language, and Hearing Research, 23, 370–382. doi:10.1044/jshr.2302.370
  • Bezemer, J., & Jewitt, C. (2010). Multimodal analysis: Key issues. In L. Litosseliti (Ed.), Research methods in linguistics (pp. 180–197). London: Continuum.
  • Bloch, S., & Beeke, S. (2008). Co-constructed talk in the conversations of people with dysarthria and aphasia. Clinical Linguistics & Phonetics, 22, 974–990. doi:10.1080/02699200802394831
  • Buck, R., & Duffy, R. J. (1980). Nonverbal communication of affect in brain-damaged patients. Cortex, 16, 351–362. doi:10.1016/S0010-9452(80)80037-2
  • Clark, H. H., & Gerrig, R. J. (1990). Quotations as demonstrations. Language, 66, 764–805. doi:10.2307/414729
  • Damico, J. S., Oelschlaeger, M., & Simmons-Mackie, N. (1999). Qualitative methods in aphasia research: Conversation analysis. Aphasiology, 13, 667–679. doi:10.1080/026870399401777
  • Debras, C. (2015). Stance-taking functions of multimodal constructed dialogue during spoken interaction. Paper presented at the GESPIN 4, Nantes, France.
  • Dipper, L., Cocks, N., Rowe, M., & Morgan, G. (2011). What can co-speech gestures in aphasia tell us about the relationship between language and gesture?: A single case study of a participant with conduction aphasia. Gesture, 11, 123–147. doi:10.1075/gest
  • Duffy, J. R., & Watkins, L. B. (1984). The effect of response choice relatedness on pantomime and verbal recognition ability in aphasic patients. Brain and Language, 21, 291–306. doi:10.1016/0093-934X(84)90053-1
  • Duffy, R. J., & Buck, R. W. (1979). A study of the relationship between propositional (pantomime) and subpropositional (facial expression) extraverbal behaviors in aphasics. Folia Phoniatrica Et Logopaedica, 31, 129–136. doi:10.1159/000264160
  • Feyereisen, P., & Seron, X. (1982). Nonverbal communication and aphasia: A review: I. Comprehension. Brain and Language, 16, 191–212. doi:10.1016/0093-934X(82)90083-9
  • Fromm, D., Holland, A., Armstrong, E., Forbes, M., MacWhinney, B., Risko, A., & Mattison, N. (2011). “Better but no cigar”: Persons with aphasia speak about their speech. Aphasiology, 25, 1431–1447. doi:10.1080/02687038.2011.608839
  • Goodglass, H., Kaplan, E., & Barresi, B. (2001). The Boston Diagnostic Aphasia Examination (BDAE) (3rd ed.). Baltimore: Lippincott Williams & Wilkins.
  • Goodwin, C. (2000). Gesture, aphasia, and interaction. In D. McNeill (Ed.), Language and gesture (pp. 84–98). Cambridge: Cambridge University Press.
  • Goodwin, C. (2010). Constructing meaning through prosody in aphasia. In D. Barth-Weingarten, E. Reber, & M. Selting (Eds.), Prosody in interaction (pp. 373–394). Amsterdam: John Benjamins.
  • Goodwin, C., & Goodwin, M. H. (1986). Gesture and coparticipation in the activity of searching for a word. Semiotica, 62, 51–75. doi:10.1515/semi.1986.62.1-2.51
  • Goodwin, M. H. (1990). He-said-she-said: Talk as social organization among black children. Bloomington: Indiana University Press.
  • Groenewold, R. (2015). Direct and indirect speech in aphasia: Studies of spoken discourse production and comprehension. Groningen: University of Groningen.
  • Groenewold, R., & Armstrong, E. (2018). The effects of enactment on communicative competence in aphasic casual conversation: A functional linguistic perspective. International Journal of Language & Communication Disorders, 53, 836–851. doi:10.1111/1460-6984.12392
  • Groenewold, R., Bastiaanse, R., & Huiskes, M. (2013). Direct speech constructions in aphasic Dutch narratives. Aphasiology, 27, 546–567. doi:10.1080/02687038.2012.742484
  • Groenewold, R., Bastiaanse, R., Nickels, L., Wieling, M., & Huiskes, M. (2014). The effects of direct and indirect speech on discourse comprehension in Dutch listeners with and without aphasia. Aphasiology, 28, 862–884. doi:10.1080/02687038.2014.902916
  • Günthner, S. (1999). Polyphony and the ‘layering of voices’ in reported dialogues: An analysis of the use of prosodic devices in everyday reported speech. Journal of Pragmatics, 31, 685–708. doi:10.1016/S0378-2166(98)00093-9
  • Hengst, J. A., Frame, S. R., Neuman-Stritzel, T., & Gannaway, R. (2005). Using others’ words: Conversational use of reported speech by individuals with aphasia and their communication partners. Journal of Speech, Language, and Hearing Research, 48, 137–156. doi:10.1044/1092-4388(2005/011)
  • Hogrefe, K., Ziegler, W., Weidinger, N., & Goldenberg, G. (2017). Comprehensibility and neural substrate of communicative gestures in severe aphasia. Brain and Language, 171, 62–71. doi:10.1016/j.bandl.2017.04.007
  • Hogrefe, K., Ziegler, W., Wiesmayer, S., Weidinger, N., & Goldenberg, G. (2013). The actual and potential use of gestures for communication in aphasia. Aphasiology, 27, 1070–1089. doi:10.1080/02687038.2013.803515
  • Jewitt, C., Bezemer, J., & O’Halloran, K. (2016). Introducing multimodality. Oxon and New York: Routledge.
  • Jewitt, C. (Ed.) (2014). The Routledge Handbook of Multimodal Analysis. London: Routledge.
  • Kendon, A. (1997). Gesture. Annual Review of Anthropology, 26, 109–128. doi:10.1146/annurev.anthro.26.1.109
  • Kindell, J., Sage, K., Keady, J., & Wilkinson, R. (2013). Adapting to conversation with semantic dementia: Using enactment as a compensatory strategy in everyday social interaction. International Journal of Language & Communication Disorders, 48, 497–507. doi:10.1111/1460-6984.12023
  • Klippi, A. (2015). Pointing as an embodied practice in aphasic interaction. Aphasiology, 29, 337–354. doi:10.1080/02687038.2013.878451
  • Kong, A. P.-H., Law, S.-P., & Chak, G. W.-C. (2017). A comparison of coverbal gesture use in oral discourse among speakers with fluent and nonfluent aphasia. Journal of Speech, Language, and Hearing Research: JSLHR, 60, 2031–2046. doi:10.1044/2017_JSLHR-L-16-0093
  • Kress, G., & Van Leeuwen, T. (1996). Reading images: The grammar of visual design. London: Routledge.
  • Laakso, M. (1997). Self-initiated repair by fluent aphasic speakers in conversation. Helsinki: The Finnish Literature Society.
  • Laakso, M. (2014). Aphasia sufferers’ displays of affect in conversation. Research on Language and Social Interaction, 47, 404–425. doi:10.1080/08351813.2014.958280
  • Laakso, M., & Klippi, A. (1999). A closer look at the ‘hint and guess’ sequences in aphasic conversation. Aphasiology, 13, 345–363. doi:10.1080/026870399402136
  • Lind, M. (2002). The use of prosody in interaction: Observations from a case study of a Norwegian speaker with a non-fluent type of aphasia. In F. Windsor, M. L. Kelly, & N. Hewlett (Eds.), Investigations in clinical phonetics and linguistics (pp. 373–389). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  • Linnik, A., Bastiaanse, R., & Höhle, B. (2016). Discourse production in aphasia: A current review of theoretical and methodological challenges. Aphasiology, 30, 765–800. doi:10.1080/02687038.2015.1113489
  • Mol, L., Krahmer, E., & van de Sandt-Koenderman, M. (2013). Gesturing by speakers with aphasia: How does it compare? Journal of Speech, Language, and Hearing Research, 56, 1224–1236. doi:10.1044/1092-4388(2012/11-0159)
  • Nispen, K., Sandt-Koenderman, M., Mol, L., & Krahmer, E. (2014). Should pantomime and gesticulation be assessed separately for their comprehensibility in aphasia? A case study. International Journal of Language & Communication Disorders, 49, 265–271. doi:10.1111/1460-6984.12064
  • Ochs, E. (1993). Constructing social identity: A language socialization perspective. Research on Language and Social Interaction, 26, 287–306. doi:10.1207/s15327973rlsi2603_3
  • Olness, G. S., & Englebretson, E. F. (2011). On the coherence of information highlighted by narrators with aphasia. Aphasiology, 25, 713–726. doi:10.1080/02687038.2010.537346
  • Pierce, J. E., O’Halloran, R., Togher, L., & Rose, M. L. (2018). What do Speech Pathologists mean by ‘Multimodal Therapy’ for aphasia? Paper presented at the Aphasiology Symposium of Australasia 2018, Sunshine Coast, Australia. Abstract Retrieved from https://shrs.uq.edu.au/files/5509/Abstract%20booklet%20%28Compressed.pdf
  • Pritchard, M., Dipper, L., Morgan, G., & Cocks, N. (2015). Language and iconic gesture use in procedural discourse by speakers with aphasia. Aphasiology, 29, 826–844. doi:10.1080/02687038.2014.993912
  • Rose, M., Mok, Z., Carragher, M., Katthagen, S., & Attard, M. (2016). Comparing multi-modality and constraint-induced treatment for aphasia: A preliminary investigation of generalisation to discourse. Aphasiology, 30, 678–698. doi:10.1080/02687038.2015.1100706
  • Rose, M. L. (2006). The utility of arm and hand gestures in the treatment of aphasia. Advances in Speech Language Pathology, 8, 92–109. doi:10.1080/14417040600657948
  • Rose, M. L., Attard, M. C., Mok, Z., Lanyon, L. E., & Foster, A. M. (2013). Multi-modality aphasia therapy is as efficacious as a constraint-induced aphasia therapy for chronic aphasia: A phase 1 study. Aphasiology, 27, 938–971. doi:10.1080/02687038.2013.810329
  • Seddoh, S. A. (2004). Prosodic disturbance in aphasia: Speech timing versus intonation production. Clinical Linguistics & Phonetics, 18, 17–38. doi:10.1080/0269920031000134686
  • Sekine, K., & Rose, M. L. (2013). The relationship of aphasia type and gesture production in people with aphasia. American Journal of Speech-Language Pathology, 22, 662–672. doi:10.1044/1058-0360(2013/12-0030)
  • Sekine, K., Rose, M. L., Foster, A. M., Attard, M. C., & Lanyon, L. E. (2013). Gesture production patterns in aphasic discourse: In-depth description and preliminary predictions. Aphasiology, 27, 1031–1049. doi:10.1080/02687038.2013.803017
  • Sidnell, J. (2006). Coordinating gesture, talk, and gaze in reenactments. Research on Language and Social Interaction, 39, 377–409. doi:10.1207/s15327973rlsi3904_2
  • Stec, K. (2016). Visible quotation: The multimodal expression of viewpoint. Groningen: University of Groningen.
  • Streeck, J., & Knapp, M. L. (1992). The interaction of visual and verbal features in human communication. In F. Poyatos (Ed.), Advances in non-verbal communication: Sociocultural, clinical, esthetic and literary perspectives (pp. 3–23). Amsterdam, The Netherlands: John Benjamins.
  • Ulatowska, H. K., & Olness, G. S. (2003). On the nature of direct speech in narrative of African Americans with aphasia. Brain and Language, 87, 69–70. doi:10.1016/S0093-934X(03)00202-5
  • Ulatowska, H. K., Reyes, B. A., Santos, T. O., & Worle, C. (2011). Stroke narratives in aphasia: The role of reported speech. Aphasiology, 25, 93–105. doi:10.1080/02687031003714418
  • van Nispen, K., van de Sandt-Koenderman, M., Sekine, K., Krahmer, E., & Rose, M. L. (2017). Part of the message comes in gesture: How people with aphasia convey information in different gesture types as compared with information in their speech. Aphasiology, 31, 1078–1103. doi:10.1080/02687038.2017.1301368
  • Wilkinson, R. (2007). Managing linguistic incompetence as a delicate issue in aphasic talk-in-interaction: On the use of laughter in prolonged repair sequences. Journal of Pragmatics, 39, 542–569. doi:10.1016/j.pragma.2006.07.010
  • Wilkinson, R., Beeke, S., & Maxim, J. (2010). Formulating actions and events with limited linguistic resources: Enactment and iconicity in agrammatic aphasic talk. Research on Language and Social Interaction, 43, 57–84. doi:10.1080/08351810903471506