“Do You See What I Hear?”: Designing for Collocated Patient–Practitioner Collaboration in Audiological Consultations


Abstract

Patient-centered care encourages active involvement of patients in their own treatment and a collaborative perspective on the relationship between patient and practitioner. However, to achieve constructive patient–practitioner collaboration in medical consultations, the partakers need to successfully interact across conceptual boundaries that can impede intersubjectivity, i.e., the construction of shared meanings and understandings in communicative activities. We present a synthesis of a user-centered approach to designing interactive technology supporting collaboration in face-to-face consultations related to audiological (hearing) rehabilitation. Specifically, we focus on the case of hearing aid tuning, and on the design and utility assessment of a prototype sound environment simulator intended to support the process by helping the patient and the practitioner build a joint understanding of the individual patient’s hearing problem and perceived effects of treatment actions. We describe an empirical and qualitative investigation that calls specific attention to the multi-dimensional boundaries involved in collocated patient–practitioner interactions, and to the explorative and situated nature of the consultation as a collaborative problem-solving process. Here, various micro-practices play a key role in gradually forming a better understanding of the problem at hand and in identifying appropriate treatment steps. Our findings suggest that patient–practitioner collaboration can benefit from interactive technology that is sufficiently flexible or open-ended in terms of use to accommodate, or be appropriated to, the immediate needs of the situation. We argue that designing technology with the aim of enhancing existing practices of intersubjectivity, rather than doing away with them, improves the chances of enriching collocated patient–practitioner interaction and reduces the risk of obstructing it. The main research contribution is an increased understanding of the medical consultation as an instance of collocated collaborative work and learning, and the challenges and opportunities that lie in co-designing interactive solutions that can help the patient take an active and contributing part in the situation.

CONTENTS

1. INTRODUCTION

2. CASE STUDY BACKGROUND

 2.1. Hearing Loss and Hearing Aids

 2.2. Hearing Aid Tuning

3. UNDERSTANDING THE AUDIOLOGICAL CONSULTATION AS A PROBLEM OF INTERSUBJECTIVITY

4. INTERSUBJECTIVE BOUNDARIES IN CONSULTATION SITUATIONS

 4.1. Five Boundaries

 The Knowledge Boundary

 The Language Boundary (The “Voice” Boundary)

 The Time and Place Boundary

 The Physical Boundary

 The Normative Boundary

 4.2. The Boundary-Reinforcing Effect of Technology

5. RESEARCH DESIGN

6. PRELIMINARY FIELD STUDY

 6.1. The Consultation Process

 6.2. Observed Intersubjective Micro-Practices

 Use of Simple Sound References

 Recreation of Specific Conditions

 Use of Demonstration

 6.3. Limitations of Observed Practices

7. DESIGNING THE SOLUTION

 7.1. Co-Design Workshops with Hearing Aid Users and Audiologists

 Re-Contextualizing the Hearing Problem

 Individual Tailoring

 Shared Control

 7.2. From Mock-Ups to Functional Prototype Simulator

8. UTILITY ASSESSMENT OF THE PROTOTYPE SIMULATOR

 8.1. Experimental Setup

 8.2. Data Gathering and Analysis

 8.3. Observed Effects on Patient–Practitioner Interactions

 The Role of the Simulated Listening Environments

 The Role of the Visual Representations

 Control Aspects

 Breakdown Incidents

 8.4. Perceived Usefulness

 The Patient Perspective

 The Practitioner Perspective

 8.5. Summarizing How the Prototype Helped Bridge the Five Boundaries

9. DISCUSSION

 9.1. The Role of the Prototype Simulator in Facilitating Intersubjectivity

 9.2. Designing for Patient–Practitioner Collaboration in Consultations

 Taking a Holistic View of Intersubjective Boundaries

 Designing for Explorative Problem-Solving

 9.3. Design Recommendations

 9.4. Reassessing the Value of Collocated Patient–Practitioner Encounters

10. CONCLUSION

1. INTRODUCTION

The medical consultation is a meeting between the patient and the medical practitioner that involves activities such as gathering of information about the patient’s health, assessment of his or her health condition, and treatment actions. It is an encounter that can be considered a classic example of what the space-time taxonomy of Ellis et al. (Ellis, Gibbs, & Rein, Citation1991) defines as “same place, same time” interaction between two stakeholders. Different health-care models, however, offer diverse perspectives on the medical consultation as a collaborative activity, and on the patient’s role in such a situation. The shift toward patient-centered care has encouraged practitioners to consider patients, and patients to consider themselves, active partakers in their own treatment, with individual needs and preferences (Bernabeo & Holmboe, Citation2013; Epstein & Street, Citation2011; Grenness, Hickson, Laplante-Lévesque, & Davidson, Citation2014). As such, patient-centered care contrasts with the traditional, paternalistic approach to health care, in which the practitioner has the dominant role as decision maker and the patient is understood as a passive, trusting, and compliant stakeholder (Bernabeo & Holmboe, Citation2013; Sandman & Munthe, Citation2010).

Active involvement of patients in their own health care has been associated with a number of positive outcomes including high patient satisfaction, adherence to treatment, and improved health (Stewart et al., Citation2000; Street, Gordon, Ward, Krupat, & Kravitz, Citation2005). However, constructive patient–practitioner collaboration in consultations relies intimately on the concept of intersubjectivity. Intersubjectivity can, in a broad sense, be understood as the establishment of a shared reference space, or “common ground” (Clark & Brennan, Citation1991), that allows partakers to negotiate and construct new understandings as they interact. In the context of care, intersubjectivity between the patient and the practitioner plays a key role in building a common understanding of the patient’s health problem and in the making of joint decisions about treatment actions—both aspects being hallmarks of person-centered care (Barry & Edgman-Levitan, Citation2012). However, achieving intersubjectivity in medical consultations can be challenging, as the partakers need to interact across conceptual constraints, or boundaries, that can thwart the negotiation process.

The absence of appropriate collaborative tools to support communication and understanding between the patient and the practitioner has been identified as one central reason why patients may find it difficult to become engaged in consultation situations (Kjeldsen & Matthews, Citation2008; Luff & Heath, Citation1998; Matthews & Heinemann, Citation2009). The question of how interactive technology can support patient–practitioner interactions across relevant boundaries of intersubjective space, and help facilitate collaboration in medical consultations, presents an intriguing challenge to human-computer interaction (HCI) and Computer-Supported Collaborative Work (CSCW) research. It is also a research topic that can potentially have a significant impact on medical consultation practice and on patients’ health and quality of life.

In this paper, we present a synthesis of a user-centered approach to designing interactive technology that aims to support patient–practitioner collaboration in the medical domain of audiology, or hearing health care. Specifically, we focus on the case of audiological consultations involving hearing aid tuning for patients with impaired hearing, and on the design and evaluation of a prototype sound environment simulator intended to help the patient play an active role in the process. By considering the audiological consultation as a problem of intersubjectivity, this paper aims to provide a qualitative understanding of potential obstacles to constructive patient–practitioner interactions in these situations and how interactive technology may help form communicative “bridges” that support intersubjectivity between the two partakers.

Our investigation calls attention to the multi-dimensional boundaries involved in collocated patient–practitioner interactions, and to the explorative and situated nature of the consultation situation as a collaborative problem-solving process. In this process, various micro-practices play a key role in gradually forming a better understanding of the problem at hand, and in identifying appropriate treatment steps. We argue that in such conditions, patient–practitioner collaboration can benefit from technology that is sufficiently flexible to swiftly accommodate, or be appropriated (Dix, Citation2007), to the changing needs of the consultation situation.

The main research contribution of this paper is an increased understanding of the medical consultation as an instance of collocated collaborative work, and the challenges and opportunities that lie in co-designing interactive solutions that can strengthen patient involvement.

The work presented here originates from previously published studies (Dahl & Hanssen, Citation2016; Dahl, Linander, & Hanssen, Citation2014; Hanssen & Dahl, Citation2016). The current article expands our previous work significantly by giving a more comprehensive account of the theoretical grounding for our work, by providing a richer empirical basis to support our arguments, and by offering an extensive data analysis and discussion of what we see as the implications of the results from our research.

The article is structured as follows. We continue in Section 2 by describing relevant background information about the case we address in this study. Next, in Section 3, we outline our perspective on the audiological consultation as a problem of intersubjectivity. Drawing on relevant research literature, Section 4 accounts for central boundaries at play in medical consultation situations, and how conventional computer technology in many cases may reinforce these boundaries. In Section 5, we present the user-centered research design of our study. Section 6 describes results from a preliminary field study, and gives an overview of various observed practices performed in audiological consultations to interact across relevant boundaries. Section 7 provides an overview of the results from a set of co-design workshops with relevant stakeholders, and the functional prototype sound environment simulator that was designed based on the workshop results. In Section 8, we describe a utility assessment of the prototype and account for observations as to how the prototype helped breach boundaries and support intersubjectivity. The same section also accounts for subjective responses from participants concerning perceived usefulness of the prototype. Section 9 discusses the capabilities of the prototype in terms of bridging communication and understanding between the patient and the practitioner and the key implication of our findings concerning how technology can support patient–practitioner collaboration and intersubjectivity between the two stakeholders. Finally, Section 10 concludes the article.

2. CASE STUDY BACKGROUND

As noted earlier, this paper focuses on a specific case, or instance, of medical consultations, i.e., audiological consultations involving hearing aid tuning for patients with hearing impairments. In the following, we provide a brief description of relevant background to the case, including effects of hearing loss, how hearing aids work and challenges related to hearing aid tuning.

2.1. Hearing Loss and Hearing Aids

Hearing loss is a common problem that often develops with age, but which may also result from sudden damage to any part of the ear. Hearing loss generally affects a person’s ability to communicate. In particular, it can reduce the ability to discriminate between combinations of sounds, including speech, and especially speech in combination with other sounds in a given environment. Untreated hearing loss has been associated with a range of emotional, physical, social, cognitive, and behavioral problems (Arlinger, Citation2003; Knutson & Lansing, Citation1990).

Hearing aids are the most common treatment option for a person with sensorineural hearing loss, i.e., reduced hearing that results from damage to sensory cells (hair cells) in the inner ear. A hearing aid is an electronic sound amplification device that can be attached in or behind the ear, and that has the potential to compensate for impaired hearing by amplifying specific segments of the sound spectrum and by applying other forms of advanced correction. While there are many different types of hearing aids, modern hearing aids consist of three basic electronic components: a microphone, an amplifier, and a loudspeaker (Elberling & Worsoe, Citation2006). The hearing aid receives sound waves through the microphone, which converts them to electrical signals, and transmits them to the amplifier. The amplifier increases the power of the signals and sends them, via a speaker, to the inner ear. The amplified sound is then detected by intact hair cells and converted into electrical signals, which are conveyed by the auditory nerve to the brain. The brain then interprets the signals as meaningful sound.
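To make the notion of frequency-selective amplification more concrete, the following minimal sketch (our own illustration in Python, not a description of any actual hearing aid’s signal processing; the band edges, gains, and test signal are assumed values) boosts only a chosen portion of the spectrum of a digitized sound:

import numpy as np

def apply_band_gains(signal, sample_rate, band_gains_db):
    """Amplify each frequency band of `signal` by the gain (in dB) listed in
    `band_gains_db`, a sequence of ((low_hz, high_hz), gain_db) pairs."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for (low_hz, high_hz), gain_db in band_gains_db:
        mask = (freqs >= low_hz) & (freqs < high_hz)
        spectrum[mask] *= 10 ** (gain_db / 20.0)  # convert dB gain to linear amplitude
    return np.fft.irfft(spectrum, n=len(signal))

# Illustrative use: boost the high-frequency range, where age-related loss is common.
sample_rate = 16_000
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
tone_mix = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 4000 * t)
boosted = apply_band_gains(tone_mix, sample_rate, [((2000, 8000), 20.0)])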

A hearing aid will offer only limited help to a person with sensorineural hearing loss. It can only amplify sound to stimulate remaining or partially functioning hair cells. This means that sounds that may be undesirable in a specific situation, such as background noise during a conversation, may also be amplified (and thus compromise speech recognition). Feedback, distortion, and altered quality and nature of sounds are also challenges that hearing aid users may face (Agnew, Citation1998). In order to learn how to focus and filter sounds, the human brain often needs time to adapt to the sound produced by a newly acquired or reconfigured hearing aid (R. L. Martin, Citation2004). Refitting and fine-tuning are also often required over time (Abrams, Edwards, Valentine, & Fitz, Citation2011).

Hearing aid technology has progressed significantly over the last few years, mainly due to the maturing of digital technology. Directional sound enhancement, digital speech enhancement, and digital noise reduction are examples of features that, together with increased processing speed, have made hearing aids more useful for many users (Elberling & Worsoe, Citation2006). Nevertheless, the uptake and use of hearing aids are relatively low compared with the number of people with hearing loss (McCormack & Fortnum, Citation2013). This situation has motivated research on various aspects of the hearing rehabilitation process, including patient–practitioner interaction in consultation situations (e.g., (Deppermann, Citation2012)), which forms the particular case addressed in this article.

2.2. Hearing Aid Tuning

The audiology consultation is a key step to receiving treatment for hearing loss. To optimize hearing aid benefit, a hearing aid must be properly shaped to a user’s ear and tuned to accommodate the individual user’s hearing characteristics, needs, and preferences. Hearing aid tuning is typically performed in a clinical environment by a person with audiological background (e.g., an audiologist), as the process requires an in-depth understanding of the human auditory system, including both mechanical phenomena and mental processes of hearing. Additionally, hearing aid tuning requires technical understanding of how a hearing aid works, and knowledge of the tools (software and hardware) used in the process.

The hearing aid tuning process generally involves a combination of clinical tests to measure the patient’s hearing capability (e.g., using an audiometer to produce an audiogram, which is a graph showing the audible thresholds at different frequencies), followed by a dialog between the patient and the practitioner. As part of the dialog the patient is typically given the opportunity to express his or her experience of a given hearing impairment, e.g., perception of sound, challenging listening situations, and personal hearing aid preferences (Humes, Citation1999; Kjeldsen & Matthews, Citation2008). The dialog also offers the practitioner the opportunity to ask follow-up questions, give explanations, and offer advice. As such, the hearing aid tuning process generally involves a combination of objective measurements (provided by assessment tools) and subjective information (provided by the patient). Figure 1 shows a schematic diagram of the hearing aid tuning process as described above.

FIGURE 1. Schematic diagram of the hearing aid tuning process.
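As a side note, the audiogram referred to above can be thought of simply as a mapping from test frequencies to hearing thresholds. The toy representation below is our own illustration; the threshold values and the coarse classification cut-offs are assumptions for the sake of the example, not data or definitions taken from the study:

# Hearing thresholds (dB HL) at standard audiometric frequencies (Hz); higher
# values indicate greater hearing loss at that frequency.
audiogram_right_ear = {
    250: 20,
    500: 25,
    1000: 30,
    2000: 45,
    4000: 60,   # illustrative high-frequency (sensorineural) loss pattern
    8000: 65,
}

def rough_degree_of_loss(thresholds):
    """Very coarse overall classification based on the average threshold.
    The cut-offs are illustrative only, not a clinical standard."""
    avg = sum(thresholds.values()) / len(thresholds)
    if avg <= 25:
        return "normal to slight"
    if avg <= 40:
        return "mild"
    if avg <= 55:
        return "moderate"
    return "moderately severe or worse"

print(rough_degree_of_loss(audiogram_right_ear))  # -> "moderate"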

Within audiology, as well as in other health-care domains, patient-centered care has contributed to increased awareness of the individual patient’s preferences, needs, and concerns (Öberg, Citation2008) and of how patient–practitioner interaction affects both stakeholders’ understanding of a given health problem and the effects of treatment (Heinemann, Matthews, & Raudaskoski, Citation2012; Matthews & Heinemann, Citation2009). The work presented in this article, as further elaborated below, is in many ways motivated by the same factors.

3. UNDERSTANDING THE AUDIOLOGICAL CONSULTATION AS A PROBLEM OF INTERSUBJECTIVITY

The increased awareness of the patient’s listening experiences and individually perceived challenges related to hearing aid use can in many ways be seen as a response to the problems of relying on objective, context-independent measurements to guide treatment in general and hearing aid tuning in particular. For example, patients who, according to their audiograms, have similar measured hearing loss characteristics may experience different degrees of benefit from the same hearing aid (Goff, Citation2013; Meddis, Lecluyse, Tan, Panda, & Ferry, Citation2010). Challenges such as these highlight that listening experiences and perceptions of sound are necessarily subjective and need to be understood from a phenomenological perspective. While hearing assessment tools can help the practitioner better understand certain aspects of a hearing impairment (e.g., the patient’s hearing thresholds), the practitioner cannot know from data derived from such devices how the patient perceives sound and listening experiences in everyday life, or in what situations or environments the patient typically experiences disability due to his or her hearing loss. In other words, the practitioner cannot experience (hear) the world as the patient experiences (hears) the world. In audiology, successful treatment therefore relies to a large extent on the patient being able to describe his or her hearing problem (Egbert & Matthews, Citation2012).

Taking a phenomenological perspective on a patient’s hearing and listening experiences calls into consideration the issue of how a patient and a practitioner can communicate and share knowledge on the matter, and potentially co-construct new and enriched understandings of the patient’s hearing problem and how to deal with it. Considering the consultation situation as a problem of interaction between the partakers (as opposed to one about forming an objective understanding of the patient’s condition) implicitly draws attention to the concept of intersubjectivity. Intersubjectivity has been a debated topic throughout the recent history of philosophy (Stahl, Citation2016) and is tied to the question of how one can enrich one’s self-understanding with the understanding of others through communicative practice, including verbal and non-verbal acts of communication—in other words, how individuals can build new understandings of a phenomenon by combining different perspectives. With respect to audiological consultations, then, the notion of intersubjectivity draws particular attention to two central questions: (1) how the practitioner can come to understand the patient’s hearing problem better through the patient’s narratives of relevant subjective experiences, and thereby provide better treatment; and (2) how the patient, guided by the medical expertise of the practitioner, can come to understand his or her own hearing problem and treatment prospects better (including limitations of treatment), and thus become empowered to make informed decisions related to his or her own health and well-being.

It should be noted that intersubjectivity is open to ambiguous interpretation (Stahl, Citation2016). With respect to audiological consultations (and other medical consultations) it allows patient–practitioner interaction to be conceptualized in different ways. From one perspective, intersubjectivity can be viewed as the problem of how two stakeholders—in our case the patient and the practitioner—coordinate their actions and understanding of each other as they work together from their individual cognitive positions. According to such a view, the two stakeholders can be considered to cooperate through a division of the tasks required to solve the problem. For example, the practitioner can be viewed as having resources such as audiological knowledge and the tools required for adjusting the hearing aids that the patient does not have, and the patient can provide a subjective assessment of the hearing aid adjustments made by the practitioner. From such a perspective, intersubjectivity can be regarded as coordination of contributions in a joint activity. The joint activity (i.e., the consultation) provides the conditions and possibly support for the development of new understandings, but is not an intrinsic part of such achievements—the development of new understandings is essentially a process within the individual mind.

From an alternative, and more social, view, intersubjectivity can be understood as two (or more) subjects collaborating in a single, shared cognitive process. This joint cognition goes beyond, unites, or even serves as a basis for the cognition of the participating individuals. Accordingly, consultation activities such as hearing aid tuning can be understood as a single accomplishment where the patient and the practitioner work together (collaborate) at each step, as opposed to performing distinct actions. Which of the stakeholders provides what resource is of little significance. The resources—i.e., audiological instruments, the practitioner’s medical knowledge, and the patient’s experiences and narratives from living with hearing loss—obtain their meaning from the joint process as it unfolds.

Clark and Brennan (Citation1991) used the metaphor of common ground to describe how stakeholders involved in a collaborative process continuously attempt to construct and maintain a mutual understanding through communicative practices or contributions. In Clark’s contribution theory (Op. cit.), intersubjectivity corresponds to “overlap of understandings” between stakeholders in a collaborative activity. Other social views on the concept of intersubjectivity highlight that shared understandings between stakeholders in collaborative activities can only be partially attained and largely depend on a common pre-understanding between stakeholders, i.e., what is taken for granted or presupposed (Rommetveit, Citation1979). Yet other social perspectives on the concept of intersubjectivity hold that the development of new understandings is not only accomplished through the interactions and information sharing between participants. According to, for example, Koschmann et al. (Citation2005), the interactions between collaborating stakeholders form an intrinsic part of the development of new understandings, i.e., the interactions themselves shape the understandings.

While intersubjectivity can be studied both from an individual focus and from a social focus, the two perspectives are, as argued by Stahl (Citation2016), fundamentally connected—the individual is a social product (i.e., affected by the actions, behaviors, and opinions of others), but intersubjectivity also has the individual “at its poles”. This duality has motivated us to investigate both how interactive technology may shape observed patient–practitioner interaction, and how the technology is perceived by the patient on the one hand and the practitioner on the other.

4. INTERSUBJECTIVE BOUNDARIES IN CONSULTATION SITUATIONS

Having presented a view of the audiological consultation situation as a problem of intersubjectivity, we now turn our attention to communicative challenges, or boundaries, that may impede collocated patient–practitioner collaboration and the process of negotiating shared understandings in such encounters and in medical consultations in general. Similar to Star and Griesemer’s use of the term boundary in the theory of Boundary Objects (Star & Griesemer, Citation1989), we use the term here to refer to factors that do not necessarily block communication and understanding between stakeholders (as a barrier would), but rather something that to varying degrees can be negotiated and ventured across. However, while Star and Griesemer used the term boundary to denote the distinction between different social worlds, or communities of practice, we use the term here to refer to various dimensions that two or more subjects need to interact across in order to form an intersubjective understanding of a phenomenon – in this case a patient’s hearing ability and experience of sound.

Drawing on existing research literature, we describe below five such boundaries. We also provide examples of how each of the boundaries may manifest itself in practice.

Our description of relevant boundaries is not intended to serve as a complete or extensive account of aspects that may influence patient–practitioner interactions in consultation situations, but to illustrate the different dimensions of collaborative challenges in such a setting. Most of these boundaries can be considered to influence patient–practitioner interactions in a direct manner (forming a metaphorical interface between the two stakeholders), while some influence collaboration more indirectly (acting as a metaphorical interface between the clinical environment in which the consultation takes place—typically the practitioner’s office—and the patient’s daily living environments).

4.1. Five Boundaries

The Knowledge Boundary

The knowledge boundary, in this context, refers to the differences in the patient’s (lay) and the practitioner’s (expert) understanding, or mental models, of a health issue (for example an acquired hearing impairment). The concepts of the medical world and the lifeworld have been used to distinguish the two types of knowledge (Lauritzen & Hyden, Citation2007). Knowledge within the medical world generally corresponds to insights that have been scientifically derived, and which have been made explicit, i.e., formalized or codified. A medical practitioner typically acquires this form of knowledge through his or her formal medical education, and later develops it through practice. It provides the practitioner with a scientific understanding of the human body, how the body is affected by illness and injury, and how illness and injury can be treated.

The patient’s understanding of a health condition, on the other hand, is to a large extent rooted in his or her lived, everyday experience, i.e., his or her lifeworld (Lauritzen & Hyden, Citation2007). In contrast to medical knowledge, lifeworld knowledge is subjective, informal, and contextualized in nature. Thus, the lifeworld perspective may explain why patients, who from a medical perspective suffer from similar health conditions (e.g., a similar type of hearing loss) and who are provided the same treatment (e.g., identical hearing aids), may experience different levels of benefit from treatment.

The knowledge boundary, and particularly what Deppermann (Citation2012) identifies as the “asymmetries in professional knowledge” between the practitioner and the patient, is a central source of problems in patient–practitioner communication in consultation situations. To make use of his or her medical knowledge in a consultation situation, the practitioner is required to structure the interaction (e.g., perform examinations, ask questions, make diagnoses, and suggest treatment). Without the same type of knowledge, the patient will often not understand the motivation for the structure that the practitioner imposes. The medical relevance of examinations, questions asked, and information given by the practitioner therefore risks remaining opaque to the patient. Consequently, the patient may fail to understand why specific examinations take place, and how diagnoses are concluded and decisions regarding treatment are taken (Deppermann, Citation2012).

The lack of medical knowledge may also make it challenging for the patient to know what aspects of his or her lifeworld knowledge are relevant to the practitioner (e.g., in the context of setting a diagnosis) unless specifically asked.

The Language Boundary (The “Voice” Boundary)

Several studies (e.g., (Gilligan & Weinstein, Citation2014; Grenness, Hickson, Laplante-Lévesque, Meyer, & Davidson, Citation2015)) have identified the verbal communication and the different languages used by the patient and practitioner as a key barrier to forming a shared understanding in a medical encounter. Language, in this context, refers primarily to the jargon and terminology used in dialog. The language employed by the patient and the practitioner often reflects their distinct knowledge worlds. Mishler (Citation1984) distinguished between the Voice of medicine, which refers to context-independent and domain-specific jargon of practitioners, and the Voice of the lifeworld, which denotes the contextualized and lay language of patients. The Voice of medicine, or the medical “language”, of the practitioner is acquired during medical training as his or her medical knowledge is developed. This language provides the practitioner with a consistent terminology for describing the human body and its functions and how different concepts relate.

Kjeldsen and Matthews (Citation2008) provide an illustrative example of how audiology-specific terminology can reinforce the language boundary between the practitioner and the patient. The authors point to the problem of the practitioner using terms such as “decibel” and “frequency” when explaining hearing loss to patients, which to many is an unfamiliar way of describing sound. Most people tend instead to describe and distinguish sounds by their sources.

However, the language boundary between a patient and a practitioner is not unidirectional. For the practitioner, the language boundary implies that he or she needs to “translate” the patient’s narrative of a health issue (the Voice of the lifeworld) in order to understand what it means in medical terms. This task may be further complicated by problems patients may face in articulating their lifeworld experiences. Audiological patients can, for example, have difficulties putting their listening experiences into words (Jenstad, Van Tasell, & Ewert, Citation2003; Kjeldsen & Matthews, Citation2008). Moreover, the terms individual patients use to describe their perception of sound (e.g., a “sharp”, “loud” or “high” sound) may not refer to the same phenomenon, and the underlying problems can be diverse (Heinemann et al., Citation2012). The patient’s choice of vocabulary when describing a problem has also been found to be a determining factor for the resulting diagnosis and treatment (Op. cit.). Aspects such as those described above may further complicate the translation of the patient’s verbal reports into new hearing aid configurations.

The Time and Place Boundary

Another type of boundary that may affect patient–practitioner interactions in consultation situations relates to the differences between the environment in which the consultation takes place (e.g., a clinical office environment) and the everyday living environments in which the patient’s health problem and the effects of treatment are experienced. For treatment of health conditions such as hearing loss, the decontextualized setting in which the health problem is addressed and treatment is identified can present a problem. In particular, it can increase the difficulties a patient may have with respect to describing how a given health problem is experienced and how the effects of treatment are perceived (Goff, Citation2013; Jenstad et al., Citation2003). Perceived lack of hearing aid benefit can thus be associated with the limited ecological validity of the clinical environment in which the problem is addressed (Cord, Baskent, Kalluri, & Moore, Citation2007; Jerger, Citation2009).

The Physical Boundary

Physical aspects of the consultation environment can also affect patient–practitioner interaction and act as a boundary. The form factor and placement of objects in the environment, such as furniture and tools, have been found to affect the partakers’ position and orientation vis-à-vis each other, as well as their attention, which in turn may reduce the patient’s possibilities for active participation in the encounter (Chen, Ngo, Harrison, & Duong, Citation2011; Dahl & Svanæs, Citation2008; Matthews & Heinemann, Citation2009). Non-verbal communicative behavior such as eye contact may also improve the listening abilities of the medical practitioner and may enhance attentiveness toward patients’ emotional cues (Roter & Hall, Citation2006, p. 123). Studies of, for example, optometric consultations have revealed that the bodily comportment of a practitioner can shape and determine the quality of the patient’s response and participation (Webb, Heath, Vom Lehn, & Gibson, Citation2013).

The Normative Boundary

Finally, patient-centered care and active involvement of patients in consultations imply that traditional normative boundaries imposed by paternalistic care models and the “asymmetries of power” (Deppermann, Citation2012) between the patient and the practitioner need to be reduced or eliminated. Such asymmetries carry with them assumptions speakers make about what recipients know or need to know (G. Martin, Citation2014, p. 495). These boundaries reinforce traditional patient–practitioner relationships where the patient is the passive, uninformed, and subordinate partner, and the practitioner is the authority, expert, and decision-maker. Reducing normative boundaries requires a transformation of the role the practitioner plays from one that is characterized by authority to one that has the objectives of partnership, empathy, and collaboration (English, Citation2005).

4.2. The Boundary-Reinforcing Effect of Technology

Information and communication technology is increasingly used as part of health care, e.g., to record patient information, diagnose health conditions, and assess effects of treatment. However, various studies suggest that conventional hardware and software solutions have shortcomings when it comes to supporting patient–practitioner interactions in collocated medical encounters. In many cases, as further described below, conventional computerized tools may even reinforce boundaries between the patient and the practitioner.

Matthews and Heinemann (Citation2009) pointed out that software tools commonly used by audiologists when treating patients, e.g., hearing aid programming tools, are typically designed exclusively for the practitioner. The domain-specific and technical terminology reflected in these tools generally makes them inappropriate as explanatory aids, and risks making the audiologist’s actions opaque to the patient. Being designed specifically for persons with audiological training, these tools may be of limited use when it comes to bridging, for example, the language boundary.

Conventional computer hardware tools may also introduce or reinforce physical boundaries. Form factor issues and challenges related to, e.g., concurrent sharing of screen information have been found to represent a barrier for patients who wish to follow the course of treatment in consultation situations (Matthews & Heinemann, Citation2009). The obtrusive effect that ergonomic and physical aspects of design solutions can have on collocated patient–practitioner interactions has also been reported in several other HCI studies (e.g., (Alsos, Das, & Svanæs, Citation2012; Alsos & Svanæs, Citation2006; Chen et al., Citation2011; Dahl & Svanæs, Citation2008; Luff & Heath, Citation1998)). These studies show that patient–practitioner interaction can be affected by, for example, placement of technology, screen size and orientation, accessibility, portability, and supported interaction styles.

Being designed primarily for the practitioner, computerized technology used in medical consultations may also be considered to reinforce the normative boundary. In particular, this design bias may add to the traditional perception of the practitioner as the “expert” leading the process, and the patient as the “uninformed” and passive actor.

5. RESEARCH DESIGN

This article presents results derived from two years of user-centered studies on how interactive technology can support patient–practitioner collaboration in audiological consultations. In the subsequent three sections of this paper, we describe research activities and results from each of the three major stages of the user-centered design cycle, i.e., the analysis, design, and evaluation phases. The analysis phase mainly consisted of a preliminary field study. In the design phase we conducted a set of co-design workshops with hearing aid users and audiologists, in which the participants built low-fidelity mock-ups of how they envisioned future patient–practitioner collaborative technology. Based on the findings from the workshops, a functional prototype sound simulator was implemented. Lastly, in the evaluation phase, which helped form the main empirical basis for the current work, we assessed the effects of the prototype on patient–practitioner interaction in a set of experimental consultations involving hearing aid tuning, and collected feedback about its user-perceived value.

Details concerning the user-centered methods we have applied are provided in the respective sections describing each phase.

6. PRELIMINARY FIELD STUDY

To form an initial first-hand empirical understanding of patient–practitioner interaction in audiological consultations, and especially of practices performed to construct an intersubjective understanding of the patient’s hearing problem, we conducted a preliminary field study in a hearing clinic. We observed a total of seven audiological consultations (seven patients distributed across two different audiologists). All the consultations were follow-up controls of patients who had previously been prescribed hearing aids. The study was conducted over three days.

The field study consisted of direct, passive observation of consultations involving hearing aid tuning. We also conducted unstructured interviews with the audiologists between the consultations. The consultations and interviews were audio recorded, and field notes of observations were taken.

The consultations took place in a clinical office environment with the patient and the audiologist sitting face-to-face on opposite sides of an office desk, or with the patient seated more diagonally vis-à-vis the audiologist (Figure 2). Throughout the tuning process the audiologist used a standard desktop computer (facing the audiologist) with software for programming hearing aids.

FIGURE 2. Patient (left) and practitioner (right) in audiological consultation.

6.1. The Consultation Process

At an overall level the consultations we observed involved the following steps: (1) an audiological sound test providing the practitioner with a digital audiogram describing the patient’s hearing loss, (2) a dialog about the test results and the patient’s general experiences from hearing aid use, (3) real-time configuration (tuning) of the patient’s hearing aids, and (4) a concluding dialog summing up the treatment actions taken.

6.2. Observed Intersubjective Micro-Practices

Throughout the tuning process we observed that the audiologist typically employed a set of practices or techniques to form a better understanding of the patient's hearing and listening experience, but also to help convey to the patient certain aspects of his or her hearing loss. These practices were typically performed on-the-fly, requiring little or no preparation, thus helping to reduce immediate communication difficulties in a simple and time-efficient manner. To reflect the temporality of these goal-directed actions, we refer to them as intersubjective micro-practices.

Use of Simple Sound References

During the tuning of a patient’s hearing aids, we observed that the audiologist typically used her own voice to provide the patient with a concrete reference when assessing new hearing aid settings. The patient’s verbal feedback on the listening experience was then used (in combination with information from the audiogram) to determine if and how to further tune the hearing aid. Transcript excerpt 1 provides a typical example of the dialog between the audiologist (Au) and the patient (Pt) during the above procedure.

Transcript excerpt 1:

01 ((The audiologist changes the hearing aid settings from the PC and turns toward the patient))

02 Au: Did that change the sound of my voice in any way?

03 Pt: Yes, you [your voice] became a little weaker, I believe.

04 Au: Did I [my voice] become weaker?

05 ((The audiologist turns toward her PC screen))

06 Au: Let’s see. We can fix that, you know. Let’s turn you up a little.

07 ((The audiologist changes the hearing aid settings))

08 Au: I wonder if I shall leave out the sharper [sound] areas when I turn you up. Now, I’m turning you up a couple of clicks.

09 ((The audiologist changes the hearing aid settings, and turns toward the patient))

10 Au: Did I [my voice] become stronger now, or am I still weak?

11 Pt: No, you [your voice] became stronger.

12 Au: Yes? We have to do a little bit of trial and failure here inside [the clinic], where it is so quiet—where you can only sit and hear my voice [the audiologist chuckles].

On two occasions, we see how the audiologist uses her voice, in combination with a specific question related to the listening experience, to evoke feedback from the patient (lines 02 and 10). On both occasions, the patient provides a relatively short reply to the audiologist’s question indicating how the loudness of the audiologist’s voice is perceived (lines 03 and 11).

Interestingly, the transcript also shows that the audiologist ends the tuning process by informing the patient about the potential shortcomings of the approach (line 12), i.e., that the tuning process takes place in an environment that does not represent a common listening environment for the patient. We observed that information about the limitations of the tuning process, and also the possibility that the patient would need to revisit the clinic for further hearing aid tuning, was routinely given to patients.

We found that the level of detail the patients provided about their hearing experience varied from patient to patient. A general observation was that the extensiveness of the tuning process (i.e., the number of different configurations tried out and the related turn-taking between the audiologist (requesting feedback) and the patient (providing feedback)) was relatively limited. To a large extent the extensiveness of the process depended on the patient being able to articulate his or her hearing and listening experience in such a way that it gave the audiologist a clue as to how to configure the hearing aids (e.g., line 03).

From interviews with the audiologists we learned that feedback from the patients regarding their listening experiences during the tuning process played a crucial role in maximizing hearing aid benefit. Patients who were unable to provide, or had difficulties providing, such feedback were generally perceived as more challenging to treat, and often required multiple revisits to the clinic in order to get their hearing aids properly tuned.

Other micro-practices used to provide the patient with sound references during the tuning process, and thereby promote feedback, included clinking two teaspoons against the inside of a cup. This was typically performed to verify with the patient that the hearing aid’s automatic noise reduction functionality worked adequately.

We also learned of other similar simple techniques the audiologists used to generate sounds at specific frequencies, e.g., knocking or tapping against a tabletop or rustling of paper.

Recreation of Specific Conditions

Other practices employed by the audiologist to better understand hearing problems reported by the patient involved spontaneous attempts to recreate specific conditions. For example, in one of the consultations we observed, the patient told the practitioner that the hearing aid had produced a whining sound when she had put on the hood of her raincoat while walking to the hearing clinic. To investigate the reported problem, the audiologist encouraged the patient to put on her raincoat and hood again while in the clinic, so as to recreate the conditions the patient described.

Use of Demonstration

Some of the practices we observed, in addition to informing the audiologist, served specific pedagogic purposes. For example, to provide the patient an awareness of how much he or she relied on lip reading to compensate for suboptimal hearing aid settings, the audiologist would put her hand in front of her mouth while talking to the patient, thus removing the patient’s possibility to lip read. In this way, the audiologist was able to demonstrate to the patient the need for reconfiguring the hearing aid.

6.3. Limitations of Observed Practices

The practices described above reflect in many ways audiologists’ creative use of simple techniques and tools to either form a better understanding of the patient's hearing ability and experience of sound, or to help the patient become aware of certain aspects of his or her hearing impairment or hearing aid. However, our observations also highlight that there are shortcomings associated with these routine micro-practices when it comes to venturing across the boundaries of intersubjective space, and building shared understandings of the patient’s hearing problem.

The limited availability of sound stimuli, especially combinations of sounds from different sources, narrowed the possibilities for the patient to assess new hearing aid settings and provide rich feedback to the audiologist about the listening experience. Consequently, the audiologist was provided with limited cues as to how to optimize the patient’s hearing aid benefit, which in turn reduced the number of iterations in which new configurations were tried out.

7. DESIGNING THE SOLUTION

7.1. Co-design Workshops with Hearing Aid Users and Audiologists

To come up with potential ideas and concepts as to how technology can support communication and understanding between the patient and the audiologist in hearing aid tuning processes, we conducted a set of three co-design workshops with representatives from both stakeholder groups. In each workshop, two audiologists and two hearing aid users worked together in pairs to build mock-ups representing their visions of future solutions. Each pair consisted of one hearing aid user and one audiologist. As part of the workshops, the mock-up solutions were presented and discussed among the participants. Below, we provide a brief summary of the key design considerations and ideas emerging from the workshops (an extensive description of the workshops and the results produced is given in an earlier article (Dahl et al., Citation2014)).

Re-contextualizing the Hearing Problem

One central problem addressed in the workshops was related to the time and place boundary affecting patient–practitioner collaboration during the tuning process. To help overcome the perceived problem that the tuning process took place in a “de-contextualized” setting (i.e., at the hearing clinic), the participants suggested “re-contextualizing” the hearing problem by means of a sound environment simulator. By allowing calibrated sound recordings from relevant listening environments to be presented to the patients during hearing aid tuning, the participants envisioned that patients could be provided with richer sound references against which to assess changes made to the hearing aids during the tuning process. As part of the workshops the participants built mock-ups of touch-based user interfaces for interactive tabletops, from which playbacks of various sound environments could be controlled (Figure 3). Our decision to focus on tabletop user interfaces in the design workshops was based on the potential collaborative benefits associated with large interactive displays. These benefits include, for example, shared responsibility and participation (Rogers & Lindley, Citation2004), different ways of using tabletop territoriality in collaboration (Scott, Carpendale, & Inkpen, Citation2004), and the possibility to speak while simultaneously using a tabletop interface to visually show suggestions to bystanders gathered around the tabletop (Fleck et al., Citation2009).

FIGURE 3. Hearing aid users and audiologists co-designing the tabletop user interface.

Individual Tailoring

The participants highlighted the added value of being able to customize simulated listening environments to recreate certain situations or aspects relevant to the individual patient (i.e., reflecting the patient’s lifeworld). It was suggested that individual tailoring could be achieved by allowing users of the simulator to add specific listening and noise sources to the playback of the different listening environments (Figure 4).

FIGURE 4. Hearing aid user demonstrating how a noise source potentially can be added to the playback of a specific listening environment.

Shared Control

To accommodate different patients and individual preferences and desire for user control, the participants emphasized the need for flexibility with respect to controlling the simulator user interface. Making it possible to control the simulator from opposite sides of the tabletop was considered an essential feature by several of the participants.

7.2. From Mock-Ups to Functional Prototype Simulator

Based on the ideas and considerations raised among the workshop participants, we designed a functional prototype sound simulator system. The prototype consisted of the following main components: (1) professionally calibrated sound recordings from eight different everyday listening environments; (2) a tabletop user interface with draggable images representing each listening environment and controls for starting, pausing, and customizing the associated playback (Figure 5); and (3) a 5.1 surround-speaker system allowing for playback of the listening environments during the hearing aid tuning process.

FIGURE 5. Implemented tabletop user interface.

The selection of simulated listening environments included in the prototype was based on suggestions from participants in the co-design workshops and generally reflected environments that patients, according to the participating audiologists, often describe as challenging. Examples of listening environments made accessible via the prototype were a car interior during driving, a canteen with people chattering, a bus stop with passing traffic, and a kitchen during dishwashing. Each listening environment was typically dominated by either high-frequency (treble) or low-frequency (bass) sounds.

To accommodate the need for individual tailoring of the playback of each listening environment, the prototype allowed users to add (via push-buttons) up to three extra listening and noise sound sources. The listening sources were either a male (low-frequency speech) or a female (high-frequency speech) storyteller, which made it possible to simulate speech-in-noise for patients with hearing impairments at either end of the hearing spectrum. The listening sources could be played through the center-front speaker of the simulator.

The noise sources that could be added to a specific listening environment were elements typical for that particular listening environment. This allowed for the simulation of more complex sound environments. For example, the playback of the canteen environment could be expanded by adding the sound of table setting.
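To illustrate how such content could be organized, the sketch below (our own assumption about a possible data model, written in Python; the class names, file names, and example sources are hypothetical and not taken from the actual prototype implementation) represents a listening environment with a base recording plus up to three optional listening/noise sources that can be toggled during playback:

from dataclasses import dataclass, field
from typing import ClassVar, List

@dataclass
class SoundSource:
    name: str          # e.g., "female storyteller" or "table setting"
    audio_file: str    # path to a calibrated recording (hypothetical file name)
    kind: str          # "listening" (speech) or "noise"

@dataclass
class ListeningEnvironment:
    name: str
    base_recording: str
    optional_sources: List[SoundSource] = field(default_factory=list)
    MAX_EXTRA_SOURCES: ClassVar[int] = 3   # the prototype allowed up to three added sources

    def active_playback(self, selected_names: List[str]) -> List[str]:
        """Return the audio files to mix: the base recording plus the selected
        extra sources, capped at the maximum the prototype supported."""
        chosen = [s.audio_file for s in self.optional_sources
                  if s.name in selected_names][:self.MAX_EXTRA_SOURCES]
        return [self.base_recording] + chosen

canteen = ListeningEnvironment(
    name="Canteen",
    base_recording="canteen_chatter.wav",
    optional_sources=[
        SoundSource("female storyteller", "storyteller_female.wav", "listening"),
        SoundSource("male storyteller", "storyteller_male.wav", "listening"),
        SoundSource("table setting", "table_setting.wav", "noise"),
    ],
)
print(canteen.active_playback(["female storyteller", "table setting"]))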

8. UTILITY ASSESSMENT OF THE PROTOTYPE SIMULATOR

8.1. Experimental Setup

To investigate how the prototype simulator shaped patient–practitioner interaction in consultation situations, we studied its role in twelve experimental consultations involving hearing aid tuning. The evaluation was performed in collaboration with a private audiology clinic.

Participants

A sample of 12 patients (5 male and 7 female, age: 33–76 years, median age: 58) scheduled for follow-up controls was recruited for the evaluation. All the patients were experienced hearing aid users with more than two years of experience. Two patients had been hearing aid users since their early childhood. The degree of hearing loss and the nature of the patients’ hearing impairments varied.

Three audiologists (all female, aged 32–49 years), with professional work experience ranging from 6 to 7 years, were also recruited for the evaluation.

Location and Equipment

The experimental consultations were conducted in a laboratory setting set up to resemble a genuine consultation office. The decision to conduct the evaluation of the prototype in a laboratory setting was primarily due to the challenges of installing the prototype and performing the evaluation in an operational clinic. To make sure that the setting reflected genuine work environments, we invited audiologists to help set up furniture and relevant equipment in the laboratory. In addition to the hardware components of the prototype, the laboratory was equipped with a laptop computer providing access to patient records and programming software for hearing aids. The laptop computer was placed on a small work desk adjacent to the interactive tabletop.

Other types of equipment used as part of the experimental consultations included standard audiological hardware equipment providing a Bluetooth-based wireless interface between the fitting computer and the patient’s hearing aids.

Procedure

We conducted the experimental consultations over a period of three days. A different audiologist participated each day. Below, we summarize the three-step procedure for the experimental consultations:

Preparatory briefings

At the beginning of each day, before the consultations commenced, the participating audiologist was informed about the overall motivation behind the evaluation and its general procedure. The audiologist was also given the opportunity to try out the prototype in order to become familiar with how it worked before receiving the patients.

Before each consultation commenced, we also explained the motivation behind the experimental consultation and the procedure to the patient. We informed the patient and the audiologist that they were free to decide when, how, and the extent to which they wanted to use the prototype during the hearing aid tuning process.

Consultation

For each consultation, one or two researchers were present as “fly-on-the-wall” observers. The duration of each consultation was 40–60 min.

Concluding interview

To gather feedback about the perceived usefulness of the prototype, we conducted a semi-structured interview with the patient and the audiologist after the consultation had been concluded. This also gave us the opportunity to ask about specific events we observed during the consultations.

8.2. Data Gathering and Analysis

The experimental consultations and the post-consultation interviews were video and audio recorded using a ceiling-mounted GoPro camera. The recorded video material from the consultations was inspected closely and repeatedly to identify the different purposes the prototype served in the consultations, and to form a better understanding of how usage affected patient–practitioner interaction. Observer notes taken during the consultations were used as an initial guide when inspecting the videos for relevant events. In later inspections, we employed a more open search strategy, which involved looking for incidents not registered in the observer notes taken during the experimental consultations. The identified relevant events were then transcribed.

To analyze the transcripts of the simulator-supported talk-in-interaction, we followed an analytic strategy similar to that proposed by ten Have (Citation2007, pp. 124–126) and worked through the text in terms of a restricted set of central conversation mechanisms, or organizations: (1) turn-taking organization, i.e., the sets of practices speakers use to construct and allocate turns in conversation; (2) sequence organization, i.e., how the interactional talk is ordered and combined to make actions (requests, advice, suggestions, etc.) take place in conversation; (3) organization of turn-design, i.e., how a speaker chooses to form utterances (the packaging of actions), for example, to fit a particular recipient or incite a certain response; and (4) repair organization, i.e., ways of dealing with various challenges arising in the course of the interaction, such as misunderstandings and communication breakdowns.

As we worked through the transcripts, descriptive codes were added to text segments, summarizing what had been observed with regard to the role of the prototype. These consisted of a primary keyword (e.g., “Demonstrator”), a short description capturing the essence of what was taking place (e.g., “Audiologist uses the car cabin environment to demonstrate to the patient that his hearing aid needs tuning”), and a note about which of the relevant organizations the given segment could be linked to (e.g., “Organization of turn-design”). Next, the descriptive codes were reviewed for consistency. This involved checking that the codes were used in the same way across the different text segments, and combining codes (using the most descriptive term) where different codes had been used to describe similar phenomena. Finally, the codes were grouped into thematically relevant categories and labeled. The resulting categories are described in Section 8.3.

The post-consultation interviews were transcribed in their entirety. The transcribed text was then examined to identify segments describing the participants’ perceived usefulness of the prototype. Similar to how we coded the consultation dialogs, we attached a primary keyword and a short descriptive text to each relevant fragment. We added cross-references to transcript segments from the talk-in-interaction to link a participant’s perception to concrete episodes of use. The codes were then reviewed for consistency and grouped. The results from the analysis of the interviews are presented in Section 8.4.
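To make the coding scheme concrete, the following minimal sketch (in Python, purely illustrative; the class name, field names, and the example entry are hypothetical and merely mirror the scheme described above) shows how a descriptive code for a transcript or interview segment could be recorded and how reviewed codes could be grouped into thematic categories.

from dataclasses import dataclass
from collections import defaultdict

# Illustrative sketch only: names and the example entry are hypothetical,
# mirroring the coding scheme described in the text above.

@dataclass
class SegmentCode:
    keyword: str       # primary keyword, e.g., "Demonstrator"
    description: str   # short summary of what takes place in the segment
    organization: str  # conversation organization the segment is linked to

codes = [
    SegmentCode(
        keyword="Demonstrator",
        description="Audiologist uses the car cabin environment to demonstrate "
                    "to the patient that his hearing aid needs tuning",
        organization="Organization of turn-design",
    ),
    # ...one entry per relevant transcript or interview segment
]

# After reviewing the codes for consistency (merging codes that describe
# similar phenomena), group them into thematically relevant categories.
categories = defaultdict(list)
for code in codes:
    categories[code.keyword].append(code)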

8.3. Observed Effects on Patient–Practitioner Interactions

In the following, we describe key observations from the experimental consultations, detailing how the prototype affected patient–practitioner interaction, verbally and non-verbally, and in many cases contributed to forming a shared understanding between the two. The observations are grouped into three sections, each describing findings related to a specific feature of the prototype, i.e., the simulated listening environments, the visual representations of the environments, and user control. We also briefly describe incidents in which the prototype failed to support intersubjectivity due to aspects of its implementation. Lastly, we account for what participants expressed regarding the subjectively perceived value of using the prototype, with the aim of complementing our observations of the prototype in use.

Figure 6 shows use of the prototype during one of the experimental consultations.

FIGURE 6. Audiologist (left) and patient (right) using the prototype simulator during hearing aid tuning.


The Role of the Simulated Listening Environments

Iterative tuning (“listen-report-adjust” cycles)

A central finding with regard to the effect of the prototype on the patient–practitioner interaction was that it appeared to open up a more iterative and patient-driven tuning process than what was the case for the consultations we observed in the field (see Transcript excerpt 1). As Transcript excerpt 2 below illustrates, the use of the simulated listening environments in the tuning process affected both turn-taking in the ongoing patient–practitioner conversation, i.e., how the two partakers take turns speaking during the tuning process, and sequence organization in the conversation, i.e., how one utterance or action leads to another. We enter the consultation as the patient has identified a listening environment (the car interior environment) he finds particularly challenging.

Transcript excerpt 2:

01 ((The patient starts the playback of car interior environment, with the voice of the female storyteller set as listening source. The patient listens to the playback for four seconds))

02 Pt: The sound of the car is terribly sharp.

03 ((The audiologist adjusts the hearing aid))

04 ((The patient listens to the playback))

05 Pt: Yes, now it became a little less audible, and the woman’s voice became clearer.

06 ((The audiologist adjusts the hearing aid))

07 Au: Did the woman’s voice become clearer now?

08 ((The patient listens to the playback))

09 Pt: Yes, she became a little bit clearer and more distinct.

10 Au: And the noise from the car is still OK?

11 Pt: Yes, because it…have you lowered it?

12 Au: Yes, I lowered it a little bit earlier.

13 Pt: OK.

14 Au: If you turn on the man’s voice… Let’s check how it turns out.

15 ((The patient turns off the female storyteller and turns on the male storyteller using the controls in the tabletop user interface))

16 Au: Can you hear him well?

17 ((The patient listens to the playback))

18 Pt: I can hear the man’s voice well, but I think the sound of the car became a bit more prominent…but it isn’t annoying, as long as I can hear what he says.

19 ((The audiologist adjusts the hearing aid))

20 Pt: Oh, what did you do now? Did you remove the car [sound]?

21 Au: No, I just turned it [the hearing aid] down a click.

22 Pt: Yes, that was lovely. This is the way I want it.

As we can see from the transcript excerpt, the prototype simulator appeared to encourage dialog related to the listening experience and more experimentation during the tuning process. With respect to the sequence organization of the conversation, we found that the availability of richer sound references typically promoted a sequence-like structure consisting of repetitive listen-report-adjust cycles—the patient would listen to the playback of a selected listening environment for a short period. Next, the patient would report his or her personal account of the listening experience to the audiologist (often with the playback still running). The audiologist would then use the patient’s verbal report as a basis for reconfiguring the hearing aid settings, before giving the patient the opportunity to assess the new configuration in a similar manner. On some occasions, the invitation to assess the new hearing aid settings took the form of an explicit follow-up question from the audiologist, as in line 07 (“Did the woman’s voice become clearer now?”). On other occasions (e.g., lines 02–05), patients gave feedback spontaneously, taking the silence of the audiologist as an implicit invitation to talk.

The listen-report-adjust cycle described above was typically repeated multiple times during the tuning process, using different listening environments as references, until the patient and the audiologist agreed that a satisfactory result had been achieved. For example, in lines 01–13, we can find evidence of three listen-report-adjust cycles (lines 01–03, lines 04–07, lines 08–13). The pattern is temporarily suspended in lines 14–16, as the audiologist asks the patient to “turn on the man’s voice” and report whether he can hear it well, but resumes again in lines 17–19. As we can see from lines 20–22, the pattern stops as the patient identifies and notifies the audiologist of a potentially suitable hearing aid setting, and later (line 22) states explicitly that this setting is satisfactory.

Transcript excerpt 2 provides an example of how use of the simulated listening environments stimulated a process more strongly driven by the patient’s continuous feedback than was the case for the consultations we observed in the field. The excerpt also illustrates how multiple iterative listen-report-adjust cycles, typically consisting of increasingly fine-grained hearing aid adjustments, helped the patient and the audiologist arrive at a shared decision regarding the configuration of the patient’s hearing aids. Both of the above findings can be considered important in terms of patient involvement and in overcoming what we earlier described as normative boundaries of such encounters.

Contextualized feedback and follow-up questions

In addition to encouraging a highly iterative tuning process, we found that the use of the simulated listening environments as references during the tuning process tended to contextualize feedback from the patient and follow-up questions from the audiologist. For example, in Transcript excerpt 2 (line 02) we see that the patient, in his verbal account of the hearing experience, describes both what he perceives as the sound source causing the negative reaction (“the car engine”) and his perception of the sound (“terribly sharp”). The excerpt (line 05) also shows how the patient compares various listening and noise sources in the playback against each other—the patient assesses how the voice of the female storyteller is perceived against the sound of the car engine (“Yes, now it [the car engine] became a little less audible, and the woman’s voice became clearer”). This illustrates how the use of the simulated sound environment tended to generate richer patient feedback than was the case in the consultations we observed in the field, in which the audiologist’s voice typically formed the only sound reference. As such, the use of the prototype appeared to increase particularly the patient’s turn size, as each turn would often contain more detailed information from the patient about his or her hearing experience.

From the excerpt above, we also see how the use of the simulated listening environments affected follow-up questions from the audiologist. In lines 07 and 10 we see how the audiologist refers to a combination of sound sources when asking about the patient’s perceived experience. In this sense, the simulated listening environment also promoted follow-up questions from the audiologist that encouraged the patient to reflect and report on particular aspects of the listening experience that could influence the tuning.

Similar incidents where the audiologist used the listening environments as a basis for asking specific questions about the patient’s hearing experience took place frequently in the experimental consultations. We consider the contextualized feedback and follow-up questions particularly relevant in terms of bridging the language boundary. By forming a common reference for both the patient and the audiologist, the simulated listening environments acted as a supplement to the verbal communication, helping the two stakeholders convey experiences and knowledge.

Improved decision basis for treatment actions

Another central observation was that the aggregated insights from the simulator-supported tuning process, with its exploration of various hearing aid settings, provided the audiologist and the patient with a stronger basis for making shared decisions about further treatment. Transcript excerpt 3, below, describes a dialog between the patient and the audiologist toward the end of the tuning process, after various hearing aid settings have been assessed against multiple listening environments.

Transcript excerpt 3:

01 ((The audiologist looks at her laptop computer screen, reviewing the patient’s audiogram))

02 Au: I can see [from your audiogram] that you are able to hear the bass normally, and that you have some problems with the treble—and female voices are treble sounds.

03 Pt: Uh-huh.

04 Au: But when I turn it [the treble settings] up a click you say that it [the sound] becomes blurred. That tells me that you need to get more accustomed to the sound produced by your hearing aids, before we can turn them up to a level appropriate for you. The volume is a bit too low given your reduced hearing ability with respect to high-pitched sounds, but if you are provided sufficient volume, it [according to the patient’s verbal feedback] becomes a mess because your ears are not sufficiently adapted to the sound.

05 Pt: So, it’s a matter of adaption. Yes, I see.

06 Au: Normally, I would have turned your hearing aids up a click, but using the table [the prototype simulator] it became very clear to me: No, that will only reduce your hearing ability.

At the beginning of the transcript excerpt (line 01), the audiologist reviews the patient’s audiogram shown on the laptop computer and informs the patient what it indicates about the patient’s hearing (line 02). Line 02 serves in many ways as a preface to the audiologist’s more elaborate explanation provided later. The patient responds with a simple “uh-huh” (line 03), thus signaling that the audiologist can continue speaking. Continuing from her previous turn, the audiologist then goes on to describe what she has learned from the simulator-supported tuning process about the patient’s hearing (line 04). The patient’s reply (line 05) suggests that he has been able to comprehend the essence of the audiologist’s explanation (i.e., that he needs more time to get used to the sound produced by the hearing aids). Finally (line 06), and of particular relevance for understanding the role of the prototype in this context, the excerpt shows how lessons learned from the simulator-supported tuning process have led the audiologist to suggest what she considers an “unconventional” approach to treatment given the situation, i.e., recommending not to change the hearing aid settings. We see from the audiologist’s extended rationale (lines 04 and 06) that this recommendation is the opposite of what would likely have been the outcome if the audiogram had been the only data source. The situation described in Transcript excerpt 3 illustrates how the sound environment simulator helped generate information that in some cases caused the audiologist to reassess the objective data provided by assessment tools such as the audiogram.

Transcript excerpt 4, which follows, shows another incident in which the simulator-supported process helped the partakers arrive at a treatment decision. Here the audiologist is attempting to form an understanding of whether or not it is feasible to optimize the patient’s hearing aids further. In this case, the patient has severe hearing loss. We enter the dialog after the patient has tried out various hearing aid configurations, assessed against multiple simulated sound environments, but without any perceived hearing improvement.

Transcript excerpt 4:

01 Au: But, when you hear these sounds [different listening environments]—do you perceive them only as noise, or are you able to differentiate between traffic noise, noise coming from inside a bus, and noise from a gym?

02 Pt: No, I think it is difficult for me to say what kind of noise I am hearing. It’s just noise.

In line 01, we find that the audiologist refers to a set of simulated sound environments that have been used earlier in the tuning process when asking about the patient’s listening experience. The way the question is formulated suggests that the audiologist wants to clarify whether the patient is at all able to distinguish the different sound environments from each other. The patient’s response (line 02) confirms what is likely to be the audiologist’s initial hypothesis, i.e., that the patient cannot separate one sound environment from the others. Similar to Transcript excerpt 3, the current excerpt also shows how the simulator gradually helped provide the partakers in the consultation with important insights about the patient’s hearing impairment, which helped guide decisions about treatment. In particular, both transcript excerpts illustrate the preparatory work the prototype simulator performed in the patient–audiologist dialog by helping to explore whether or not the conditions for changing the hearing aid settings were met. In both cases, the audiologist recommends that the hearing aid settings remain unchanged, but for different reasons. In the case described in Transcript excerpt 3, the decision is to wait and see whether the patient acclimatizes to the sound produced by the hearing aids he has been given. In the case described in Transcript excerpt 4, the audiologist opts not to change the hearing aid settings, as she considers it unlikely that this will improve the patient’s hearing given the severity of the impairment.

Explanatory and demonstrative function

Transcript excerpt 2 provided earlier illustrated how the listening environments helped the patient describe his or her listening experience to the audiologist, and how the enriched feedback helped guide the audiologist during the tuning process. However, we learned that the simulated listening environments could also serve as an aid for the audiologist in communicating relevant medical information and treatment actions to the patient. Transcript excerpt 5, below, illustrates what in many ways can be considered the prototype’s explanatory and demonstrative function.

As exemplified in Transcript excerpt 5, the simulated listening environments allowed the audiologist to explain to the patient, by demonstration, what a suboptimal hearing aid configuration means in practical terms, i.e., its effect in daily life situations. Moreover, the listening environments also reduced the potential problem of treatment actions being opaque to the patient, as the patient often could experience, and thus respond to, the results of the changes made to the hearing aid settings.

The dialog in the transcript occurs after an initial briefing about the patient’s hearing problem, and after the audiologist has suggested that the patient start the playback of the car cabin environment.

Transcript excerpt 5:

01 ((The playback plays the sounds of a car engine starting))

02 ((The Patient sits silently and listens to the playback for 5.5 seconds))

03 Au: Can you hear those sounds [the car engine]?

04 ((The patient listens to the playback for a while))

05 Pt: That is more silent than my own car…given that it has started [the patient chuckles].

06 Au: It has started.

07 Pt: Right.

08 Au: Yes, because I’m wondering if you have too little bass [based on the problems you described earlier], so I would like to try and adjust a little bit.

09 Pt: Just go on, then.

10 ((The audiologist adjusts the hearing aid settings, while the playback continues))

11 Au: How is it now?

12 Pt: I can hear the engine now.

First (lines 01–07), the excerpt shows how the audiologist uses a specific simulated listening environment to verify her initial hypothesis, i.e., that the patient’s hearing aids provide too little bass (line 08). We see from the patient’s ironic comment in line 05 that the audiologist’s hypothesis appears to be true, i.e., the patient is not able to hear the (low-frequency) sound of the car engine being played. In addition to verifying her initial hypothesis, the way the audiologist uses the particular sound environment can also be seen as an example of careful recipient design (Sacks, Schegloff, & Jefferson, Citation1978, p. 129), i.e., “packaging” or conveying a message in such a way that it fits the presupposed knowledge level of the receiver (i.e., the patient). As opposed to only providing an oral account of what she (the audiologist) considers to be the problem (line 08), the prototype simulator allows the audiologist to demonstrate it to the patient. The experience the patient is given through the prototype simulator can in many ways be considered complementary to the verbal description the audiologist subsequently provides, helping the patient realize the extent of his or her hearing disability. In this sense, the listening environment can also be considered to serve an explanatory function, potentially reducing the effects of the language boundary described earlier.

In the continuing dialog, we find that, having verified her initial hypothesis, the audiologist suggests that the current settings should be adjusted (line 08), to which the patient agrees (line 09). After having adjusted the hearing aids (line 10), the audiologist asks the patient to assess the new settings (line 11). In line 12 of the excerpt, we find that the patient, by listening to the same playback, is able to recognize, through experience, that the audiologist has changed the settings and that these changes have had an apparent positive effect on his hearing ability (“I can hear the engine now”). This shows how the simulator can potentially reduce the problem of practitioner actions being opaque to the patient and thus help overcome what we earlier defined as the knowledge boundary.

A recurring pattern in the observed consultations was that the audiologist tended to use the simulated listening environments as a means to conclude the tuning process. Typically, the patient was given the opportunity to compare the pre- and post-tuning settings of the hearing aid, thereby allowing him or her to verify his or her preferences. Transcript excerpt 6 gives an example of the audiologist using the simulated listening environments for such purposes.

Transcript excerpt 6:

01 ((The car interior environment and the male storyteller are being played over the loudspeakers))

02 Au: I will change it [the hearing aid settings] back to as they were when you came [to the consultation].

03 ((The audiologist changes the hearing aid settings back to the previous setting))

04 Au: Yes.

05 ((The patient listens to the playback for approximately three seconds))

06 Pt: No, I don’t want it like that.

07 Au: No? Then we take you up to where I put you [apply the new settings].

08 ((The audiologist restores the new hearing aid settings))

09 Pt: Now I can hear the male voice really clearly.

Here, we find that the audiologist first informs the patient that she will change the hearing aid settings back to the initial (pre-tuning) configuration (lines 02–03) while the playback is running. The “Yes” uttered by the audiologist in line 04 signals to the patient that the original hearing aid settings have been restored and that the patient can commence his assessment and provide feedback. After the patient utters a strong negative response to the original settings (line 06), the audiologist changes the hearing aid configuration back to the post-tuning settings again (lines 07–08). The patient’s subsequent comment (line 09) suggests that he perceives an immediate positive effect of the restored post-tuning settings. Again, we see how the demonstrative possibilities of the prototype simulator served a central role in the intersubjective communication between the patient and the audiologist.

While patients were also given a summary of treatment actions toward the end of the consultations we observed in the field, we did not observe practical demonstration and reassessment of the changes made to the hearing aid as described above.

The Role of the Visual Representations

So far we have described findings related to the role the simulated listening environments (i.e., the provision of rich sound references) played in bridging communication and understanding between the patient and the audiologist. However, the results indicate that visual aspects of the prototype, i.e., the images displayed in the tabletop depicting the various listening environments, also played a relevant part in the patient–practitioner interaction. For example, before the tuning process commenced, the audiologists generally invited the patient to select, from the images displayed in the tabletop interface, a listening environment he or she recognized as challenging (based on previous experiences), and which could serve as a first sound reference to guide the tuning. On some occasions we found that the audiologist offered suggestions as to which types of listening environments to select from. For example, if the patient had a hearing loss in the lower (bass) frequencies, the audiologist would typically point out to the patient the most relevant listening environment in that respect.

We also found that the images displayed in the tabletop interface often helped jog the patient’s memory regarding challenging situations and, in many cases, his or her ways of dealing with them in daily life. This type of information appeared to be valuable to the audiologist both with respect to optimizing the hearing aid and in terms of counseling patients about, for example, how to position themselves in a specific environment in order to enhance hearing ability and speech recognition.

In addition to helping patients recollect and select challenging listening situations, we also found that the images displayed in the tabletop interface at times acted as a subtle complementary tool in the patient–practitioner dialogs taking place during the tuning process. For example, both stakeholders would make gestures (e.g., point) toward images when discussing listening experiences and aspects of sound, so as to emphasize to the other partner which environment, or types of sounds, they were currently referring to (Figure 7). In this sense, the images served as a common visual reference and coordination tool for the patient and the audiologist. As described earlier, conventional desktop computer systems applied in consultation settings generally do not offer such opportunities in a satisfactory way, sometimes rendering the computer tools a physical barrier preventing fluent interaction between the patient and the practitioner.

FIGURE 7. Audiologist (left) gesturing toward the image of a listening environment.


Control Aspects

Another aspect affecting patient–practitioner interaction, and particularly turn-taking between the two, was related to control of the prototype. While the prototype simulator was designed to allow for shared viewing and control, we observed that the audiologist let the patient maintain primary control during hearing aid tuning. Generally, the patient selected which listening environment to use as part of the tuning, which extra listening and noise sources to add to the playback, when to start a playback (to assess new hearing aid configurations), and when to pause it (to provide feedback). Control of the prototype simulator interface implicitly gave the patient a greater degree of control over the tempo at which the tuning process proceeded, and generally provided the patient with more speakership and increased turn sizes. The control aspects described above can also be seen as a central element in balancing the traditional “asymmetry of power” between the practitioner and the patient.

Breakdown Incidents

While the prototype appeared to support intersubjectivity in various ways, we also observed incidents in which it failed to do so, causing breakdowns in the patient–practitioner dialog. These incidents were characterized by the patient’s failure to accept the simulated listening environment as a realistic replacement for its real-world counterpart. Simulator acceptance is essential for knowledge transfer from the simulated situation to the individuals who take part in it (Dahl, Alsos, & Svanæs, Citation2010; Moroney & Lilienthal, Citation2008). Instead of acting as a reference point that the patient could use to assess the perceived effect of changes made to the hearing aid, the simulator itself became the center of the patient’s attention. Transcript excerpt 7 shows an example where the prototype simulator became a source of disturbance in the patient–practitioner dialog.

Transcript excerpt 7:

01 ((The sound recording of a crowded bus interior is playing in combination with the female storyteller as listening source))

02 Pt: This is a bit similar to the car situation [the car interior listening environment]. You don’t sit and talk to someone in front of you [in such an environment]. You sit and talk to someone next to you.

03 Au: You would prefer to have a speaker next to you?

04 Pt: More of the speech should have been channeled through the rear [speakers].

05 ((The patient points toward one of the rear speakers))

06 Pt: I assume there’s surround sound. It isn’t accurate compared to how you sit, you know.

As we see from the patient’s statement in line 02, the patient does not comment on his ability to hear the speech-in-noise situation being played. Rather, the simulator causes a breakdown in the ongoing dialog about the patient’s hearing, causing the patient to comment on the setup rather than on the listening experience. As we can see from the remainder of the transcript excerpt, the focus of the dialog changes to how the simulator ideally should have been configured.

Breakdown situations, such as the one described in Transcript excerpt 7, could indicate a shortcoming in the prototype implementation (i.e., lack of directional control of sound sources). However, the experimental consultations also revealed that the participants managed to come up with creative repairs to solve such issues. In some cases, the audiologist would take on the role of a mobile sound source during playback of a given listening environment, move over to a corner of the room, and use her own voice to provide the patient with direction-specific speech stimuli. These on-the-fly adaptations were in many ways consistent with our observations from the field and with how the audiologists flexibly addressed immediate problems arising in the consultation situation using simple techniques (see Section 6.2).

8.4. Perceived Usefulness

In addition to observing how the prototype shaped the interaction between the patient and the practitioner during the experimental consultations, we also conducted post-consultation interviews with the participants about the perceived usefulness of the tool. The underlying motivation for the post-consultation interviews was to complement the findings from the observations outlined above with a subjective dimension.

The Patient Perspective

The patients’ subjective responses regarding the usefulness of the prototype were consistently very positive. In terms of the perceived value of the prototype, patients especially emphasized the advantage of being given a cue as to how the hearing aid would perform in daily life. One of the patients expressed:

It is really helpful that you can listen to sounds while the hearing aid is being tuned. You will never get a one hundred percent realistic impression, but at least you get an indication of how [your hearing aids] will work.

In terms of the relevant boundaries presented earlier, statements such as the one above suggest that the prototype helped reduce the effect of the time and place boundary. The main benefit of this, as seen from the patients’ perspective, was related to the potential for reduced time and effort spent on achieving a satisfactory benefit from hearing aids. The following quote illustrates how one patient considered the prototype simulator to have the potential to accelerate the rehabilitation process.

I think that [the audiologists] would perhaps have spent less time to get where we are today, if they had this [system]. I went in and out of the [clinic] once every two weeks for almost half a year before I was reasonably satisfied.

Several patients also pointed out how they found the simulated listening environments helpful with respect to articulating their listening experiences and perceived hearing problems to the audiologist. For example, one patient explained:

Regarding the kitchen, I can describe the situation and the challenges I face much better here [using the simulated listening environment], rather than just talking about it [without the playback].

The above quote is illustrative of how the prototype, for many patients, played a central role in overcoming what we earlier defined as the language boundary by providing a concrete reference during the tuning process.

The concreteness the simulated sound environments helped bring to the tuning process was not only seen as beneficial for articulating listening experiences (cf. Transcript excerpt 2) but also with respect to comparing and selecting between different hearing aid configurations (cf. Transcript excerpt 6). The following quote, in which the patient compares the simulator-supported tuning process with how opticians go about identifying the correct lens strength for glasses, illustrates how many of the patients appreciated this concreteness and how it made them more confident about the end result:

This is just like when I’m visiting the optician. He goes back and forth [checking how you respond to different lenses]. You can’t get any closer. You’re safe then—You’re as close [to an optimal solution] as you can get.

The statement suggests that the given patient was able to maintain an awareness of the progress being made throughout the simulator-supported tuning process. As we can see from the quote, this awareness appears to affect the patient’s perception of and confidence in the end result. Earlier in the paper, we described how opaqueness of the actions performed by the practitioner contributes to the gap between the patient’s and the practitioner’s distinct knowledge worlds.

The Practitioner Perspective

Similar to the patients, the audiologists also highlighted the benefit they found in using richer sound references to help evoke feedback from patients about their listening experiences during the tuning process. Referring to the limited possibilities for providing “rich sound” in the clinic using conventional techniques—a problem associated with the time and place boundary—one of the audiologists expressed:

It feels quite poor to only have the possibility to ask [the patient]: ‘how do I sound now?’

Another central benefit the audiologists associated with the simulator-supported tuning process and the possibility to bridge the time and place boundary was the increased prospects for being more explorative during the tuning process:

The system allowed me to be more explorative during the [tuning process]…and it made me willing to take more risks and try out things [settings], which I probably wouldn’t have tried out in an ordinary consultation. Take for instance, [patient name]; I turned his hearing aid settings upside-down.

As with several of the patients, the audiologists also expressed increased confidence in the end result of the tuning process, and considered the assessed concept to have significant potential for reducing the number of patient revisits to the clinic related to re-configuration of non-optimal hearing aids. However, the audiologists also stressed the need for patients, especially inexperienced hearing aid users, to acclimatize to new hearing aids, and that achieving an acceptable tuning result with the help of the prototype simulator did not eliminate the need for follow-ups and potentially additional fine-tuning.

Yet another value the audiologists recognized in the prototype was its pedagogical potential. The way it could help demonstrate to the patient the nature of his or her hearing impairment, and thereby complement the oral information provided by the audiologist (cf. Transcript excerpt 5), was one example of how the prototype could serve a pedagogical purpose. Additional pedagogical potential the audiologists recognized in the prototype was related to aspects such as strategic placement within a given environment for optimal listening ability (the listening scenarios were played through a 5.1 speaker setup, thus offering a certain fidelity). For example, the audiologists highlighted the possibility of demonstrating to the patient how physical placement within a room can affect the ability to recognize speech in a noisy environment.

The pedagogical possibilities recognized in the prototype can in many ways be seen as ways of overcoming what we earlier defined as the knowledge boundary. The examples above represent means by which the prototype can help convey medical knowledge held by the audiologist in a form that is potentially more comprehensible to the patient, i.e., by demonstration.

One concern the audiologists raised with respect to the prototype was related to the transitions between “entering” and “leaving” the simulated listening environments during the tuning process. In particular, the abruptness of these transitions was considered to potentially affect the listening experiences of the patients, and consequently their feedback:

It can feel very odd when we turn off the sound, because then it becomes silent—it becomes very silent in the room—Likewise, when we turn on the sounds, they seem much louder than they do in a real situation, because we just come straight into it.

Similar to the breakdown situations described earlier (see Transcript excerpt 7), the above statement is related to the aspect of simulator acceptance. The concern highlights that the simulated sound environments do not necessarily allow for sufficient bridging of the time and place boundary in the treatment of all patients.

8.5. Summarizing How the Prototype Helped Bridge the Five Boundaries

Having described key observations from the experimental consultations and central findings from post-consultation interviews, we summarize in the list below how the prototype simulator appeared to bridge the five boundaries described in Section 4.1.

  • The knowledge boundary: Our findings suggest that the prototype simulator acted as a conceptual bridge between the patient’s listening experiences and the audiologist’s understanding of the problem during the tuning process. In this sense, the prototype helped the practitioner translate the experiences and lifeworld knowledge of the patient into medical knowledge resulting in treatment actions (e.g., new hearing aid configurations, explanations, and advice). It also played a complementary role in the medical information given to the patient by the audiologist, particularly by allowing certain aspects to be demonstrated.

    In addition to the above, we found that the prototype simulator played a key role in allowing the patient to follow the progress of the tuning process, and to better understand what actions the audiologist performed and why. For example, the prototype helped the patient form an immediate impression of how changes made to the hearing aid affected his or her hearing, and of when a satisfactory result had been achieved. In cases where the audiologist did not find a more optimal hearing aid configuration (such as in Transcript excerpts 3 and 4), the simulated sound environments in many ways served as a means to help the patient understand and accept the limitations of hearing aids as a tool for better hearing.

  • The language boundary (the “voice” boundary): A central finding concerning how the prototype simulator helped bridge the language boundary was the role it played in contextualizing the dialog between the patient and the practitioner during the tuning process. As several of the presented transcript excerpts from the simulator-supported tuning processes illustrate, the patient–audiologist dialog focused to a large extent on auditory aspects of the specific simulated sound environments used as part of the process. For the patient, the simulated sound environments and the different sound sources contained in those environments served as a supplementary tool for articulating perceived hearing problems. For the audiologist, the enriched, contextualized narratives of the patient’s listening experience appeared to help in the process of optimizing the patient’s hearing aids.

    As noted above, our observations also revealed that the audiologist occasionally used the prototype simulator as a communicative supplement to demonstrate certain aspects of the patient’s hearing problem or ways of dealing with it. In this sense, the prototype helped convey medical information in a more “patient-friendly” manner.

  • The time and place boundary: In order to help overcome the time and place boundary, the prototype was designed to allow the patient to assess new hearing aid configurations against high-fidelity recordings from everyday sound environments. In many cases, we found that the patients were able to recognize environments they personally found challenging from the set accessible via the prototype. On some occasions, we found that the audiologist would suggest sound environments to use during the process, based on her knowledge of the nature of the patient’s hearing loss as shown in the audiogram.

    While the interviews revealed that neither the patients nor the audiologists considered a simulator-supported tuning process a guarantee of increased hearing aid benefit in daily life, both stakeholder groups expressed high confidence in the end result. The patients emphasized the potential they saw in the simulator for reducing the number of revisits to the hearing clinic needed to achieve a satisfactory hearing aid. The practitioners considered the simulated environments central in overcoming the problem of tuning hearing aids in the “sterile” and (for the patient) non-representative sound environment of the clinic.

  • The physical boundary: Existing studies (e.g., Matthews & Heinemann, Citation2009) have pointed out that conventional computer technology, such as desktop computer systems, can have a negative effect on patient involvement in medical consultations, as it risks taking the practitioner’s attention away from the patient and makes sharing of screen content cumbersome. In this sense, computerized tools and their form factor risk becoming a physical barrier in the interaction between practitioner and patient. The prototype simulator provided a shared graphical user interface allowing both the audiologist and the patient to control the simulator. Often, however, the patient mainly controlled the simulator during the tuning process.

    We also found that the images of various sound environments shown in the tabletop display played a supplementary role in the patient–practitioner dialog, for example, by allowing either stakeholder to point toward an image to capture the other’s attention.

  • The normative boundary: The most notable finding related to the effect of the prototype simulator on the normative boundary and the “asymmetry of power” between the patient and the practitioner is that it generally appeared to promote a more iterative tuning process than what we observed in the field. The simulator-supported tuning processes consisted of multiple listen-report-adjust cycles, in which the patient’s continuous reports about his or her listening experience drove the process. In this way, our results suggest that the prototype helped promote active engagement of the patient.

9. DISCUSSION

Drawing on findings from the experimental consultations, we first draw attention to some aspects of the prototype simulator that we consider central with respect to active patient involvement in the consultation situation, and supportive of intersubjective communication and understanding between the patient and the audiologist. We then discuss some key implications of our findings with respect to the design of technology aiming to facilitate collocated patient–practitioner collaboration and intersubjectivity between the two. Finally, we provide some reflections on what we consider key values of collocated patient–practitioner collaboration, and why we recommend that these values be taken into consideration in future development of assistive listening devices and other assistive health-care devices.

9.1. The Role of the Prototype Simulator in Facilitating Intersubjectivity

The observations from the experimental consultations and the subjective feedback from the participants suggested that the prototype sound simulator helped support communication and understanding between the patient and the audiologist in multiple ways. But what aspects of the prototype contributed to this result? In an attempt to answer this question, we identify and describe below four characteristics of the prototype that we consider particularly relevant for supporting the intersubjective process:

  • Interpretability: Our findings from the experimental consultations show that the prototype simulator played a mediating role in the tuning process and the related patient–practitioner dialog. In particular, the way the simulated listening environments helped contextualize the hearing problem and the related discussion appeared to play a key role in aligning the two partakers. The simulated listening environments essentially helped make information provided by one party interpretable and understandable to the other. The enriched, context-specific feedback the patients gave about their listening experiences helped the audiologist “decode” the hearing problem into new hearing aid configurations. Likewise, the simulated sound environments played a supplementary role in helping convey to the patient medical information given by the practitioner.

  • Shareability: The simulated sound environments acted as a shared reference for the patient and the practitioner. Being exposed to the playbacks at the same time helped provide the audiologist with a better understanding of the patient’s hearing problem, for example, by being able to ask follow-up questions related to specific auditory aspects of the playbacks. The patient also benefited from the simulated sound environments being shared. For example, information from the audiologist regarding specific auditory aspects of the playbacks helped the patient form a better understanding of his or her impairment, problems with the current hearing aid settings, and the potential disability implications (cf. Transcript excerpt 5).

  • Modularity: Different parts of the prototype simulator served as a basis for dialog between the patient and the audiologist. Examples of such parts included the specific listening environments and sometimes groups of environments (e.g., environments dominated by high-frequency sounds), particular sound sources within given listening environments (e.g., “the car” and the “woman’s voice” in Transcript excerpt 2), but also the images of the listening environments displayed in the tabletop interface. This modularity played an important role when addressing the hearing problems of the individual patient.

  • Versatility: The prototype simulator served multiple purposes in the experimental consultations. At various times the prototype fluently shifted between being a tool for reflection and assessment, hypothesis testing, verification, explanation, demonstration, and learning. Likewise, the control of the prototype also tended to shift between the audiologist and the patient. The versatility of the prototype played a central role in supporting interaction across the different boundaries we identified as relevant to the intersubjective process between the patient and the practitioner.

Several of our findings draw attention to the diverse and situation-specific ways in which the prototype was applied in the consultations to help facilitate communication and understanding between the patient and the audiologist. We consider the changing roles the prototype played in the experimental consultations to emphasize the emergent and temporal nature of the resources that helped build common ground between the patient and the practitioner. The emergent and temporal nature of these resources, or intersubjective “bridges”, implies that they cannot be a product of design per se. Rather, the extent to which, for example, a particular simulated sound environment, certain sound sources contained within such an environment, or the environment’s associated image emerges as an intersubjective bridge during a tuning process is a function of the design in use. In other words, the “bridging” capability of design elements, such as those noted above, is a relation between a design element and the role it plays in a collaborative context of use. The breakdown situations described earlier illustrate how the relation between the prototype simulator and its context of use in some cases was too weak to effectively form an intersubjective bridge. The conceptualization of resources for intersubjective communication and understanding as emergent and temporal, rather than predefined and stable, draws attention to aspects external to the prototype simulator—aspects of its context of use that in various ways “enabled” the prototype, or its specific elements, to bridge patient–practitioner communication and understanding. The practitioner’s experience-based knowledge, communication skills, pedagogical abilities, empathic disposition toward the patient, and openness to letting the patient drive more of the tuning process are examples of relevant enabling factors in this regard. Other examples of potential factors include the patient’s individual capability to put into words the listening experiences offered by the prototype, his or her own impairment, and the treatment options (e.g., hearing aids).

Contextual factors such as those mentioned above, however, fall beyond the control of designers, i.e., there is no way of knowing at design time the extent to which these factors will be present in a given use situation. The practical design implication of the highly context-dependent and emergent nature of intersubjective bridges, such as those exemplified above, is that designers of collaborative technology can only hope to increase the prospect of such bridges arising from use. With respect to the design of collaborative solutions for medical consultations, this leads to the question of how one can improve the likelihood of intersubjective bridges arising during use so that the patient and the practitioner are able to form a constructive partnership. In the following subsection, we discuss two key considerations derived from the current study that can provide important clues to this question. The two considerations are (1) implications of the multifaceted boundaries affecting patient–practitioner interactions in medical consultations and (2) implications of the explorative and situated nature of consultation situations.

9.2. Designing for Patient–Practitioner Collaboration in Consultations

Taking a Holistic View of Intersubjective Boundaries

The shift toward patient-centered care calls particular attention to patient–practitioner interaction, and to factors that can prevent the patient from taking an active part in his or her own treatment. Through our investigation we have put the focus on the boundaries that the two stakeholders need to interact across in order to establish common ground in a consultation situation. One plausible explanation for the observed utility and the stakeholders’ perceived usefulness of the prototype was its capability to support interaction across several boundaries of relevance. In many cases, it helped translate between the patient’s “world” and the practitioner’s “world” by promoting rich reflections on hearing experiences and by serving as an explanatory aid. Moreover, it helped re-contextualize the patient’s problem by allowing it to be addressed in a simulated environment. The possibility for shared viewing (of screen content) and joint control of the simulator helped keep the interactive medium (i.e., the interactive tabletop) from becoming a physical barrier preventing, rather than enabling, collaboration.

With respect to designing collaborative technology for medical encounters, our investigation highlights the multiple dimensions of collocated patient–practitioner interaction and how facilitation of collaboration and intersubjectivity requires designers to take into account the multifaceted nature of the boundaries at play. Many of the existing HCI and CSCW-related studies that have explored the effects of interactive technology on patient–practitioner interaction have typically narrowed the scope of investigation to primarily one particular aspect (boundary) of interaction. For example, several studies cited earlier in the paper (e.g., Chen et al., Citation2011; Dahl & Svanæs, Citation2008) have focused mainly on physical aspects of interaction and how form, placement, screen size, and portability of interaction devices in combination with the physical environment affect patient–practitioner interaction. Other studies (e.g., Bagalkot & Sokoler, Citation2011) have put the primary focus on the language boundary and how interactive solutions can meet both the language of the person in rehabilitation and the language of the health practitioner.

While the studies cited above have provided valuable insights with respect to how specific boundaries may affect patient–practitioner interactions, and, in some cases, how design can help overcome them, it is nevertheless challenging to form an understanding of how the various boundaries may affect patient–practitioner interaction as a whole. As our findings indicate, the boundaries we have focused on are to some extent interrelated. For example, reducing the effect of the time and place boundary by means of the simulated sound environments also appeared to diminish the language boundary, as patients found it easier to articulate their listening experiences. Likewise, the way the prototype helped the patient get actively involved in the consultation, thereby overcoming normative barriers, can be seen as the cumulative effect of successfully spanning several relevant boundaries.

As pointed out earlier, we are aware that there may be other boundaries of relevance beyond those discussed in this study. We also recognize that the specific boundaries we have focused on in this study may manifest themselves in other ways than we observed.

Designing for Explorative Problem-Solving

The current investigation has also brought attention to the nature of the medical consultation as a collaborative problem-solving process, and what this implies for the design of supportive technology. Our findings paint a picture of the audiological consultation, and particularly hearing aid tuning, as a highly explorative and situated process, in which the partakers continually deal with the ambiguity, or uncertainty, both of the problem at hand and of the effects of solutions with respect to the patient’s daily listening experience.

We found that the problem addressed in the consultations, i.e., the patient’s hearing loss and relative hearing aid benefit, was typically only partially understood at the beginning of the tuning process (for the audiologist, the patient’s audiogram generally formed the basis for this understanding). Through an interactive, iterative, and incremental process characterized by experimentation, reflection, dialog, demonstration, and learning, both the understanding of the problem and its solution gradually emerged. Changes in the understanding of the problem often led to changes with respect to the solution. We typically found that the solution to the patient’s problem involved a number of steps, which, beyond potential reprogramming of the hearing aid, could also involve making the patient aware of certain aspects of his or her hearing impairment, adjusting the patient’s expectations of treatment (e.g., informing about the limitations of the hearing aid for the patient’s specific hearing problem), and advising on how to position oneself within a specific environment for optimized hearing. Finally, we found that the ambiguity of the problem at hand was not necessarily resolved through the process, but at best reduced.

The explorative, intersubjective process described above, then, stands in relatively sharp contrast to highly structured processes in which the problem to be addressed is clearly defined before problem-solving commences, where the means of solving the problem exist in the form of pre-existing alternatives, and where the goal of the process is definite.

The hearing aid fitting consultation can also be seen as a learning process. The audiologist needs to learn about the patient’s specific hearing problem, both objectively through standardized tests (e.g., expressed as an audiogram) and intersubjectively by trying to gain insight into the patient’s listening experiences and lifeworld, which may be hard to communicate through common language; our vocabulary is poorly equipped with words and terms that can describe perceptions and experiences of sound (Moylan, Citation2007, p. 87). In contrast, our vocabulary for describing what we see is much richer and more intuitive. The patient, on the other hand, needs to learn about the technical features and functions of the hearing aid, but even more about how the device is individually perceived in a variety of listening situations. From our preliminary observations at the clinic we found that communication and learning were limited, and, as exemplified through many of the presented transcript excerpts, that both in many cases became richer when the prototype was used. We believe that this increased richness (exemplification by sounds, increased frequency of questions and answers, and ad hoc experimentation) leads to an improved transition from tacit to explicit knowledge. Tacit knowledge is characterized as personal and context specific (Von Krogh, Citation1998) and thus hard to communicate. Transfer of tacit knowledge requires a face-to-face relationship (Wyatt, Citation2001), which is one of the key characteristics of the use of the prototype.

Another plausible explanation for why the prototype helped facilitate patient–practitioner collaboration relates to its capability to support the explorative and situated nature of the audiological consultation. In particular, we consider the results from the experimental consultations to highlight the value of design solutions that are sufficiently open-ended with respect to use, and thereby capable of serving multiple purposes in the intersubjective process between the patient and the practitioner. The various roles the prototype played in the patient–practitioner interaction came as a result of the partakers, in response to changing needs, spontaneously attributing different functional values to the prototype. Functional value, in this context, refers to what Tchounikine (2016) describes as the utility of an artifact for achieving some task or goal perceived by a user. The impromptu attribution of functional value is closely linked to the concept of appropriation (Dalton, MacKay, & Holland, 2012; Dix, 2007; Pekkola, 2003; Tchounikine, 2016). Appropriation can involve making changes to a product (i.e., a software system) to make it fit particular purposes, and/or finding new and innovative uses for a product (sometimes in ways not intended by the designers of the product). Dix (2007) describes three reasons why appropriation is important: to accommodate situatedness, i.e., the particularities of different use contexts; to support dynamics of use and changing needs; and to facilitate ownership, i.e., providing users with a sense of control by allowing them to do things in their own way. These three reasons fit well with what we found to be characteristic of the consultation situations.

We consider our findings to illustrate how appropriation does not necessarily require users to make explicit modifications to the product. Rather, our findings highlight the added value of allowing users to attach their own meaning or interpretation to how the system can be used. Such flexibility is enabled if the system does not impose an ideal procedure.

In terms of designing for appropriation in use contexts such as the one addressed in this study, we recommend, first, an explorative approach. In the context of user-centered design, this may imply extending a design project’s discovery phase and placing an explicit focus on uncovering “hidden” values that may be embedded in a design concept. Second, we recommend an approach in which both stakeholders are “co-present” in all phases of the user-centered design cycle. The challenges we identified in the preliminary field study (analysis phase), the design ideas from the workshops (design phase), and the use values we identified through the utility assessment (evaluation phase) were all results that emerged through co-presence and co-work.

9.3. Design Recommendations

Drawing on what we learned from the user-centered and participatory design process described in this paper, we propose below a set of recommendations for the design of technology for highly explorative, situated, and intersubjective processes, such as audiological consultations. The recommendations are meant as a supplement to the general user-centered design process, as defined in ISO 9241-210 (2010).

  • Focus first on understanding which boundaries the involved stakeholders need to interact across in order to form a joint understanding of the problem at hand. Attend also to the various practices performed to help construct such an understanding, with an eye toward how well they work (what insights they bring about), as well as their shortcomings.

  • Design with the aim of enhancing, rather than replacing, existing practices by which intersubjectivity is built. Our results suggest that this increases the likelihood that the design will add value. As processes such as the one we have focused on in this paper do not follow a fixed and undisturbed sequence, we recommend designs that do not “force” a certain structure, but which can easily be appropriated by users on a moment-by-moment basis to deal with challenges related to intersubjective communication and understanding as they arise.

  • Just as an intersubjective process, such as the one we have studied and designed for, requires two (or more) participants, we recommend that the design of supportive technology involve the co-presence and co-work of relevant stakeholders. Many of the suggestions and insights that helped form the prototype simulator came as a result of hearing aid users and audiologists being able to draw upon each other’s expertise and perspectives when working together in workshops and in the actual tuning process (during the experimental consultations).

  • Interactive technology aiming to support explorative intersubjective processes can benefit greatly from being designed by means of an explorative approach that explicitly focuses on value discovery early in the design process. By drawing specific attention to the various purposes that a design prototype serves in a collaborative process, previously unidentified use values may be discovered.

  • Be mindful of how new designs may negatively impact the communication and understanding between the users. While our prototype design proved useful in many circumstances, we also observed occasional communication breakdowns as a result of its use. This highlights the necessity of an iterative design process.

9.4. Reassessing the Value of Collocated Patient–Practitioner Encounters

Through our investigation of the intersubjective boundaries at play in patient–practitioner medical encounters, we have called attention to some of the unique affordances that “same time, same place” interaction offers, and how collaborative technology can be designed to accommodate these. Many of the benefits the prototype simulator offered came as a result of the partakers being co-present and co-exposed to the same sound stimuli during the tuning process, thus creating a shared reference point bridging the medical world and the lifeworld. For the practitioner, co-presence and co-exposure to the simulated listening environments played a key role, for example, when interpreting the patient’s responses. The same factors also benefited the patient, for example, when describing his or her listening experiences. These examples illustrate some of the values that collocated interaction can offer in the context of medical consultations.

If we take into consideration recent health-care technology trends, such as the provision of remote medical consultations and technology-supported, patient-managed approaches, we find that many of the inherent values of collocated patient–practitioner collaboration and the construction of a shared understanding of the patient’s problem are easily diminished or lost in the search for efficiency. A sufficient level of situational awareness can be difficult to achieve in, for example, technology-enabled remote medical consultations (Aggarwal, Ploderer, Vetere, Bradford, & Hoang, 2016). The underlying assumption of many so-called patient-managed solutions, such as self-fitting hearing aids (Keidser & Convery, 2016), is that the technology can remove the need for a practitioner to guide the tuning process. Such solutions, however, tend to focus exclusively on measured hearing characteristics and generally do not offer the complementary advice, counseling, and explanations that we found to be essential components of the consultations we observed.

Our point here is not to argue against the potential benefits of technology-supported, patient-managed solutions, but rather to highlight values and insights that emerge in collocated interaction between the patient and the practitioner, and which do not easily transfer to technology.

10. CONCLUSION

In the current study, we have investigated how interactive technology, in the form of a co-designed prototype sound simulator, can support active involvement of the patient in his or her own hearing rehabilitation, and help facilitate intersubjective communication and understanding between the patient and the practitioner in audiological consultations. In particular, our exploration has drawn attention to the multi-faceted boundaries at play in collocated patient–practitioner encounters, and to the explorative and situated intersubjective process through which the partakers attempt to understand and solve the patient’s hearing problem. We found that patient–practitioner collaboration can benefit significantly from designs that accommodate both of these aspects. Accommodating them, we have argued, requires designs that are sufficiently flexible or open-ended in terms of use, allowing them to serve multiple purposes in the unfolding patient–practitioner dialog as needs change. In particular, we consider the possibility for appropriation to play a key role in enabling this flexibility. The findings from the utility assessment of the prototype simulator show that the value it added to intersubjective communication and understanding did not come from replacing existing consultation micro-practices or imposing new ones. Rather, the added value that the prototype offered the patient and the practitioner came from overcoming the shortcomings of, and thus strengthening, existing practices by digitally augmenting the consultation environment.

Regarding the role technology can play in the context of health care and rehabilitation, our investigation calls attention to the values embedded in “same time, same place” interaction between the patient and the practitioner. Co-presence and face-to-face interaction, in comparison to the other modes of collaboration in the space-time taxonomy of Ellis et al. (1991), offer unique possibilities for constructing shared understandings between the patient and the practitioner, and technology may play a significant role in realizing this potential. This, however, necessitates designs that do not themselves become barriers to communication and understanding, as is sometimes the case in medical consultations, but which serve to enrich human–human interaction in collaborative problem-solving.

Additional information

Funding

This study was funded by the Research Council of Norway (Norges Forskningsråd) under Grant No. 227129/O70 (ABRUMED).

Notes on contributors

Yngve Dahl

Yngve Dahl ([email protected]) is a human-computer interaction research scientist at SINTEF Digital and an Associate Professor at the Department of Computer Science at NTNU. His main research interests are in user-centered design and ubiquitous computing, particularly in relation to the application area of health care.

Geir Kjetil Hanssen

Geir Kjetil Hanssen ([email protected]) is a computer scientist with an interest in agile software development and software process improvement. He is a senior research scientist at the Software Engineering, Safety and Security department of SINTEF Digital.

References

  • Abrams, H., Edwards, B., Valentine, S., & Fitz, K. (2011). A patient-adjusted fine-tuning approach for optimizing hearing aid response. Hearing Review, 18(3), 18–27.
  • Aggarwal, D., Ploderer, B., Vetere, F., Bradford, M., & Hoang, T. (2016). Doctor, Can You See My Squats?: Understanding bodily communication in video consultations for physiotherapy. Proceedings of the 2016 Conference on Designing Interactive Systems. doi:10.1145/2901790.2901871
  • Agnew, J. (1998). The causes and effects of distortion and internal noise in hearing aids. Trends in Amplification, 3(3), 82–118. doi:10.1177/108471389800300302
  • Alsos, O. A., Das, A., & Svanæs, D. (2012). Mobile health IT: The effect of user interface and form factor on doctor-patient communication. International Journal of Medical Informatics, 81(1), 12–28. doi:10.1016/j.ijmedinf.2011.09.004
  • Alsos, O. A., & Svanæs, D. (2006). Interaction techniques for using handhelds and PCs together in a clinical setting. Proceedings of the 4th Nordic conference on Human-computer interaction: changing roles. doi:10.1145/1182475.1182489
  • Arlinger, S. (2003). Negative consequences of uncorrected hearing loss––a review. International Journal of Audiology, 42(sup2), S17–20. doi:10.3109/14992020309074639
  • Bagalkot, N., & Sokoler, T. (2011). MyReDiary: Co-Designing for Collaborative Articulation in Physical Rehabilitation. Proceedings of the ECSCW 2011 European Conference on Computer Supported Cooperative Work. doi:10.1007/978-0-85729-913-0_7
  • Barry, M. J., & Edgman-Levitan, S. (2012). Shared Decision Making — The Pinnacle of Patient-Centered Care. The New England Journal of Medicine, 366(9), 780–781. doi:10.1056/NEJMp1109283
  • Bernabeo, E., & Holmboe, E. S. (2013). Patients, providers, and systems need to acquire a specific set of competencies to achieve truly patient-centered care. Health Affairs, 32(2), 250–258. doi:10.1377/hlthaff.2012.1120
  • Chen, Y., Ngo, V., Harrison, S., & Duong, V. (2011). Unpacking exam-room computing: Negotiating computer-use in patient-physician interactions. Proceedings of the CHI 2011 Conference on Human Factors in Computing Systems. doi:10.1145/1978942.1979438
  • Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). Washington, DC: American Psychological Association.
  • Cord, M., Baskent, D., Kalluri, S., & Moore, B. (2007). Disparity between clinical assessment and real-world performance of hearing aids. Hearing Review, 14, 22–26.
  • Dahl, Y., Alsos, O. A., & Svanæs, D. (2010). Fidelity considerations for simulation-based usability assessments of mobile ICT for hospitals. International Journal of Human–Computer Interaction, 26(5), 445–476. doi:10.1080/10447311003719938
  • Dahl, Y., & Hanssen, G. K. (2016). Breaking the sound barrier: designing for patient participation in audiological consultations. Proceedings of the CHI 2016 Conference on Human Factors in Computing Systems. doi:10.1145/2858036.2858126
  • Dahl, Y., Linander, H., & Hanssen, G. K. (2014). Co-designing interactive tabletop solutions for active patient involvement in audiological consultations. Proceedings of the NordiCHI 2014 Nordic Conference on Human-Computer Interaction. doi:10.1145/2639189.2639221
  • Dahl, Y., & Svanæs, D. (2008). A comparison of location and token-based interaction techniques for point-of-care access to medical information. Personal and Ubiquitous Computing, 12(6), 459–478. doi:10.1007/s00779-007-0141-8
  • Dalton, N., MacKay, G., & Holland, S. (2012). Kolab: Appropriation & improvisation in mobile tangible collaborative interaction. Proceedings of the DIS 2012 Conference on Designing Interactive Systems. doi:10.1145/2317956.2317960
  • Deppermann, A. (2012). Negotiating hearing problems in doctor-patient interaction: Practices and problems of accomplishing shared reality. In M. Egbert & A. Deppermann (Eds.), Hearing aids communication: Integrating social interaction, audiology and user centered design to improve communication with hearing loss and hearing technologies (pp. 90–103). Mannheim, Germany: Verlag für Gesprächsforschung.
  • Dix, A. (2007). Designing for appropriation. Proceedings of the BCS-HCI 2007 British HCI Group Annual Conference on People and Computers, University of Lancaster, United Kingdom — September 03 - 07, 2007.
  • Egbert, M., & Matthews, B. (2012). User centered design: From understanding hearing loss and hearing technologies towards understanding interaction. In M. Egbert & A. Deppermann (Eds.), Hearing aids communication: Integrating social interaction, audiology and user centered design to improve communication with hearing loss and hearing technologies (pp. 48–55). Mannheim, Germany: Verlag für Gesprächsforschung.
  • Elberling, C., & Worsoe, K. (2006). Fading sounds: About hearing and hearing aids. Herlev, Denmark: The Oticon Foundation.
  • Ellis, C. A., Gibbs, S. J., & Rein, G. (1991). Groupware: Some issues and experiences. Communications of the ACM, 34(1), 39–58. doi:10.1145/99977.99987
  • English, K. (2005). Get ready for the Next Big Thing in audiologic counseling. Hearing Journal, 58(7), 10–15. doi:10.1097/01.HJ.0000286416.66547.47
  • Epstein, R. M., & Street, R. L. J. (2011). The values and value of patient-centered care. Annals of Family Medicine, 9(2), 100–103. doi:10.1370/afm.1239
  • Fleck, R., Rogers, Y., Yuill, N., Marshall, P., Carr, A., Rick, J., & Bonnett, V. (2009). Actions speak loudly with words: Unpacking collaboration around the table. Proceedings of the ITS 2009 International Conference on Interactive Tabletops and Surfaces. doi:10.1145/1731903.1731939
  • Gilligan, J., & Weinstein, B. E. (2014). Health literacy and patient-centered care in audiology – implications for adult aural rehabilitation. Journal of Communication Disorders, Deaf Studies & Hearing Aids, 2(110). doi:10.4172/2375-4427.1000110
  • Goff, A. E. (2013). The use of hearing aid outcome measures in the audiologic treatment of older adults. Department of Speech and Hearing Science, Ohio State University, OH, USA.
  • Grenness, C., Hickson, L., Laplante-Lévesque, A., & Davidson, B. (2014). Patient-centred audiological rehabilitation: Perspectives of older adults who own hearing aids. International Journal of Audiology, 53(Suppl 1), 68–75. doi:10.3109/14992027.2013.866280
  • Grenness, C., Hickson, L., Laplante-Lévesque, A., Meyer, C., & Davidson, B. (2015). Communication patterns in audiologic rehabilitation history-taking: Audiologists, patients, and their companions. Ear and Hearing, 36(2), 191–204. doi:10.1097/AUD.0000000000000100
  • Hanssen, G. K., & Dahl, Y. (2016). A participatory design approach to develop an interactive sound environment simulator. American Journal of Audiology, 25(3S), 268–271. doi:10.1044/2016_AJA-16-0005
  • Heinemann, T., Matthews, B., & Raudaskoski, P. (2012). Hearing aid adjustment: Translating symptom descriptions into treatment and dealing with expectations. In M. Egbert & A. Deppermann (Eds.), Hearing aids communication: Integrating social interaction, audiology and user centered design to improve communication with hearing loss and hearing technologies (pp. 113–124). Mannheim, Germany: Verlag für Gesprächsforschung.
  • Humes, L. E. (1999). Dimensions of hearing aid outcome. Journal of the American Academy of Audiology, 10(1), 26–39.
  • ISO 9241-210. (2010). Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems. Geneva, Switzerland: International Organization for Standardization.
  • Jenstad, L. M., Van Tasell, D. J., & Ewert, C. (2003). Hearing aid troubleshooting based on patient’s descriptions. Journal of the American Academy of Audiology, 14, 7.
  • Jerger, J. (2009). Ecologically valid measures of hearing aid performance. Starkey Audiology Series, 1(1), 1–4.
  • Keidser, G., & Convery, E. (2016). Self-fitting hearing aids: status quo and future predictions. Trends in Hearing, 20(1). doi:10.1177/2331216516643284
  • Kjeldsen, M. P., & Matthews, B. (2008). Talking about hearing: Designing from users’ problematisations. Proceedings of the NordiCHI 2008 Nordic Conference on Human-Computer Interaction. doi:10.1145/1463160.1463237
  • Knutson, J. F., & Lansing, C. R. (1990). The relationship between communication problems and psychological difficulties in persons with profound acquired hearing loss. Journal of Speech and Hearing Disorders, 55(8). doi:10.1044/jshd.5504.656
  • Koschmann, T., Zemel, A., Conlee-Stevens, M., Young, N., Robbs, J., & Barnhart, A. (2005). How do people learn? Members’ methods and communicative mediation. In R. Bromme, F. W. Hesse, & H. Spada (Eds.), Barriers and biases in computer-mediated knowledge communication—And how they may be overcome (pp. 265–294). Dordrecht, Netherlands: Kluwer.
  • Lauritzen, S. O., & Hyden, L.-C. (2007). Medical technologies, the life world and normality. In S. O. Lauritzen & L.-C. Hyden (Eds.), Medical technologies and the life world: the social construction of normality. London, New York: Routledge, Taylor & Francis Group.
  • Luff, P., & Heath, C. (1998). Mobility in collaboration. Proceedings of the CSCW 1998 Conference on Computer Supported Cooperative Work. doi:10.1145/289444.289505
  • Martin, G. (2014). Pragmatics and medical discourse. In K. P. Schneider & A. Barron (Eds.), Pragmatics of discourse. Berlin, Germany: Walter de Gruyter GmbH & Co KG.
  • Martin, R. L. (2004). Wear your hearing aids or your brain will rust. The Hearing Journal, 57(1), 46. doi:10.1097/01.HJ.0000292405.09805.5a
  • Matthews, B., & Heinemann, T. (2009). Technology use and patient participation in audiological consultations. Australasian Medical Journal, 1(12), 174–180. doi:10.4066/AMJ.2009.99
  • McCormack, A., & Fortnum, H. (2013). Why do people fitted with hearing aids not wear them? International Journal of Audiology, 52(5), 360–368. doi:10.3109/14992027.2013.769066
  • Meddis, R., Lecluyse, W., Tan, C. M., Panda, M. R., & Ferry, R. (2010). Beyond the audiogram: identifying and modeling patterns of hearing deficits. In E. A. Lopez-Poveda, A. R. Palmer, & R. Meddis (Eds.), The neurophysiological bases of auditory perception (pp. 631–640). New York, NY: Springer New York.
  • Mishler, E. G. (1984). The discourse of medicine: dialectics of medical interviews. Norwood, NJ: Ablex.
  • Moroney, W. F., & Lilienthal, M. G. (2008). Human factors in simulation and training – an overview. In D. A. Vincenzi, J. A. Wise, M. Mouloua, & P. A. Hancock (Eds.), Human factors in simulation and training. Boca Raton, FL: CRC Press, Inc.
  • Moylan, W. (2007). Understanding and crafting the mix: the art of recording. Burlington, MA: Focal Press.
  • Öberg, M. (2008). Approaches to audiological rehabilitation with hearing aids: Studies on pre-fitting strategies and assessment of outcomes (Doctoral thesis). Division of Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Sweden.
  • Pekkola, S. (2003). Designed for unanticipated use: Common artefacts as design principle for CSCW applications. Proceedings of the GROUP 2003 Conference on Supporting group work. doi:10.1145/958160.958218
  • Rogers, Y., & Lindley, S. (2004). Collaborating around vertical and horizontal large interactive displays: Which way is best? Interacting with Computers, 16(6), 1133–1152. doi:10.1016/j.intcom.2004.07.008
  • Rommetveit, R. (1979). On the architecture of intersubjectivity. In R. Rommetveit & R. M. Blakar (Eds.), Studies of language, thought and verbal communication (pp. 147–161). London, UK: Academic Press.
  • Roter, D., & Hall, J. A. (2006). Doctors talking with patients/patients talking with doctors: improving communication in medical visits. Westport, CT: Praeger.
  • Sacks, H., Schegloff, E. A., & Jefferson, G. (1978). A simplest systematics for the organization of turn taking for conversation. In J. N. Schenkein (Ed.), Studies in the organization of conversational interaction. New York, NY: Academic Press (originally 1974).
  • Sandman, L., & Munthe, C. (2010). Shared decision making, paternalism and patient choice. Health Care Analysis, 18(1), 60–84. doi:10.1007/s10728-008-0108-6
  • Scott, S. D., Carpendale, M. S. T., & Inkpen, K. M. (2004). Territoriality in collaborative tabletop workspaces. Proceedings of the CSCW 2004 Conference on Computer Supported Cooperative Work. doi:10.1145/1031607.1031655
  • Stahl, G. (2016). From intersubjectivity to group cognition. Computer Supported Cooperative Work (CSCW), 25(4–5), 355–384. doi:10.1007/s10606-016-9243-z
  • Star, S. L., & Griesemer, J. R. (1989). Institutional ecology, ‘Translations’, and Boundary objects: Amateurs and professionals on Berkeley’s museum of vertebrate zoology, 1907-39. Social Studies of Science, 19(3), 387–420. doi:10.1177/030631289019003001
  • Stewart, M., Brown, J. B., Donner, A., McWhinney, I. R., Oates, J., Weston, W. W., & Jordan, J. (2000). The impact of patient-centered care on outcomes. The Journal of Family Practice, 49(9), 796–804.
  • Street, R. J., Gordon, H., Ward, M., Krupat, E., & Kravitz, R. (2005). Patient participation in medical consultations: Why some patients are more involved than others. Medical Care, 43(10), 960–969. doi:10.1097/01.mlr.0000178172.40344.70
  • Tchounikine, P. (2016). Designing for appropriation: A theoretical account. Human–Computer Interaction, 1–41. doi:10.1080/07370024.2016.1203263
  • Ten Have, P. (2007). Doing conversation analysis: a practical guide. London, UK: SAGE.
  • Von Krogh, G. (1998). Care in knowledge creation. California Management Review, 40(3), 133–153. doi:10.2307/41165947
  • Webb, H., Heath, C., Vom Lehn, D., & Gibson, W. (2013). Engendering response: professional gesture and the assessment of eyesight in optometry consultations. Symbolic Interaction, 36(2), 137–158. doi:10.1002/symb.55
  • Wyatt, J. C. (2001). Management of explicit and tacit knowledge. Journal of the Royal Society of Medicine, 94(1), 6–9. doi:10.1177/014107680109400102