
Go-with-the-Flow: Tracking, Analysis and Sonification of Movement and Breathing to Build Confidence in Activity Despite Chronic Pain

Pages 335-383 | Received 19 Nov 2014, Accepted 13 Aug 2015, Published online: 11 Jan 2016

Figures & data

FIGURE 1. The three parts of the Go-with-the-Flow framework.

Note. SES = sonified exercise spaces.

FIGURE 2. (Middle) Device attached to person’s back for sonifying trunk movement during the forward reach exercise (Left). (Right) Examples of SESs for a forward reach exercise: The Flat sound is a repetition of the same tone played between the starting standing position and the maximum stretching position. The Wave sound is a combination of two tone scales (phrases), an ascending one ending at the easier stretching target and a descending one to the final more challenging target. The reaching of the easier target is marked by the highest tone (Singh et al., Citation2014).

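The Flat and Wave sounds described above can be sketched as angle-to-tone mappings. The following is a minimal illustrative reimplementation, not the authors' code: the function names, targets (45° and 75°), and the eight-tone scale are assumptions chosen for the example.

```python
def flat_sound(angle_deg, max_angle=75.0, tone_hz=440.0):
    """Flat SES: the same tone repeats anywhere between the starting
    standing position and the maximum stretch, so the sound signals
    movement but carries no positional information."""
    return tone_hz if 0.0 < angle_deg <= max_angle else None

def wave_sound(angle_deg, easy_target=45.0, final_target=75.0,
               scale_hz=(262.0, 294.0, 330.0, 349.0,
                         392.0, 440.0, 494.0, 523.0)):
    """Wave SES: an ascending tone scale up to the easier target
    (marked by the highest tone), then a descending scale toward the
    more challenging final target."""
    n = len(scale_hz)
    if angle_deg <= easy_target:
        # ascending phrase: lowest tone at the start, highest at the easy target
        idx = min(int(angle_deg / easy_target * (n - 1)), n - 1)
        return scale_hz[idx]
    # descending phrase from the highest tone toward the final target
    frac = min((angle_deg - easy_target) / (final_target - easy_target), 1.0)
    idx = (n - 1) - min(int(frac * (n - 1)), n - 1)
    return scale_hz[idx]
```

Reaching the easier target returns the highest tone (523 Hz here), matching the caption's description of that target being marked by the scale's peak.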

FIGURE 3. Final design of the wearable device: (a) Architecture of the breathing module built using Arduino UNO. (b) Breathing sensors. (c) Front and back views of the tabard with integrated breathing sensors, button held in the person’s hand for calibration, the smartphone in its pocket on the back of the trunk and the breathing sensing module shown outside its corresponding pocket. (d) Smartphone interface for selecting sonified exercise spaces and visualization of the tracked signals.


FIGURE 4. Description of the Implemented Sonified Exercise Spaces (SESs) for the Forward Reach Exercise.


FIGURE 5. Layout of room for evaluation study for both devices.


FIGURE 6. Description of Independent and Dependent Variables for each Part of the Study.

FIGURE 7. Mean (± SE) perceived and actual bend angle for all four sound conditions in the study with the wearable device.

Note. The perceived angles were obtained by translating the 1 to 5 ratings to the respective range of angles centred at 15°, 30°, 45°, 60°, 75°. *Significant differences, p < .05.

FIGURE 8. Mean (± SE) ratings (0 = worst to 6 = best) on awareness, performance, motivation, and relaxation for all four sound conditions in the study with the Kinect (left) and the wearable device (right).

Note. For sake of conciseness, the significant differences are reported in the text and in Figures B.2–B.5 in Appendix B.

FIGURE 9. Distribution of results for all the sounds.

Note. Nature-bin refers to both Kinect nature sound and water sound from the Smartphone device. Tones-bin refers to both Smartphone and Kinect tone sound. The flat sound was never selected.

FIGURE A1. (Left) To detect forward reach movement and related guarding strategies, the system monitors the distances between shoulders, hips, and feet. (Right) The progress of the sit-to-stand movement is detected by monitoring the angles between trunk, thigh, and crus.

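The angle monitoring described in Figure A1 amounts to computing the angle at a joint from three tracked points, e.g. the trunk–thigh angle at the hip from shoulder, hip, and knee positions. A minimal sketch (not the authors' implementation; coordinates and the helper name are illustrative):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by segments b->a and b->c,
    e.g. the trunk-thigh angle at the hip with a = shoulder, c = knee."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for safety
    return math.degrees(math.acos(cos_t))

# Sit-to-stand progress: the hip angle opens from roughly 90 deg (seated)
# toward 180 deg (standing) as the movement completes.
shoulder, hip, knee = (0.0, 1.0), (0.0, 0.5), (0.4, 0.5)
print(round(joint_angle(shoulder, hip, knee)))  # 90 for this seated pose
```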

FIGURE B1. Results from Statistical Comparisons between the Sound Conditions (Different Amounts of Information) on Perceived and Actual Bend Angle During Forward Reach Exercising Using the Wearable Device. Bonferroni Correction was Applied to Multiple Comparisons (p = .008 Corresponding to a Significance Level of α = 0.05).

FIGURE B2. Statistical Comparison of Sonification Effects During Physical Activity using the Kinect-based Device on Awareness, Performance, Motivation and Relaxation.

FIGURE B3. Statistical Comparison of Sonification Effects During Forward Reach with the Wearable Device on Awareness, Performance, Motivation and Relaxation.

FIGURE B4. Results from the Comparisons of the Effects Between Sound Conditions with and Without Target on Awareness, Performance, Motivation, and Relaxation.

FIGURE B5. Results from Statistical Comparisons Between the Sound Conditions (Different Amounts of Information) Using the Wearable Device and Performing Movement Aimed Towards a Target.