Figures & data
Table 1. Selection of MST gestures.
Table 2. Visual design inspiration.
Table 3. Details of vibrotactile stimuli for the MST gestures selected from Wei et al. (Citation2022b).
Figure 2. Interface for texting. (a) Interface for texting words, (b) interface for participants to input MST gestures (the avatar is the experimenter), (c) positions of each MST gesture, (d) interface for participants to receive MST gestures (the avatar is the participant), and (e) positions of each MST gesture. We developed this texting interface based on an open-source application (https://github.com/Baloneo/LANC).
![Figure 2.](/cms/asset/103af52f-0471-4456-8d3b-7c8aa27d8908/hihc_a_2148883_f0002_c.jpg)
Figure 3. Interface for video calls. (a) Original interface for video calls, (b) interface for participants to input MST gestures (the full-screen image is the experimenter), (c) positions of each MST gesture on the experimenter’s image, (d) interface for participants to receive MST gestures (the image box in the upper right corner shows the participant), and (e) positions of each MST gesture on the participant’s image. We developed this video calling interface based on an open-source application (https://github.com/xmtggh/VideoCalling).
![Figure 3.](/cms/asset/aee94fa1-7664-46f2-be71-274f72246527/hihc_a_2148883_f0003_c.jpg)
Table 4. Description of the interface structure of texting.
Table 5. Description of the interface structure of video calling.
Table 6. Descriptions of variables.
Figure 6. Test environment. (a) Texting and (b) video calling (the second phone on the desk is used for voice transmission).
![Figure 6.](/cms/asset/941227da-1cfd-4a5c-a707-2a3e7a6e6735/hihc_a_2148883_f0006_c.jpg)
Table 7. Example questionnaire of NMSPM (Harms & Biocca, Citation2004).
Table 8. Test communication modes, conditions, and activities for participants.
Figure 8. Dimensions of social presence. Between-subjects analysis—video calling and texting (VT: vibrotactile stimuli).
![Figure 8.](/cms/asset/e60a4925-c33d-421b-b576-01883f0ed746/hihc_a_2148883_f0008_c.jpg)
Figure 9. Dimensions of social presence. Within-subjects analysis—with VT and without VT (VT: vibrotactile stimuli).
![Figure 9.](/cms/asset/2df14d92-0f14-4d29-b0c3-c86a54125e75/hihc_a_2148883_f0009_c.jpg)
Table 9. Qualitative analysis of texting.
Table 10. Qualitative analysis of video calling.