ABSTRACT
This mixed-methods study details the development, usability testing, and user experience evaluation of an informational mHealth app for older women who are considering the diagnosis, treatment, and prevention of osteoporosis. Developers used heuristics from Universal Design theory adapted for older users. Formative usability testing assessed performance on 16 functional, informational, and navigational tasks. Data included transcripts of audio recordings, observer notes from video recordings, task completion times, and the results of a post-testing participant survey that evaluated user experience for app functions and information content. Participants interacted with the app in productive ways and with relative ease. The study also identified several app- and context-specific challenges that designers will address in future iterations of the tool. Researchers who are developing other mHealth products may benefit from using this study's methodological framework, which includes both qualitative and quantitative results.
Disclosure of potential conflict of interest
The author has no competing interests related to this research.
Notes
1. Following the recommendations of Doak et al. (Citation1996), the NIH information was modified to a Simple Measure of Gobbledygook (SMOG) Index reading level of 6.9.
2. For this version of the app, the ABH FRC was inserted as a parallel site link because researchers did not have access to the application programming interface (API) at that time. The ABH FRC page reformatted correctly for a smartphone screen; however, the site text and the PDF results output document were not resizable. Later versions of the app will incorporate the API into the interface with the same heuristics used in the osteoporosis information section and the decision self-efficacy survey.
3. Alternate methods of interaction were not developed for this version of the prototype, so a question about these functions was omitted from the survey and the usability testing.
4. Times for functional tasks are based on a completed attempt, even if more than one attempt was made.
5. Researchers did not measure the time for task completion of the second survey attempt. The purpose of the second survey attempt was to create at least two sets of survey results so that the participants could compare them.
6. Participants responded to the following task prompt: How can you compare the results of the two surveys you have completed?
Additional information
Notes on contributors
Russell Kirkscey
Russell Kirkscey is Assistant Professor of English and Technical and Professional Writing at Penn State Harrisburg. His research interests include medical rhetoric, health information technology, user experience evaluation, and capstone experiences.