ABSTRACT
Smart Interactive Experiences (SIEs) are usage situations enabled by the Internet of Things that empower users to interact with the surrounding environment. The goal of our research is to define methodologies and software environments to support the design of SIEs; more specifically, we focus on design paradigms suitable for experts in given domains who, however, might not be experts in technology. In this context, this paper discusses some trade-offs that we identified between six different dimensions that characterise the quality of software environments for SIE design. The trade-offs emerged from the analysis of data collected in an experimental study that compared three different design paradigms to understand to what extent each paradigm supports the creative process for SIE design. After reporting on the study procedure and the data analyses, the paper illustrates how the resulting trade-offs led us to identify alternatives for SIE design paradigms, and to structure on their basis a modular architecture of a software platform where the strengths of the three paradigms can be exploited flexibly, i.e., depending on the constraints and the requirements characterising specific design situations.
Disclosure statement
No potential conflict of interest was reported by the authors.
ORCID
Carmelo Ardito http://orcid.org/0000-0001-8993-9855
Giuseppe Desolda http://orcid.org/0000-0001-9894-2116
Rosa Lanzilotti http://orcid.org/0000-0002-2039-8162
Alessio Malizia http://orcid.org/0000-0002-2601-7009
Maristella Matera http://orcid.org/0000-0003-0552-8624
Notes
1 The Pearson correlation coefficient measures the strength and direction of the association between two variables measured on at least an interval scale, as is the case for NASA-TLX, AttrakDiff and CSI.
The Spearman correlation coefficient measures the strength and direction of the association between two variables measured on at least an ordinal scale, which is the case for the other dimensions.
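To make the distinction between the two coefficients concrete, the following sketch (not code from the paper; the data are made up for illustration) computes both in plain Python, exploiting the fact that the Spearman coefficient is simply the Pearson coefficient applied to the ranks of the values:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences (interval scale)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(values):
    """1-based rank positions; tied values receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks (ordinal scale)."""
    return pearson(ranks(x), ranks(y))

# Hypothetical per-participant scores: an interval-scale index
# (e.g. a NASA-TLX-like workload score) vs. an ordinal 1-7 rating.
tlx = [55.0, 40.5, 62.0, 38.0, 70.5, 45.0]
rating = [5, 3, 6, 2, 7, 3]

print(pearson(tlx, rating), spearman(tlx, rating))
```

In practice one would use a statistics library (e.g. `scipy.stats.pearsonr` and `scipy.stats.spearmanr`); the hand-rolled version above only serves to show why an ordinal scale suffices for Spearman but not for Pearson.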
2 In the current prototype, both the Tactile and the Tangible systems use the Google Vision APIs (https://cloud.google.com/vision) for visual recognition of tangible attributes, smart objects, Post-it notes and bar codes.