Intention-Sensing Recipe Guidance via User Accessing to Objects
