Research Article

LIA: A Virtual Assistant that Can Be Taught New Commands by Speech


References

  • Argall, B. D., Chernova, S., Veloso, M., & Browning, B. (2009). A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5), 469–483. doi:https://doi.org/10.1016/j.robot.2008.10.024
  • Azaria, A., Krishnamurthy, J., & Mitchell, T. M. (2016). Instructable intelligent personal agent. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) (pp. 2681–2689). Phoenix, AZ: AAAI Press.
  • Azaria, A., & Hong, J. (2016). Recommender systems with personality. In Proceedings of the 10th ACM Conference on Recommender Systems (pp. 207-210). ACM, Boston, MA.
  • Bellegarda, J. R. (2014). Spoken language understanding for natural interaction: The Siri experience. In Natural interaction with robots, knowbots and smartphones (pp. 3–14). New York, NY: Springer.
  • Biermann, A. W. (1983). Natural language programming. Dordrecht, Netherlands: Springer.
  • Bohus, D., Raux, A., Harris, T. K., Eskenazi, M., & Rudnicky, A. I. (2007). Olympus: An open-source framework for conversational spoken language interface research. In Academic and industrial research in dialog technologies (pp. 32–39).
  • Calinon, S., Guenter, F., & Billard, A. (2007). On learning, representing, and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 37(2), 286–298. doi:https://doi.org/10.1109/TSMCB.2006.886952
  • Cassell, J. (2000). Embodied conversational interface agents. Communications of the ACM, 43(4), 70–78. doi:https://doi.org/10.1145/332051.332075
  • Chkroun, M., & Azaria, A. (2018). “Did I say something wrong?”: Towards a safe collaborative chatbot. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). New Orleans, LA: AAAI Press.
  • Daemen, J., & Rijmen, V. (2013). The design of Rijndael: AES, the Advanced Encryption Standard. Springer Science & Business Media.
  • Gilbert, M., Arizmendi, I., Bocchieri, E., Caseiro, D., Goffin, V., Ljolje, A., … Wilpon, J. G. (2011). Your mobile virtual assistant just got smarter! In INTERSPEECH 2011, Twelfth Annual Conference of the International Speech Communication Association (pp. 1101–1104). Florence, Italy.
  • Herrmann, J. (1996). Different ways to support intelligent assistant systems by use of machine learning methods. International Journal of Human-Computer Interaction, 8(3), 287–308. doi:https://doi.org/10.1080/10447319609526153
  • Huang, T. H. K., Azaria, A., & Bigham, J. P. (2016). Instructablecrowd: Creating if-then rules via conversations with the crowd. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 1555–1562). ACM, San Jose, CA.
  • Klopfenstein, L. C., Delpriori, S., Malatini, S., & Bogliolo, A. (2017). The rise of bots: A survey of conversational interfaces, patterns, and paradigms. In Proceedings of the 2017 Conference on Designing Interactive Systems (pp. 555–565). Edinburgh, Scotland: ACM.
  • Kuklinski, K., Fischer, K., Marhenke, I., Kirstein, F., Solvason, D., Kruger, N., Savarimuthu, T. R., et al. (2014). Teleoperation for learning by demonstration: Data glove versus object manipulation for intuitive robot control. Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2014 6th International Congress on (pp. 346–351). IEEE, St. Petersburg, Russia.
  • Lamere, P., Kwok, P., Walker, W., Gouvêa, E. B., Singh, R., Raj, B., & Wolf, P. (2003). Design of the CMU Sphinx-4 decoder. In INTERSPEECH, Eighth European Conference on Speech Communication and Technology, Geneva, Switzerland.
  • Li, T.-J.-J., Azaria, A., & Myers, B. A. (2017). SUGILITE: Creating multimodal smartphone automation by demonstration. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17) (pp. 6038–6049). New York, NY: ACM.
  • Mark, W., & Perrault, R. (2004). CALO: A cognitive agent that learns and organizes. Retrieved from http://www.ai.sri.com/project/CALO
  • Nakaoka, S., Nakazawa, A., Kanehiro, F., Kaneko, K., Morisawa, M., Hirukawa, H., & Ikeuchi, K. (2007). Learning from observation paradigm: Leg task models for enabling a biped humanoid robot to imitate human dances. The International Journal of Robotics Research, 26(8), 829–844. doi:https://doi.org/10.1177/0278364907079430
  • Quirk, C., Mooney, R., & Galley, M. (2015, July). Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL-15) (pp. 878–888). Beijing, China.
  • Reed, A. (2010). Creating interactive fiction with inform 7. Boston, MA: Cengage Learning.
  • Sohn, J., Kim, N. S., & Sung, W. (1999). A statistical model-based voice activity detection. IEEE Signal Processing Letters, 6(1), 1–3. doi:https://doi.org/10.1109/97.736233
  • Steedman, M., & Baldridge, J. (2011). Combinatory categorial grammar. In Non-Transformational Syntax: Formal and Explicit Models of Grammar (pp. 181–224).
  • Thorne, S., Ball, D., & Lawson, Z. (2013). Reducing error in spreadsheets: Example driven modeling versus traditional programming. International Journal of Human-Computer Interaction, 29(1), 40–53. doi:https://doi.org/10.1080/10447318.2012.677744
  • Wahlster, W., & Kobsa, A. (1989). User models in dialog systems. In User models in dialog systems (pp. 4–34). Berlin, Heidelberg: Springer.
  • Watanabe, T., Okubo, M., Nakashige, M., & Danbara, R. (2004). Interactor: Speechdriven embodied interactive actor. International Journal of Human-Computer Interaction, 17(1), 43–60. doi:https://doi.org/10.1207/s15327590ijhc1701_4
  • Zettlemoyer, L. S., & Collins, M. (2005). Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI ’05, Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence. AUAI Press.
