Invited Article

Assessing automatic VOT annotation using unimpaired and impaired speech

Pages 624–634 | Received 24 Jun 2017, Accepted 15 Jun 2018, Published online: 09 Jan 2019

References

  • Adi, Y., Keshet, J., Cibelli, E., & Goldrick, M. (2017). Sequence segmentation using joint RNN and structured prediction models. In Proceedings of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, Louisiana, USA.
  • Adi, Y., Keshet, J., Cibelli, E., Gustafson, E., Clopper, C., & Goldrick, M. (2016a). Automatic measurement of vowel duration via structured prediction. The Journal of the Acoustical Society of America, 140, 4517–4527. doi:10.1121/1.4972527
  • Adi, Y., Keshet, J., Dmitrieva, O., & Goldrick, M. (2016b). Automatic measurement of voice onset time and prevoicing using recurrent neural networks. In the 17th Annual Conference of the International Speech Communication Association, San Francisco, California, USA.
  • Baayen, R.H., Davidson, D.J., & Bates, D.M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390–412. doi:10.1016/j.jml.2007.12.005
  • Baese-Berk, M., & Goldrick, M. (2009). Mechanisms of interaction in speech production. Language and Cognitive Processes, 24, 527–554. doi:10.1080/01690960802299378
  • Bakir, G., Hofmann, T., Schölkopf, B., Smola, A., Taskar, B., & Vishwanathan, S. (2007). Predicting structured data. Cambridge, Massachusetts, USA: MIT Press.
  • Buchwald, A., Gagnon, B., & Miozzo, M. (2017). Identification and remediation of phonological and motor errors in acquired sound production impairment. Journal of Speech, Language, and Hearing Research, 60, 1726–1738. doi:10.1044/2017_JSLHR-S-16-0240
  • Buchwald, A., & Miozzo, M. (2011). Finding levels of abstraction in speech production: Evidence from sound-production impairment. Psychological Science, 22, 1113–1119. doi:10.1177/0956797611417723
  • Buchwald, A., & Miozzo, M. (2012). Phonological and motor errors in individuals with acquired sound production impairment. Journal of Speech, Language, and Hearing Research, 55, 1573–1586.
  • Buz, E., Tanenhaus, M.K., & Jaeger, T.F. (2016). Dynamically adapted context-specific hyper-articulation: Feedback from interlocutors affects speakers' subsequent pronunciations. Journal of Memory and Language, 89, 68–86. doi:10.1016/j.jml.2015.12.009
  • Buz, E. (2016). Speaking with a (slightly) new voice: Investigating speech changes, communicative goals, and learning from past communicative success (Doctoral dissertation). University of Rochester.
  • Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., & Singer, Y. (2006). Online passive-aggressive algorithms. Journal of Machine Learning Research, 7, 551–585.
  • Dissen, Y., & Keshet, J. (2016). Formant estimation and tracking using deep learning. In the 17th Annual Conference of the International Speech Communication Association, San Francisco, California, USA.
  • Eimas, P., Siqueland, E., Jusczyk, P., & Vigorito, J. (1971). Speech perception in infants. Science, 171, 303–306. doi:10.1126/science.171.3968.303
  • Garofolo, J., Lamel, L., Fisher, W., Fiscus, J., Pallett, D., & Dahlgren, N. (1993). TIMIT acoustic-phonetic continuous speech corpus. Philadelphia: Linguistic Data Consortium.
  • Godfrey, J., Holliman, E., & McDaniel, J. (1992). Switchboard: Telephone speech corpus for research and development. In Proceedings of ICASSP-92 (Vol. 1, pp. 517–520), San Francisco, California, USA.
  • Goldrick, M., Keshet, J., Gustafson, E., Heller, J., & Needle, J. (2016). Automatic analysis of slips of the tongue: Insights into the cognitive architecture of speech production. Cognition, 149, 31–39. doi:10.1016/j.cognition.2016.01.002
  • Kirov, C., & Wilson, C. (2012). The specificity of online variation in speech production. In Proceedings of the 34th Annual Meeting of the Cognitive Science Society, Sapporo, Japan.
  • Kuznetsova, A., Brockhoff, P.B., & Christensen, R.H.B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13), 1–26. doi:10.18637/jss.v082.i13
  • Lisker, L., & Abramson, A.S. (1964). A cross-language study of voicing in initial stops: Acoustical measurements. Word, 20, 384–422. doi:10.1080/00437956.1964.11659830
  • Nowozin, S., Gehler, P.V., Jancsary, J., & Lampert, C.H. (2014). Advanced structured prediction. Cambridge, Massachusetts, USA: MIT Press.
  • Paterson, N. (2011). Interactions in bilingual speech processing (Unpublished doctoral dissertation). Northwestern University.
  • Sha, F., & Saul, L. (2005). Real-time pitch determination of one or more voices by nonnegative matrix factorization. In Advances in Neural Information Processing Systems 17 (NIPS 2004) (pp. 1233–1240), Vancouver, Canada.
  • Sheena, Y., Hejná, M., Adi, Y., & Keshet, J. (2017). Automatic measurement of pre-aspiration. In the 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden.
  • Sonderegger, M., Bane, M., & Graff, P. (2017). The medium-term dynamics of accents on reality television. Language, 93, 598–640. doi:10.1353/lan.2017.0038
  • Sonderegger, M., & Keshet, J. (2012). Automatic measurement of voice onset time using discriminative structured prediction. The Journal of the Acoustical Society of America, 132, 3965–3979. doi:10.1121/1.4763995
  • Talkin, D. (1995). A robust algorithm for pitch tracking (RAPT). In W. Kleijn & K. Paliwal (Eds.), Speech coding and synthesis (pp. 495–518). New York: Elsevier.
