The calculus of language: explicit representation of emergent linguistic structure through type-theoretical paradigms

References

  • Abzianidze, Lasha. 2016, August. “Natural Solution to FraCaS Entailment Problems.” In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, 64–74. Berlin: Association for Computational Linguistics.
  • Apostel, L., B. Mandelbrot, and A. Morf. 1957. Logique, Langage et Théorie de l'Information. Paris: Presses Universitaires de France.
  • Avraham, Oded, and Yoav Goldberg. 2017, April. “The Interplay of Semantics and Morphology in Word Embeddings.” In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 422–426. Valencia: Association for Computational Linguistics.
  • Bengio, Yoshua. 2008. “Neural Net Language Models.” Scholarpedia 3 (1): 3881.
  • Blevins, Terra, Omer Levy, and Luke Zettlemoyer. 2018. “Deep RNNs Encode Soft Hierarchical Syntax.”
  • Bloomfield, Leonard. 1935. Language. London: G. Allen & Unwin, Ltd.
  • Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomáš Mikolov. 2016. “Enriching Word Vectors with Subword Information.” CoRR abs/1607.04606. http://arxiv.org/abs/1607.04606.
  • Bradley, Tai-Danae. 2020. “At the Interface of Algebra and Statistics.” ArXiv abs/2004.05631.
  • Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, and Arvind Neelakantan, et al. 2020. “Language Models are Few-Shot Learners.”
  • Chater, Nick, Alexander Clark, John A. Goldsmith, and Amy Perfors. 2015. Empiricism and Language Learnability. 1st ed. Oxford: Oxford University Press. OCLC: ocn907131354.
  • Chen, Tongfei, Yunmo Chen, and Benjamin Van Durme. 2020, July. “Hierarchical Entity Typing via Multi-level Learning to Rank.” In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 8465–8475. Online: Association for Computational Linguistics.
  • Choi, Eunsol, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018, July. “Ultra-Fine Entity Typing.” In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 87–96. Melbourne: Association for Computational Linguistics.
  • Chomsky, Noam. 1953. “Systems of Syntactic Analysis.” The Journal of Symbolic Logic 18 (3): 242–256.
  • Chomsky, Noam. 1955. “Logical Syntax and Semantics: Their Linguistic Relevance.” Language 31 (1): 36–45.
  • Chomsky, Noam. 1957. Syntactic Structures. The Hague: Mouton and Co.
  • Chomsky, Noam. 1969. Quine's Empirical Assumptions, 53–68. Dordrecht: Springer Netherlands.
  • Clark, Stephen Hedley, Laura Rimell, Tamara Polajnar, and Jean Maillard. 2016. “The Categorial Framework for Compositional Distributional Semantics.”
  • Clark, Kevin, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. “What Does BERT Look At? An Analysis of BERT's Attention.”
  • Coecke, Bob. 2019. “The Mathematics of Text Structure.”
  • Coecke, Bob, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. “Mathematical Foundations for a Compositional Distributional Model of Meaning.”
  • de Groote, Philippe, ed. 1995. The Curry-Howard Isomorphism. Vol. 8 of Cahiers du Centre de Logique. Louvain: Université Catholique de Louvain, Academia.
  • Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” CoRR abs/1810.04805. http://arxiv.org/abs/1810.04805.
  • Dinu, Georgiana, Miguel Ballesteros, Avirup Sil, Sam Bowman, Wael Hamza, Anders Søgaard, Tahira Naseem, and Yoav Goldberg, eds. 2018, July. Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP. Melbourne: Association for Computational Linguistics.
  • Ducrot, Oswald. 1973. Le structuralisme en linguistique. Paris: Éditions du Seuil.
  • Enguehard, Émile, Yoav Goldberg, and Tal Linzen. 2017. “Exploring the Syntactic Abilities of RNNs with Multi-task Learning.” In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), 3–14.
  • Firth, John Rupert. 1957. “A Synopsis of Linguistic Theory 1930–1955.” In Studies in Linguistic Analysis, 1–32. Oxford: Blackwell.
  • Fouqueré, Christophe, Alain Lecomte, Myriam Quatrini, Pierre Livet, and Samuel Tronçon. 2018. Mathématique du dialogue: sens et interaction. Paris: Hermann.
  • Gastaldi, Juan Luis. 2020. “Why Can Computers Understand Natural Language?” Philosophy & Technology. https://link.springer.com/article/10.1007/s13347-020-00393-9#citeas
  • Girard, Jean-Yves. 1987. “Linear Logic.” Theoretical Computer Science 50 (1): 1–101.
  • Girard, Jean-Yves. 1989. Towards a Geometry of Interaction. Vol. 92 of Contemporary Mathematics, 69–108. Providence, RI: American Mathematical Society.
  • Girard, Jean-Yves. 2001. “Locus Solum: From the Rules of Logic to the Logic of Rules.” Mathematical Structures in Computer Science 11 (3): 301–506.
  • Goldberg, Yoav. 2019. “Assessing BERT's Syntactic Abilities.”
  • Harris, Zellig S. 1960. Structural Linguistics. Chicago: University of Chicago Press.
  • Harris, Zellig S. 1970a. “Distributional Structure.” In Papers in Structural and Transformational Linguistics, 775–794. Dordrecht: Springer.
  • Harris, Zellig S. 1970b. “Computable Syntactic Analysis: The 1959 Computer Sentence-Analyzer.” In Papers in Structural and Transformational Linguistics, 253–277. Dordrecht: Springer Netherlands.
  • Harris, Zellig S. 1970c. “Morpheme Boundaries within Words: Report on a Computer Test.” In Papers in Structural and Transformational Linguistics, 68–77. Dordrecht: Springer Netherlands.
  • Hewitt, John, and Christopher D. Manning. 2019, June. “A Structural Probe for Finding Syntax in Word Representations.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4129–4138. Minneapolis, Minnesota: Association for Computational Linguistics.
  • Hjelmslev, Louis. 1953. Prolegomena to a Theory of Language. Baltimore: Waverly Press.
  • Hjelmslev, Louis. 1971. La structure fondamentale du langage, 177–231. Paris: Éditions de Minuit.
  • Hjelmslev, Louis. 1975. Résumé of a Theory of Language. Travaux du Cercle linguistique de Copenhague 16. Copenhagen: Nordisk Sprog-og Kulturforlag.
  • Jakobson, Roman. 1967. Preliminaries to Speech Analysis: The Distinctive Features and Their Correlates. Cambridge, MA: M.I.T. Press.
  • Jakobson, Roman. 2001. Roman Jakobson: Selected Writings. Berlin: Mouton De Gruyter.
  • Krishnamurthy, Jayant, Pradeep Dasigi, and Matt Gardner. 2017, September. “Neural Semantic Parsing with Type Constraints for Semi-Structured Tables.” In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 1516–1526. Copenhagen: Association for Computational Linguistics.
  • Krivine, Jean-Louis. 2010. “Realizability Algebras: A Program to Well Order R.” Logical Methods in Computer Science 7 (3).
  • Lambek, Joachim. 1958. “The Mathematics of Sentence Structure.” The American Mathematical Monthly 65 (3): 154–170.
  • Landauer, Thomas K., Danielle S. McNamara, Simon Dennis, and Walter Kintsch, eds. 2007. Handbook of Latent Semantic Analysis. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Lenci, Alessandro. 2008. “Distributional Semantics in Linguistic and Cognitive Research.” From Context to Meaning: Distributional Models of the Lexicon in Linguistics and Cognitive Science, Italian Journal of Linguistics 1 (20): 1–31.
  • Lenci, Alessandro. 2018. “Distributional Models of Word Meaning.” Annual Review of Linguistics 4 (1): 151–171.
  • Levy, Omer, and Yoav Goldberg. 2014a. “Linguistic Regularities in Sparse and Explicit Word Representations.” In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, CoNLL 2014, Baltimore, Maryland, June 26-27, 2014, 171–180.
  • Levy, Omer, and Yoav Goldberg. 2014b. “Neural Word Embedding As Implicit Matrix Factorization.” In Proceedings of the 27th International Conference on Neural Information Processing Systems – Volume 2, NIPS'14, 2177–2185. Cambridge, MA: MIT Press.
  • Levy, Omer, Yoav Goldberg, and Ido Dagan. 2015. “Improving Distributional Similarity with Lessons Learned From Word Embeddings.” Transactions of the Association for Computational Linguistics 3: 211–225.
  • Lin, Ying, and Heng Ji. 2019, November. “An Attentive Fine-Grained Entity Typing Model with Latent Type Representation.” In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 6197–6202. Hong Kong: Association for Computational Linguistics.
  • Linzen, Tal, Emmanuel Dupoux, and Yoav Goldberg. 2016. “Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies.”
  • MacWhinney, Brian, ed. 1999. The Emergence of Language. Carnegie Mellon Symposia on Cognition. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Maniglier, Patrice. 2006. La vie énigmatique des signes. Paris: Léo Scheer.
  • Manning, Christopher D., Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy. 2020. “Emergent Linguistic Structure in Artificial Neural Networks Trained by Self-Supervision.” Proceedings of the National Academy of Sciences.
  • McEnery, Anthony M., and Andrew Wilson. 2001. Corpus Linguistics: An Introduction. Edinburgh: Edinburgh University Press.
  • McGee Wood, Mary. 1993. Categorial Grammars. London: Routledge.
  • Michel, Jean-Baptiste, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, Joseph P. Pickett, et al., and The Google Books Team. 2010. “Quantitative Analysis of Culture Using Millions of Digitized Books.” Science 331 (6014): 176–182.
  • Mikolov, Tomáš, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space.” CoRR abs/1301.3781.
  • Minsky, Marvin L. 1991. “Logical Versus Analogical Or Symbolic Versus Connectionist Or Neat Versus Scruffy.” AI Magazine 12 (2): 34.
  • Miquel, Alexandre. 2020. “Implicative Algebras: A New Foundation for Realizability and Forcing.” Mathematical Structures in Computer Science 30 (5): 458–510.
  • Moot, Richard, and Christian Retoré. 2012. The Logic of Categorial Grammars: A Deductive Account of Natural Language Syntax and Semantics. 1st ed. Lecture Notes in Computer Science 6850. Berlin: Springer.
  • Peters, Matthew E., Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. “Deep Contextualized Word Representations.” CoRR abs/1802.05365. http://arxiv.org/abs/1802.05365.
  • Pilehvar, Mohammad Taher, and Jose Camacho-Collados. 2020. “Embeddings in Natural Language Processing. Theory and Advances in Vector Representation of Meaning.” Synthesis Lectures on Human Language Technologies 13 (4): 1–175. https://doi.org/10.2200/S01057ED1V01Y202009HLT047.
  • Quine, W. V. 2013. Word and Object. Cambridge, MA: MIT Press.
  • Radford, Alec. 2018. “Improving Language Understanding by Generative Pre-Training.”
  • Raiman, Jonathan, and Olivier Raiman. 2018. “DeepType: Multilingual Entity Linking by Neural Type System Evolution.”
  • Riba, Colin. 2007. “Strong Normalization as Safe Interaction.” In LICS 2007, 13–22.
  • Sahlgren, Magnus. 2006. “The Word-Space Model: Using Distributional Analysis to Represent Syntagmatic and Paradigmatic Relations between Words in High-Dimensional Vector Spaces.” PhD diss., Stockholm University, Stockholm.
  • Sahlgren, Magnus. 2008. “The Distributional Hypothesis.” Special Issue of the Italian Journal of Linguistics 1 (20): 33–53.
  • de Saussure, Ferdinand. 1959. Course in General Linguistics. Translated by Wade Baskin. New York: McGraw-Hill.
  • Schnabel, Tobias, Igor Labutov, David M. Mimno, and Thorsten Joachims. 2015. “Evaluation Methods for Unsupervised Word Embeddings.” In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, 298–307.
  • Sennrich, Rico, Barry Haddow, and Alexandra Birch. 2016, August. “Neural Machine Translation of Rare Words with Subword Units.” In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1715–1725. Berlin: Association for Computational Linguistics.
  • Shannon, Claude E. 1948. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (3): 379–423.
  • Turney, Peter D., and Patrick Pantel. 2010. “From Frequency to Meaning: Vector Space Models of Semantics.” CoRR abs/1003.1141.
  • Wijnholds, Gijs, and Mehrnoosh Sadrzadeh. 2019. “A Type-Driven Vector Semantics for Ellipsis with Anaphora Using Lambek Calculus with Limited Contraction.” Journal of Logic, Language and Information 28 (2): 331–358.
  • Yang, Charles. 2016. The Price of Linguistic Productivity: How Children Learn to Break the Rules of Language. Cambridge, MA: The MIT Press.