Studying the Effect of Syntactic Simplification on Text Summarization
