The detection of distributional discrepancy for language GANs

Pages 1736-1750 | Received 22 Jan 2022, Accepted 11 May 2022, Published online: 14 Jun 2022

References

  • Bengio, S., Vinyals, O., Jaitly, N., & Shazeer, N. (2015). Scheduled sampling for sequence prediction with recurrent neural networks. International Conference on Neural Information Processing Systems, (pp. 1171–1179). https://dl.acm.org/doi/10.5555/2969239.2969370
  • Caccia, M., Caccia, L., Fedus, W., Larochelle, H., Pineau, J., & Charlin, L. (2020). Language GANs falling short. International Conference on Learning Representations. https://openreview.net/pdf?id=BJgza6VtPB
  • Cai, P., Chen, X., Jin, P., Wang, H., & Li, T. (2021). Distributional discrepancy: A metric for unconditional text generation. Knowledge-Based Systems, 2021(217), 1–9. https://doi.org/10.1016/j.knosys.2021.106850
  • Cao, Z., Zhou, Y., Yang, A., & Peng, S. (2021). Deep transfer learning mechanism for fine-grained cross-domain sentiment classification. Connection Science, 33(4), 911–928. https://doi.org/10.1080/09540091.2021.1912711
  • Che, T., Li, Y., Zhang, R., Hjelm, D., & Bengio, Y. (2017). Maximum-likelihood augmented discrete generative adversarial networks. https://arxiv.org/abs/1702.07983
  • Chen, L., Dai, S., Tao, C., Shen, D., Gan, Z., Zhang, H., Zhang, Y., & Carin, L. (2018). Adversarial text generation via feature-mover's distance. International Conference on Neural Information Processing Systems, (pp. 4671–4682). https://dl.acm.org/doi/10.5555/3327345.3327377
  • Cífka, O., Severyn, A., Alfonseca, E., & Filippova, K. (2018). Eval all, trust a few, do wrong to none: Comparing sentence generation models. https://arxiv.org/abs/1804.07972
  • de Masson d'Autume, C., Rosca, M., Rae, J., & Mohamed, S. (2019). Training language GANs from scratch. International Conference on Neural Information Processing Systems, (pp. 4300–4311). https://dl.acm.org/doi/10.5555/3454287.3454674
  • Fedus, W., Goodfellow, I., & Dai, A. (2018). MaskGAN: Better text generation via filling in the ______. International Conference on Learning Representations. https://openreview.net/pdf?id=ByOExmWAb
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. International Conference on Neural Information Processing Systems, (pp. 2672–2680). https://dl.acm.org/doi/10.5555/2969033.2969125
  • Gu, F., & Cheung, Y. (2018). Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm. IEEE Transactions on Evolutionary Computation, 22(2), 211–225. https://doi.org/10.1109/TEVC.2017.2695579
  • Guo, J., Lu, S., Han, C., Zhang, W., & Wang, J. (2018). Long text generation via adversarial training with leaked information. AAAI Conference on Artificial Intelligence, (pp. 5141–5148). https://dlnext.acm.org/doi/10.5555/3504035.3504665
  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition, (pp. 770–778). https://doi.org/10.1109/CVPR.2016.90
  • He, T., Zhang, J., Zhou, Z., & Glass, J. (2021). Exposure bias versus self-recovery: Are distortions really incremental for autoregressive text generation? https://arxiv.org/abs/1905.10617
  • Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
  • Jang, E., Gu, S., & Poole, B. (2017). Categorical reparameterization with Gumbel-Softmax. International Conference on Learning Representations. https://openreview.net/pdf?id=rkE3y85ee
  • Kim, Y. (2014). Convolutional neural networks for sentence classification. Conference on Empirical Methods in Natural Language Processing, (pp. 1746–1751). https://doi.org/10.3115/v1/D14-1181
  • Li, Y., Dai, H., & Zheng, Z. (2022). Selective transfer learning with adversarial training for stock movement prediction. Connection Science, 34(1), 492–510. https://doi.org/10.1080/09540091.2021.2021143
  • Lin, K., Li, D., He, X., Zhang, Z., & Sun, M. (2017). Adversarial ranking for language generation. International Conference on Neural Information Processing Systems, (pp. 3158–3168). https://dl.acm.org/doi/10.5555/3294996.3295075
  • Lin, N., Li, J., & Jiang, S. (2022). A simple but effective method for Indonesian automatic text summarisation. Connection Science, 34(1), 29–43. https://doi.org/10.1080/09540091.2021.1937942
  • Nie, W., Narodytska, N., & Patel, A. (2019). RelGAN: Relational generative adversarial networks for text generation. International Conference on Learning Representations. https://openreview.net/pdf?id=rJedV3R5tm
  • Papineni, K., Roukos, S., Ward, T., & Zhu, W. (2002). Bleu: A method for automatic evaluation of machine translation. Annual Meeting of the Association for Computational Linguistics, (pp. 311–318). https://doi.org/10.3115/1073083.1073135
  • Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners (Technical Report). OpenAI.
  • Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. International Conference on Neural Information Processing Systems, (pp. 2234–2242).
  • Santoro, A., Faulkner, R., Raposo, D., Rae, J., Chrzanowski, M., Weber, T., Wierstra, D., Vinyals, O., Pascanu, R., & Lillicrap, T. (2018). Relational recurrent neural networks. International Conference on Neural Information Processing Systems, (pp. 7299–7310). https://dl.acm.org/doi/epdf/10.5555/3327757.3327832
  • Semeniuta, S., Severyn, A., & Gelly, S. (2019). On accurate evaluation of GANs for language generation. https://arxiv.org/pdf/1806.04936
  • Shi, Z., Chen, X., Qiu, X., & Huang, X. (2018). Toward diverse text generation with inverse reinforcement learning. International Joint Conference on Artificial Intelligence, (pp. 4361–4367). https://dl.acm.org/doi/abs/10.5555/3304222.3304376
  • Sutton, R., McAllester, D., Singh, S., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. International Conference on Neural Information Processing Systems, (pp. 1057–1063). https://dl.acm.org/doi/10.5555/3009657.3009806
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. International Conference on Neural Information Processing Systems, (pp. 5998–6008).
  • Williams, R. (1992). Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3), 229–256. https://doi.org/10.1007/BF00992696
  • Wu, Q., Zhu, B., Yong, B., Wei, Y., Jiang, X., Zhou, R., & Zhou, Q. (2021). ClothGAN: Generation of fashionable Dunhuang clothes using generative adversarial networks. Connection Science, 33(2), 341–358. https://doi.org/10.1080/09540091.2020.1822780
  • Wu, S., Liu, Y., Zou, Z., & Weng, T. (2022). S_I_LSTM: Stock price prediction based on multiple data sources and sentiment analysis. Connection Science, 34(1), 44–62.
  • Xu, J., Ren, X., Lin, J., & Sun, X. (2018). Diversity-promoting GAN: A cross-entropy based generative adversarial network for diversified text generation. Conference on Empirical Methods in Natural Language Processing, (pp. 3940–3949). https://doi.org/10.18653/v1/D18-1428
  • Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., & Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. International Conference on Machine Learning, (pp. 2048–2057). https://dl.acm.org/doi/10.5555/3045118.3045336
  • Yu, L., Zhang, W., Wang, J., & Yu, Y. (2017). SeqGAN: Sequence generative adversarial nets with policy gradient. AAAI Conference on Artificial Intelligence, (pp. 2852–2858). https://dl.acm.org/doi/10.5555/3298483.3298649
  • Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. International Conference on Neural Information Processing Systems, (pp. 9054–9065). https://dl.acm.org/doi/10.5555/3454287.3455099
  • Zhu, Y., Lu, S., Lei, Z., Guo, J., Zhang, W., Wang, J., & Yu, Y. (2018). Texygen: A benchmarking platform for text generation models. International ACM SIGIR Conference on Research & Development in Information Retrieval, (pp. 1097–1100). https://doi.org/10.1145/3209978.3210080