Research Article

Is learning with ChatGPT really learning?

Received 12 Feb 2024, Accepted 23 Jun 2024, Published online: 22 Jul 2024

References

  • Allen, R. E. (1959). Anamnesis in Plato’s ‘Meno and Phaedo’. The Review of Metaphysics, 13(1), 165–174.
  • Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62. https://doi.org/10.61969/jai.1337500
  • Bashar, A. (2019). Survey on evolving deep learning neural network architectures. Journal of Artificial Intelligence and Capsule Networks, 2019(2), 73–82. https://doi.org/10.36548/jaicn.2019.2.003
  • Berglund, L., Stickland, A. C., Balesni, M., Kaufmann, M., Tong, M., Korbak, T., Kokotajlo, D., & Evans, O. (2023). Taken out of context: On measuring situational awareness in LLMs (arXiv:2309.00667). arXiv. http://arxiv.org/abs/2309.00667
  • Berglund, L., Tong, M., Kaufmann, M., Balesni, M., Stickland, A. C., Korbak, T., & Evans, O. (2023). The Reversal Curse: LLMs trained on ‘A is B’ fail to learn ‘B is A’ (arXiv:2309.12288). arXiv. http://arxiv.org/abs/2309.12288
  • Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., & Amodei, D. (2020). Language models are few-shot learners (arXiv:2005.14165). arXiv. http://arxiv.org/abs/2005.14165
  • Burnyeat, M. F., & Barnes, J. (1980). Socrates and the jury: Paradoxes in Plato’s distinction between knowledge and true belief. Proceedings of the Aristotelian Society, Supplementary Volume, 54(1), 193–206. https://doi.org/10.1093/aristoteliansupp/54.1.173
  • Drake, & Mysid. (2006). A simplified view of an artificial neural network [SVG]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Neural_network.svg
  • Eliot, L. (2023, September 27). Does ‘take a deep breath’ as a prompting strategy for generative AI really work or is it getting unfair overworked credit? Forbes. https://www.forbes.com/sites/lanceeliot/2023/09/27/does-take-a-deep-breath-as-a-prompting-strategy-for-generative-ai-really-work-or-is-it-getting-unfair-overworked-credit/?sh=3d04d0b618c3
  • Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., Dai, Y., Sun, J., Wang, M., & Wang, H. (2024). Retrieval-augmented generation for large language models: A survey (arXiv:2312.10997). arXiv. http://arxiv.org/abs/2312.10997
  • Guo, T., Chen, X., Wang, Y., Chang, R., Pei, S., Chawla, N. V., Wiest, O., & Zhang, X. (2024). Large language model based multi-agents: A survey of progress and challenges (arXiv:2402.01680). arXiv. http://arxiv.org/abs/2402.01680
  • Halaweh, M. (2023). ChatGPT in education: Strategies for responsible implementation. Contemporary Educational Technology, 15(2), ep421. https://doi.org/10.30935/cedtech/13036
  • Kaiser, L., Gomez, A., Shazeer, N., Vaswani, A., Parmar, N., Jones, L., & Uszkoreit, J. (2017). One model to learn them all (arXiv:1706.05137). arXiv. https://doi.org/10.48550/arXiv.1706.05137
  • Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
  • Li, K., Hopkins, A. K., Bau, D., Viégas, F., Pfister, H., & Wattenberg, M. (2023). Emergent world representations: Exploring a sequence model trained on a synthetic task (arXiv:2210.13382). arXiv. http://arxiv.org/abs/2210.13382
  • Long, R. (2023). Introspective capabilities in large language models. Journal of Consciousness Studies, 30(9), 143–153. https://doi.org/10.53765/20512201.30.9.143
  • Menon, P. (2023, March 15). Discover how ChatGPT is trained. LinkedIn Pulse. https://www.linkedin.com/pulse/discover-how-chatgpt-istrained-pradeep-menon
  • Nawar, T. (2013). Knowledge and true belief at Theaetetus 201a–c. British Journal for the History of Philosophy, 21(6), 1052–1070. https://doi.org/10.1080/09608788.2013.822344
  • Ngo, H., Raterink, C., Araújo, J. G. M., Zhang, I., Chen, C., Morisot, A., & Frosst, N. (2021). Mitigating harm in language models with conditional-likelihood filtration (arXiv:2108.07790). arXiv. http://arxiv.org/abs/2108.07790
  • Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback (arXiv:2203.02155). arXiv. http://arxiv.org/abs/2203.02155
  • Plato, Emlyn-Jones, C. J., & Preddy, W. (2022). Lysis; Symposium; Phaedrus. Harvard University Press.
  • Plato, & Fowler, H. N. (1921). Theaetetus. Harvard University Press. https://doi.org/10.4159/DLCL.plato_philosopher-theaetetus.1921
  • Plato, & Fowler, H. N. (2017). Phaedo. Harvard University Press. https://doi.org/10.4159/DLCL.plato_philosopher-phaedo.2017
  • Plato, Fowler, H. N., & Lamb, W. R. M. (1925). The Statesman. Harvard University Press. https://doi.org/10.4159/DLCL.plato_philosopher-statesman.1925
  • Plato, & Lamb, W. R. M. (1924a). Euthydemus. Harvard University Press. https://doi.org/10.4159/DLCL.plato_philosopher-euthydemus.1924
  • Plato, & Lamb, W. R. M. (1924b). Meno. Harvard University Press. https://doi.org/10.4159/DLCL.plato_philosopher-meno.1924
  • Rozado, D. (2023). The political biases of ChatGPT. Social Sciences, 12(3), 148. https://doi.org/10.3390/socsci12030148
  • Sedley, D. N. (2004). The midwife of Platonism: Text and subtext in Plato’s Theaetetus. Clarendon Press; Oxford University Press.
  • Seidel, A. (1991). Plato, Wittgenstein and artificial intelligence. Metaphilosophy, 22(4), 292–306. https://doi.org/10.1111/j.1467-9973.1991.tb00723.x
  • Smith, K. B., Oxley, D. R., Hibbing, M. V., Alford, J. R., & Hibbing, J. R. (2011). Linking genetics and political attitudes: Reconceptualizing political ideology. Political Psychology, 32(3), 369–397. https://doi.org/10.1111/j.1467-9221.2010.00821.x
  • Solaiman, I., & Dennison, C. (2021). Process for adapting language models to society (PALMS) with values-targeted datasets (arXiv:2106.10328). arXiv. http://arxiv.org/abs/2106.10328
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2023). Attention is all you need (arXiv:1706.03762). arXiv. http://arxiv.org/abs/1706.03762
  • Xu, J., Ju, D., Li, M., Boureau, Y.-L., Weston, J., & Dinan, E. (2021). Recipes for safety in open-domain chatbots (arXiv:2010.07079). arXiv. http://arxiv.org/abs/2010.07079
  • Yang, R., & Narasimhan, K. (2023, May 5). The Socratic method for self-discovery in large language models. Princeton NLP. https://princeton-nlp.github.io/SocraticAI/
  • Zhu, D., Chen, J., Shen, X., Li, X., & Elhoseiny, M. (2023). MiniGPT-4: Enhancing vision-language understanding with advanced large language models (arXiv:2304.10592). arXiv. http://arxiv.org/abs/2304.10592