
Journal of Artificial Intelligence and Big Data, 2023, vol. 3, no. 1
www.scipublications.org/journal/index.php/jaibd
DOI: 10.31586/jaibd.2023.623

Communication

Substituting Intelligence

Jasper Doomen 1,*

1 Department of Law, Open University of the Netherlands, Heerlen, The Netherlands
* Correspondence: [email protected]

Abstract: The development of ChatGPT is a topical subject of reflection. This short paper focuses on the (possible) use of ChatGPT in academia and some of its (possible) ramifications for users' cognitive abilities and, dramatically put, their existence.

Keywords: Artificial intelligence; ChatGPT

How to cite this paper: Doomen, J. (2023). Substituting Intelligence. Journal of Artificial Intelligence and Big Data, 3(1), 1-3. Retrieved from https://www.scipublications.com/journal/index.php/jaibd/article/view/623

Received: February 1, 2023; Accepted: February 21, 2023; Published: February 23, 2023

Copyright: © 2023 by the author. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

ChatGPT has important potential advantages for those working in academia: collecting relevant sources when one is writing a paper is facilitated and even (largely) automated. ChatGPT can locate sources pertaining to the subject matter, select relevant information from those sources and present the output coherently, formulated in correct sentences. ChatGPT may at present leave something to be desired, but, first, whatever tasks it takes off users' hands are worth mentioning and, second, the possibility cannot be excluded that what it produces may at some point in the future be difficult or impossible to distinguish from a (competent) human being's output.

These advantages may be countered, or at least nuanced. In the most extreme case, it will be possible for people to use ChatGPT in a similar way as they use a calculator: when one calculates with large numbers, one does not normally check the outcome it shows but presumes that the outcome is correct, without reflecting on the matter. Arguably, there is not much to reflect on (arithmetic being different in this respect from other domains of mathematics, where one cannot operate mechanically and thinking is arguably involved, depending on one's definition of 'thinking').

The case of a spell checker is significantly different, especially if it checks both spelling and grammar. Users need to make an informed choice between the spell checker's suggestions, which they can only do if they are able to evaluate those suggestions; this is predicated on their already having a command of the rules of spelling and grammar. Developing such a command is only possible, however, if one has not come to rely on the spell checker in the course of one's education and indeed uses it only incidentally, as a check. This observation may be extended to the entire process involved in creating a text. Two situations must be distinguished here: either ChatGPT's abilities are so limited that it produces results that need to be checked, or even corrected, by human beings, or ChatGPT's creations are (at least) on a par with those of (competent) human beings.
In the first case, users must be able to check, or correct, the result (namely, a basic text), but do human beings still have the very capacity to check the validity and relevance of the sources and to improve the text itself if they are no longer trained to do so? This is at present (hopefully) no serious concern, but schools and universities may decide to adopt policies to that effect. If they do not, students may find ways to use ChatGPT clandestinely; the means may presently be available to identify ChatGPT as the author, but it may evolve in such a way that this will no longer be possible (which is the second case, to be addressed below). If, for either of these reasons, (future) students are no longer trained to collect sources and write texts themselves, they will not be able to tackle the task of writing an academic paper any better than that of calculating with large numbers; in both cases they merely feed data into something and use its output.

In the second case, where ChatGPT's creations are (at least) up to standard, what was observed with respect to the first case is all the more concerning. To make this clear, it is useful to point to Searle's Chinese room argument: an artificial intelligence entity (in whatever way one identifies it) that does not know the meaning of Chinese characters and only receives instructions (of course not in Chinese) on how to respond to a Chinese character presented to it with one of the Chinese characters it already has available may be able to follow the instructions without understanding Chinese (J. Searle, Minds, Brains and Science (Cambridge, Mass.: Cambridge University Press, 1984), pp. 32-33) [1]. (The issue of whether it needs to understand the language of the instructions may be forgone here.) If one merely provides minimal input to ChatGPT, then, presuming Searle's Chinese room argument is correct, what ChatGPT itself does is limited to acting 'mechanically', so without any reflection, not knowing what it is doing (a sketch of such rule-following is given below). One may even wonder, if ChatGPT's operations may be said to be reducible to the process itself, forgoing the idea of a 'ghost in the machine' and alternative metaphors, whether 'it', 'acting' and 'doing' are proper terms to use here, but absent more apt terms there is no objection to using them pragmatically, as placeholders.

The invention of artificial intelligence is a great feat of human intelligence, but in the case of ChatGPT, a development in the opposite direction (a diminution of human intelligence) is imminent. All a user requires is a basic idea of what the outcome should be, which is fed into ChatGPT by means of a few keywords. ChatGPT then produces a result on the basis of that basic idea, and if the user is incapable of evaluating that result, he is for all intents and purposes no different from the artificial intelligence entity in Searle's Chinese room. (A difference that does exist, of course, is that the artificial intelligence entity acts upon the instructions of the user, who is the initiator, but the importance of this difference for this discussion is not to be overstated.) For completeness, I remark that most of those who use ChatGPT presumably do not know the technical details, meaning that they would not know how to fix ChatGPT if it were to malfunction any better than they are able to fix a (difficult) problem with their computer.
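To make the mechanical character of such rule-following concrete, the following is a minimal sketch in Python. It illustrates only the Chinese room thought experiment, not ChatGPT's actual architecture; the rulebook entries and the respond function are invented for illustration.

```python
# A minimal sketch of Searle's Chinese room as pure symbol manipulation.
# The rulebook entries below are invented placeholders; nothing in this
# program represents what any character means.

RULEBOOK = {
    "你好吗": "我很好",  # a shape-to-shape rule; the 'meaning' is invisible to the program
    "谢谢": "不客气",
}

def respond(characters: str) -> str:
    """Hand back whatever symbol the rulebook prescribes.

    This is all the room's occupant does: match an incoming symbol to an
    outgoing one by following instructions, without understanding Chinese.
    """
    return RULEBOOK.get(characters, "请再说一遍")  # default symbol, applied just as blindly

print(respond("你好吗"))  # fluent-looking output produced with zero comprehension
```

The point of the sketch is that the output can be entirely appropriate without any understanding occurring anywhere in the process, which is precisely the situation of a user who cannot evaluate ChatGPT's results.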
Even those who have been involved in ChatGPT's conception or development may be said not to understand or be able to explain ChatGPT's operations (J. Doomen, "Understanding and Explaining", Logos & Episteme vol. 3, no. 3 (2012), pp. 413-428) [2], let alone those whose knowledge is limited to how to use it. This issue is to be distinguished, however, from what was observed about the abilities to find relevant sources, argue and compose a text. If the outcome that artificial intelligence is the only intelligence is to be warded off, a critical stance vis-à-vis ChatGPT is necessary.

ChatGPT has the potential to be a great help for those working in academia: it can help with finding relevant sources, selecting pertinent information from them, and presenting the output coherently. However, it is important to note that users should not rely on ChatGPT so much that they neglect to learn the basics of how to research and write effectively, as this could lead to a lack of critical thinking skills and an inability to craft academic papers. Additionally, it is possible that ChatGPT could become so advanced that it is difficult to distinguish its output from a human being's work. Overall, it is important to use ChatGPT with caution, as it can greatly benefit academics, but it may also have a detrimental effect on the writing process.

I hereby declare, first, that I am not ChatGPT and, second, that ChatGPT has not been used in creating this text.

Jasper Doomen

References
[1] J. Searle, Minds, Brains and Science. Cambridge, Mass.: Cambridge University Press, 1984.
[2] J. Doomen, "Understanding and Explaining." Logos & Episteme vol. 3, no. 3 (2012), pp. 413-428.