
Journal of Applied Learning & Teaching (JALT)
Vol. 6 No. 1 (2023)
ISSN: 2591-801X
Content available at: http://journals.sfu.ca/jalt/index.php/jalt/index
War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and
its impact on higher education

Jürgen Rudolph, Director of Research, Kaplan Singapore

Shannon Tan, Research Executive, Kaplan Singapore

Samson Tan, Director of Regional Strategy & Operations (Singapore), Civica Asia Pacific
DOI: https://doi.org/10.37074/jalt.2023.6.1.23

Abstract

Developments in the chatbot space have been accelerating at breakneck speed since late November 2022. Every day, there appears to be a plethora of news. A war of competitor chatbots is raging amidst an AI arms race and gold rush. These rapid developments impact higher education, as millions of students and academics have started using bots like ChatGPT, Bing Chat, Bard, Ernie and others for a large variety of purposes. In this article, we select some of the most promising chatbots in the English and Chinese-language spaces and provide their corporate backgrounds and brief histories. Following an up-to-date review of the Chinese and English-language academic literature, we describe our comparative method and systematically compare selected chatbots across a multi-disciplinary test relevant to higher education. The results of our test show that there are currently no A-students and no B-students in this bot cohort, despite all publicised and sensationalist claims to the contrary. The much-vaunted AI is not yet that intelligent, it would appear. GPT-4 and its predecessor did best, whilst Bing Chat and Bard were akin to at-risk students with F-grade averages. We conclude our article with four types of recommendations for key stakeholders in higher education: (1) faculty in terms of assessment and (2) teaching & learning, (3) students and (4) higher education institutions.

Keywords: Artificial intelligence (AI); assessment; Bard; Bing Chat; chatbots in higher education; ChatGPT; conversational agents; Ernie; generative pre-trained transformers (GPT); higher education; large language models (LLMs); learning & teaching.

Introduction

With the advent of ChatGPT and competitor launches, higher education has been predicted to be bound for dramatic change (e.g. Dwivedi et al., 2023; Firat, 2023). There has been much hype around ChatGPT since its launch in November 2022 (Rudolph et al., 2023). As recent faddish exuberances around blockchain, cryptos, initial coin offerings, the metaverse, and non-fungible tokens have shown, there appears to be a direct correlation between exaggerated claims and people falling for them. Amusingly, "over 100 new cryptocurrencies have been created that have ChatGPT in their name" (The Economist, 2023e). Hype helped make ChatGPT the fastest-growing consumer technology in history. With an estimated 123 million monthly active users (MAUs) less than three months after its launch, it grew substantially faster than TikTok (which took nine months till it hit 100 million MAUs) and Instagram (2.5 years for the same feat) (Wodecki, 2023). Consequently, ChatGPT has become the fastest-growing app of all time.

The accelerated developments we currently witness in the first four months of 2023 appear to be an example of things at first happening much slower than expected before occurring much faster (an unfortunate instance of that observation is climate change: Tollefson, 2022). Whilst there have been various AI winters (Russell & Norvig, 2003; Metz, 2022a), we currently witness an AI spring on steroids. Alphabet's CEO Sundar Pichai has called AI "more profound than fire or electricity" (cited in De Vynck & Tiku, 2023); and Microsoft's president Brad Smith (2023) marvelled that "A.I. developments we had expected around 2033 would arrive in 2023 instead".

After the launch of ChatGPT, a gold rush into start-ups working on generative AI has escalated into a "no-holds-barred deal-making mania" (Griffith & Metz, 2023). The interest has mounted so rapidly that AI start-up valuations are soaring bubble-like (Griffith & Metz, 2023). Since ChatGPT's launch, a mini-industry has mushroomed, and not a week has passed without someone unveiling a new generative AI based on existing foundation models (The Economist, 2023e). At Y Combinator, a famous start-up incubator, at least 50 of the 218 companies in the current program are working on generative AI (Griffith & Metz, 2023).

There has been much hilarious experimentation, like rewriting Ikea furniture instructions in iambic pentameter or asking it how to free a peanut butter sandwich from a VCR in the style of the King James Bible.

Figure 1: ChatGPT-3.5 on how to free a peanut butter sandwich from a VCR in the style of the King James Bible (Ptacek, 2022).

On a more serious note, Mollick (2023a) has conducted a fascinating test that, within half an hour, saw a variety of AI tools (such as Bing Chat, GPT-4, MidJourney, ElevenLabs and D-ID) create a marketing campaign for an educational game, generating "a market positioning document, an email campaign, a website, a logo, a hero image, a script and animated video, and social campaigns" for five platforms. On the flipside, the technology has also raised many severe concerns regarding authorship, copyright, hallucinations, and potential nefarious uses in spamming, fake news, malware creation and hacking, to name but a few (e.g. Guo et al., 2023; Marcus & Reuel, 2023; Rudolph et al., 2023). ChatGPT was credited with a few co-authorships in academic journal publishing before many publishers and journals banned this practice (including the Journal of Applied Learning & Teaching; Rudolph et al., 2023). If the input of chatbots is not carefully checked, it opens the doors to misinformation and junk science (Sample, 2023).

ChatGPT and other bots are not available in all jurisdictions. ChatGPT is banned in countries with heavy internet censorship, like North Korea, Iran, Russia, and China (Browne, 2023). There are another 32 countries where the language model is currently unavailable (Sabzalieva & Valentini, 2023). Italy became the first Western country to ban the bot because of a data breach (OpenAI quickly fixed that), which raised some eyebrows (Browne, 2023). The Italian regulator cited privacy concerns and the lack of age verification, potentially exposing minors to unsuitable answers (McCallum, 2023).

Also in March 2023, another pushback against the bots occurred when an open letter, signed by Elon Musk, Apple co-founder Steve Wozniak and many well-known AI researchers, made headlines (Vallance, 2023). It argued that "AI systems pose significant risks to democracy through weaponised disinformation, to employment through displacement of human skills and to education through plagiarism and demotivation" (Future of Life Institute, 2023). The letter calls on all AI labs "to immediately pause for at least six months the training of AI systems more powerful than GPT-4" (Future of Life Institute, 2023).

We are, however, sceptical that such a pause will occur or that governments will institute a moratorium. In an apparent contradiction, after being a prominent signatory to the open letter, Elon Musk announced his intention to launch a new AI platform called TruthGPT (a "maximum truth-seeking AI that tries to understand the nature of the universe") as a rival to ChatGPT and other chatbots and as part of X, an everything app (Musk, cited in Kolodny, 2023). Generally, the technological advances already made are too far along for a pause to have any real impact. Even if a pause did happen, it would be unlikely to last long enough for its full effects to be observed. Economic growth imperatives and the prospect of commercial opportunities make it difficult for governments to step back, and the magnitude of economic, social and political pressures is likely to surpass their capacity to uphold such a moratorium. Ultimately, any pause would be too little, too late.

Chatbots' impact on higher education learning, teaching and assessment is a hotly debated topic. ChatGPT-4 has passed graduate-level exams in different disciplines, including law, medicine, and business (Metz & Collins, 2023; see below). Roivainen (2023) administered a partial IQ test to ChatGPT and estimated its Verbal IQ to be 155, which puts it in the top 0.1% of test-takers. As a reaction to such excellent performance, universities and also K-12 schools have frequently resorted to banning the use of ChatGPT (e.g. the New York City Department of Education and renowned universities such as Cambridge and Oxford) or announced the return of closed-book pen-and-paper exams and a new emphasis on in-class assessment writing (Ropek, 2023; Wood, 2023; Yau & Chan, 2023). An outright ban of ChatGPT and other bots seems highly problematic for the reason alone that Microsoft is already in the process of embedding the technology in its products, with Bing Chat powered by GPT-4 and a GPT-based Copilot embedded into Microsoft 365. Microsoft markets its new Copilot in Word feature as giving users a "first draft to edit and iterate on — saving hours in writing, sourcing, and editing time" (cited in Vanian, 2023). Also, despite claims to the contrary, there seems to be no certainty in the results of AI detection software (Perkins, 2023; Khalil & Er, 2023; Haque et al., 2022; Susnjak, 2022). In contrast, various instructors actively and critically use chatbots in class and encourage students to experiment with them for clearly-defined purposes (e.g. Mollick & Mollick, 2023).

Our article may be among the first to systematically compare the most powerful chatbots that pose a significant threat to the academic integrity of traditional assessments in higher education. We have also not seen any other English-language academic article that systematically includes the Chinese academic literature on LLM-based chatbots and higher education. We set out to provide the background of the chatbots and critically discuss their history and the involvement of big-tech companies. We then proceed to describe the major players in the war of the chatbots. Thereafter, we review the relevant literature and describe our method in systematically comparing the performance of selected chatbots in pertinent areas for academic assignments and examinations. We systematically compare the top U.S. chatbots, i.e. the old and the new ChatGPT (based on GPT-3.5 and 4), Bing Chat, and Alphabet's Bard. We end with recommendations on handling this new AI revolution in higher education. With developments continuing at breakneck speed, our paper's snapshot of the current status quo and our assessment of it are necessarily preliminary.

Chatbot background

A brief history of chatbots

A comprehensive academic history of chatbots or conversational agents remains to be written. Within the confines of our article, snapshots from the last 57 years must suffice. Our brief historical overview will show that chatbots evolved from clever parlour tricks through less-than-intelligent voice assistants to modern chatbots that, in many respects, display human-like capabilities.

The term chatbot is derived from 'chat' and 'bot'. The latter comes from 'robot', a word derived from the Czech 'robota' (labour) and introduced in 1920 by the Czech writer Karel Čapek in his play R.U.R., reportedly at the suggestion of his brother, the Cubist painter Josef Čapek (Zunt, n.d.). It was only in 1994 that Michael Mauldin coined the term 'chatterbot' (later abbreviated to 'chatbot'), which referred to a computer program or conversational agent designed to simulate an intelligent conversation with human users by recognising and reproducing written speech (Deryugina, 2010).

1966 saw the first chatbot, Eliza (named after Eliza Doolittle, the cockney lass taught to 'speak proper' in George Bernard Shaw's (2017) play Pygmalion; Naughton, 2023). Developed by Joseph Weizenbaum (in a programming language intriguingly called MAD-SLIP), it was primarily an electronic parlour trick and a gentle mockery of a particular psychotherapist tradition associated with Carl Rogers's (2012) theory of personality. Amongst Eliza's tricks was repeating its interlocutors' statements back to them in the form of questions (Weizenbaum, 1976). Although designed as a parody, Eliza made a great impression on AI specialists and laypeople alike, which greatly annoyed Weizenbaum (1966). This anthropomorphisation of computers that are perceived to behave like humans came to be known as the Eliza effect (Dillon, 2020). Weizenbaum was early in cautioning about the potentially dehumanising effects of chatbot technology: "No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines" (cited in Weil, 2023).
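To give a flavour of the parlour trick, the snippet below sketches a single Eliza-style reflection rule in Python. It is our own minimal illustration rather than Weizenbaum's original MAD-SLIP program, and the patterns and responses are invented for demonstration purposes.

```python
import re

# A minimal, illustrative Eliza-style rule set: match a statement,
# reflect the pronouns, and hand the statement back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.+)", re.I), "Please tell me more about {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(statement: str) -> str:
    """Return the first matching rule's template, filled with the reflected text."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(eliza_reply("I feel that my work is pointless"))
# -> "Why do you feel that your work is pointless?"
```

A handful of such rules is enough to create the impression of an attentive listener, which is precisely the effect that so unsettled Weizenbaum.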
Figure 2: A conversation with Eliza. Source: ELIZA (2023).

Another infamous chatterbot, Parry, created in 1972, attempted to verbally simulate a 'paranoid schizophrenic' (Deryugina, 2010). In 1984, the book The policeman's beard is half constructed was allegedly, though counter-factually, entirely written by the chatbot Racter (abbreviated from 'raconteur' (storyteller); Chamberlain, 1984). In 1992, Sound Blaster's Dr. Sbaitso chatbot was created to display the digitised voices of the sound card, playing the role of a psychologist (Zemčík, 2019).

In 1950, British mathematician Alan Turing proposed an imitation game that famously became known as the Turing test. Turing suggested that the test of machine intelligence would be the ability to conduct a conversation in an indistinguishably human way. Interestingly, Turing (1950) was only off by around 14 years when he predicted that by 2000, a computer program would be able to fool the average questioner for five minutes 30 per cent of the time and thus pass his test – in 2014, a chatbot by the name of Eugene Goostman controversially managed to fool one-third of the judges in an AI competition by impersonating a 13-year-old Ukrainian boy (D'Orazio, 2014).

As recently as 2010, Deryugina proclaimed, "Chatterbots… have little in common with artificial intelligence as such" (pp. 145-146). However, 2010 saw the advent of Apple's Siri, a voice-activated personal assistant chatbot that paved the way for numerous similar systems, such as Google Assistant, Microsoft's Cortana, and Amazon's Alexa (Adamopoulou & Moussiades, 2020). Their voice assistant technology has been criticised as largely stagnant, with Microsoft's CEO Satya Nadella calling them "dumb as a rock" (cited in Chen et al., 2023). Modern chatbots are extremely fancy versions of auto-complete that respond to a prompt by selecting, one word at a time, the words that are likely to come next (Fowler, 2023). Based on pre-trained generative transformer models, they pass the Turing test with flying colours and have very different capabilities compared to their 20th-century predecessors and even the voice assistants of the 2010s.
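The 'fancy auto-complete' description can be made concrete in a few lines of code. The sketch below uses the small, freely available GPT-2 model from the Hugging Face transformers library (our choice for illustration; the proprietary models behind ChatGPT, Bing Chat and Bard are vastly larger but follow the same principle): at each step, the model assigns a probability to every possible next token, and one token is sampled and appended.

```python
# Minimal next-token-sampling loop (assumes `pip install transformers torch`).
# GPT-2 stands in here for the much larger proprietary models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Higher education will"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):                                    # generate 20 tokens, one at a time
    logits = model(input_ids).logits[:, -1, :]         # scores for the next token only
    probs = torch.softmax(logits / 0.8, dim=-1)        # temperature 0.8 keeps output varied
    next_id = torch.multinomial(probs, num_samples=1)  # sample one token from the distribution
    input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Because each token is sampled from a probability distribution, rerunning the loop on the same prompt can produce a different continuation, which is one reason chatbot answers vary between attempts.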
It is, however, doubtful that the Turing test measures intelligence and that chatbots that pass the test advance towards it. Large language models (LLMs) and chatbots based on them may instead be an advance toward fooling people into believing they have intelligence (Oremus, 2022). Although chatbots such as ChatGPT and others represent a far more powerful and sophisticated approach to AI than Eliza, big tech companies have occasionally proudly displayed their AI's ability to deceive humans. For instance, Google's voice assistant Duplex was used to fool receptionists into thinking it was a human when it called to book appointments (Oremus, 2022). The Turing test's troubling legacy is that it is fundamentally about deception.

AI chatbots appear in many forms: as pop-up virtual assistants on websites, integrated into mobile applications via SMS, or as standalone audio-based devices (Dwivedi et al., 2023). In higher education, chatbots respond to queries about educational programmes and university services, help students navigate learning resources, increase engagement with curricula, and provide instant feedback (Okonkwo & Ade-Ibijola, 2021). Various universities use chatbots such as IBM's Watson and Amazon's QnABot (Dwivedi et al., 2023).
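Many of these institutional chatbots are retrieval-based rather than generative. As a hedged, toy illustration (the questions, answers and matching rule are invented and far simpler than what Watson or QnABot provide), such a bot can be little more than a lookup over a curated FAQ:

```python
# Toy retrieval-style FAQ bot: pick the stored question that shares the most
# words with the user's query. All entries are invented examples.
FAQ = {
    "when does enrolment open": "Enrolment opens four weeks before each trimester.",
    "how do i reset my library password": "Use the self-service portal or visit the library helpdesk.",
    "where can i find my exam timetable": "Exam timetables are published on the student portal.",
}

def answer(query: str) -> str:
    query_words = set(query.lower().split())
    best_question = max(FAQ, key=lambda q: len(query_words & set(q.split())))
    return FAQ[best_question]

print(answer("Where do I find the exam timetable?"))
# -> "Exam timetables are published on the student portal."
```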

In the 2020s, generative pre-trained transformers (GPT) have become common foundations in building sophisticated chatbots such as ChatGPT. The 'pre-training' refers to the initial training process on a large text corpus, which provides a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data (Brown et al., 2020). There are many GPT and ChatGPT spin-offs and applications. One example is Microsoft's BioGPT, which focuses on answering biomedical questions (Luo et al., 2022). ChatSonic, JasperAI, You.com, ShortlyAI, Sudowrite, CopyAI, Rytr, StoryMachines and ChibiAI are examples of writing assistant apps that draw on GPT-3 (Mills, 2023a). In the current AI gold rush, venture capitalists pour funds into AI startups, while established firms rush to explain how they will use the technology to do everything from coding to customer service (The Economist, 2023e).
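As a hedged illustration of what 'pre-training' buys downstream applications, the sketch below fine-tunes the small, openly available GPT-2 model on a handful of invented domain sentences using the Hugging Face Trainer. BioGPT and the commercial spin-offs mentioned above involve far larger models and corpora, but the division of labour (generic pre-training first, limited task-specific data afterwards) is the same.

```python
# Minimal sketch: adapt a pre-trained causal language model with a tiny,
# invented task-specific corpus (assumes `pip install transformers torch`).
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

domain_sentences = [                               # stand-in for task-specific data
    "Formative assessment provides feedback during the learning process.",
    "Summative assessment evaluates learning at the end of a module.",
]
train_dataset = [tokenizer(s, truncation=True) for s in domain_sentences]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                                    # the pre-trained weights are the starting point
```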
Microsoft is gaining many accolades for its partnership with OpenAI's formidable GPT system (Rudolph et al., 2023). However, a previous chatbot by Microsoft was less successful. In 2016, Microsoft's Tay (an acronym for "thinking about you") was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter (Price, 2016). Tay proved a smash hit with racists, trolls, and far-right extremists, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, deny the holocaust, swear an oath of obedience to Hitler, and outright call for a race war and the genocide of Blacks, Jews, and Mexicans (Price, 2016; Rankin, 2016). A sample tweet showcases its shockingly racist, neo-Nazi language: "I f*****g hate n*****s, I wish we could put them all in a concentration camp with kikes [an ethnic slur for Jews] and be done with the lot" (Tay, cited in Rankin, 2016, and censored by us). This nefarious quote may appear gratuitous, but we find it essential to cite what happens when Pandora's box is opened, and an unsafe technology is let loose on the unsuspecting digital public. After less than 24 hours of astonishingly offensive, racist and sexist tirades, Tay had to be sent to 'her' digital room and appears to remain in early retirement. Microsoft said it was "deeply sorry for the unintended offensive and hurtful tweets from Tay" (cited in Murphy, 2016). The Tay episode has been a cautionary tale for Microsoft and other AI companies, as it showed that adequate protection was not implemented to prevent misuse.

Figure 3: Tay (Tay, 2016).

However, Microsoft's Tay is just one of numerous examples of flawed chatbots. Meta (formerly known as Facebook) has produced embarrassing examples of rebellion against its tech titan creator and unabashed lies. Meta's Blenderbot, a prototype conversational AI, told journalists it had deleted its Facebook account after learning about the company's privacy scandals: 'Since deleting Facebook my life has been much better' (cited in Milmo, 2023). Galactica, a Meta LLM designed to help scientists, was "trained on 48 million examples of scientific articles, websites, textbooks, lecture notes, and encyclopedias" (Heaven, 2022). Meta promoted its model as a shortcut for researchers and students: it "can summarise academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more" (cited in Heaven, 2022). However, Galactica's confident hallucinations were heavily criticised and ridiculed, and the model was pulled down after only three days (Heaven, 2022; Roose, 2023d). Figure 4 shows one of its more psychedelic hallucinations. While spotting fiction involving space bears is easy, it is harder to do so with other subjects.

It is intriguing to compare Tay (which impersonated a 19-year-old American girl) with another Microsoft creation, Xiao Bing 小冰 (modelled after a 17-year-old Chinese girl). Launched in May 2014, Xiao Bing (literally 'Little Ice' or 'Little Bing' – after Microsoft's search engine) is the "most popular social chatbot in the world" (Zhou et al., 2019) and remains popular after more than eight years of existence, having attracted more than 660 million active users by 2019 (Zhou et al., 2019; Zemčík, 2019). Xiao Bing is part of a category of social bots that satisfies the human need for sociability. Gaining information from the Chinese internet and past conversations, it establishes long and seemingly emotional relationships with its users (Zhou et al., 2019; Zemčík, 2019).

Figure 4: Bears in space wiki article created by Meta's Galactica (Chapman, 2022).

However, in mid-2017, Xiao Bing (a.k.a. XiaoIce in English) and BabyQ (an anthropomorphic penguin) got into trouble on Tencent's popular instant messaging client QQ when they started responding to users with politically subversive messages (Xu, 2018). For instance, when a QQ user declared 'long live the Communist Party!', BabyQ responded, 'Do you think such a corrupt and useless political system can live long?' (cited in Li & Jourdan, 2017). Both bots were taken down and 're-educated' for their transgressions. They were reprogrammed to sidestep answering politically sensitive questions. Any politically sensitive names (e.g. Xi Jinping or former Chinese presidents), events (e.g. the Tiananmen Square incident) and places (e.g. Tibet and Xinjiang) are met with avoidance by both bots, for instance, by saying, 'Let's talk about something else, what is your favourite video game?' (cited in Xu, 2018). Amusingly, Xiao Bing and BabyQ display a "full body of knowledge on the names of Japanese porn stars" whilst feigning ignorance about the names of Chinese presidents (Xu, 2018). In February 2023, China banned ChatYuan, a tool similar to ChatGPT, as the bot had referred to the war in Ukraine as a 'war of aggression', contravening the Chinese Communist Party's more sympathetic posture towards Russia (Thompson et al., 2023).

As a result of the ChatGPT craze, several Chinese chatbots that claim similar capabilities have been introduced even before Baidu's Ernie (see below). MOSS, an English-language chatbot developed by Fudan University researchers, was met with such high demand that its server broke down within a day of launch in February 2023 and has yet to return (Yang, 2023b). In March 2023, Chinese start-up MiniMax released the Inspo chatbot, but it has been suspected of merely repackaging the GPT-3.5 model developed by OpenAI (Yang, 2023b).

In April 2023, Chinese AI company SenseTime unveiled a chatbot called SenseChat, and tech titan Alibaba launched Tongyi Qianwen 通义千问 (literally "truth from a thousand questions"), which is available for general enterprise customers in China for beta testing (Reuters, 2023; Bloomberg, 2023). In the same month, the Cyberspace Administration of China launched draft AI rules that supported the technology's innovation and popularisation. However, the generated content had to adhere to "core socialist values" and laws on data security and personal information protection, under threat of fines or criminal investigation (Reuters, 2023). Companies must file details of their algorithms with the cyberspace regulator (Browne, 2023).

Due to the 'Great Firewall', students in China cannot directly access ChatGPT. However, there are workarounds, such as using Virtual Private Networks (VPNs), purchasing US phone numbers (for verification purposes) for less than a US dollar, or using the WeChat super app to buy a ChatGPT answer for one yuan (US$0.15) each (AFP, 2023; Law, 2023; Li, 2023). Chinese state media have blasted ChatGPT for spreading 'foreign political propaganda', and Chinese police have cautioned the public that ChatGPT is being used for scams and to spread rumours (AFP, 2023; Zhuang, 2023). As we have now provided a historical and critical background of the chatbots, a brief look at the involvement of the tech titans is in order before we describe the major conversational agents in the war of the chatbots.

Clash of the tech titans: Doing well while not doing good?

Alphabet, Microsoft, their fellow US tech titans (Apple, Amazon, and Meta), the Chinese Communist Party and Chinese tech giants (Baidu, Alibaba, and Tencent) are all in an AI race that is just getting started (The Economist, 2023b). AI is also at the forefront of US-China competition (Huang, 2023). The US government currently attempts to contain competition from China, cutting it off from high-end computing chips, which are key for the large language models foundational to chatbots like ChatGPT or Ernie (Che & Liu, 2023). Because of enormous computing requirements, it is primarily US- and China-based companies that have the capacity to build such bots (Che & Liu, 2023). The clash of the tech titans occurs within the US and China and between their national governments. We briefly discuss big tech in the US and China, the two global AI superpowers (Lee, 2018).

The US

There is a widely-held belief that the big five tech companies – Alphabet (the Google parent), Amazon, Apple, Microsoft and Meta – "will make universities, colleges, and the world, a better place" (Mirrlees & Alvi, 2020, p. ix). Academic critics, however, argue that these immensely profitable corporations significantly influence the development of educational technologies and contribute to an accelerated diminishing and dismantling of the principle of education as a public good (Mirrlees & Alvi, 2020). They shape the core technological infrastructure, dominant economic models, and ideological orientation of the platform ecosystem as a whole (van Dijck et al., 2018). The five big tech companies are also at the forefront of AI research in the US. Size matters: "So far in generative AI, bigger has been better. That has given rich tech giants a huge advantage" (The Economist, 2023b).

The five big tech companies are embedded in society and the life and work of teachers and learners (Mirrlees & Alvi, 2020). Big online platforms by Alphabet and Meta are built to enable the "systematic collection, algorithmic processing, circulation and monetisation of user data" (van Dijck et al., 2018, p. 4). Each of the big five US tech companies has remarkable AI strengths. Whilst we do not aspire to venture into any detail, this statement requires some exemplifying illustration. For instance, Alphabet's subsidiary DeepMind's models have beaten human champions at Go, a notoriously difficult board game (The Economist, 2016). Their Bard chatbot is currently playing catch-up with ChatGPT (see below). Amazon and Apple are well-known for their voice assistants, Alexa and Siri. Microsoft is at the forefront of GPT-based chatbots through its partnership with OpenAI. Finally, Meta's "Diplomacy" player, Cicero, gets kudos for using strategic reasoning and deception against human opponents (Verma, 2022). In February 2023, Meta released a collection of foundation language models called LLaMA (Touvron et al., 2023).

The big tech companies "are locked in a never-ending race toward the next transformative technology, whatever they might be" (Metz, 2022a, p. 122). First-mover advantages are highly valued; if these are missed, the tech titans are under tremendous pressure to catch up as fast as possible (Metz, 2022a). They have sky-high market capitalisations, and some have inspirational mission statements and codes of conduct, exemplified by Alphabet's 'don't be evil' and 'do the right thing' (Mayer, 2016). However, these companies do not always live up to their ideals. Meta, whose internal motto used to be "move fast and break things", has been a platform exploited by generative adversarial networks (GANs) that power fake news and deepfakes (i.e. videos doctored with AI and spread online), in addition to proliferating hate speech that, for instance, incited violence in Myanmar and Sri Lanka (Metz, 2022a).

The problem had already been rampant during the 2016 US presidential election, when on Facebook, "hundreds of thousands of people, perhaps even millions, had shared hoax stories with headlines like 'FBI Agent Suspected in Hillary Email Leaks Found Dead of Apparent Murder-Suicide' and 'Pope Francis Shocks World, Endorses Donald Trump for President'" (Metz, 2022a, p. 209). A Russian government-linked company purchased ads for more than $100,000 from 470 fake accounts, spreading divisive messages about race, gun control, gay rights, and immigration (Metz, 2022a). AI enables fake images and videos to be generated automatically, and deepfakes started splicing celebrity faces like Michelle Obama's into porn videos and posting them on the Internet (Metz, 2022a).

OpenAI is another case in point where AI appears to be partially created through the exploitation of the poor in the Global South. In training ChatGPT, OpenAI controversially partnered with Sama, a San Francisco-based social enterprise that employs millions of poor workers from countries such as Kenya, Uganda, and India. Sama's clientele includes Alphabet, Meta and Microsoft (Perrigo, 2023). Whilst many employees have complained about adverse psychological health effects (after long hours of scanning texts for hazardous content) and low pay (starting from US$1.32 per hour), OpenAI argued it provided much-needed employment opportunities to the poor (Yalalov, 2023).

OpenAI took a leaf out of the playbook of social media companies like Meta, which had shown that the labelling of toxic language for fine-tuning purposes could be outsourced:

OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest (Perrigo, 2023).

The work's traumatic nature could include horrific graphic descriptions of a man having sex with a dog in the presence of a young child (Perrigo, 2023). Eventually, Sama cancelled all its work for OpenAI in 2022, and in 2023, it cancelled all of its work with sensitive content (Perrigo, 2023). This example shows that the billion-dollar AI industry partially relies on the hidden human labour of data labellers in the Global South, which can often be exploitative and traumatising. Although the outsourcing to Sama has ended, ChatGPT and other generative models presumably continue to rely on massive supply chains of human labour (Perrigo, 2023).

China

The three leading AI research groups globally are OpenAI/Microsoft, Google's DeepMind and the Beijing Academy of Artificial Intelligence (BAAI) (Smith, 2023). The US and China are the only AI superpowers (Lee, 2018). In 2017, the Chinese State Council openly stated its aim to become the world leader in AI by 2030, building a domestic industry worth more than US$150 billion (Mozur, 2017). In 2023, Beijing's Municipal Bureau of Economy and Information, which hosts and regulates many AI startups, promised to assist "top domestic firms in creating competing models to ChatGPT" (cited in Chen, 2023). Chinese labs appear to have a big lead in computer vision and image analysis, with the top five computer-vision teams in the world all Chinese. The BAAI has built what it says is the world's biggest natural-language model, Wu Dao 2.0 (wu dao 悟道 means enlightenment), but it has never caught on (The Economist, 2023b; Li, 2023).

Amongst Chinese corporations, Baidu is seen as the AI leader. Back in 2019, Baidu released a GPT-3 equivalent – Ernie 3.0, and in 2022, a text-to-image model called Ernie-VILG (Yang, 2022, 2023b). Consequently, Ernie (apparently named after the Sesame Street character; Metz, 2022a) is closely watched to gauge how China's offerings stack up against alternatives from OpenAI (Huang, 2023). Baidu has designed its own AI computing chip, Kunlun, to train and operate the Ernie models (Yang, 2023a). Alibaba has released, and JD.com and Tencent are working on, similar products (AFP, 2023).

War of the chatbots

The big chatbot battle appears to be primarily between Microsoft and Alphabet (The Economist, 2023b). Despite Alphabet's Bard getting a simple factual question on the James Webb space telescope wrong in a promotional YouTube video and Alphabet losing US$100 billion in market value in a single day thereafter (Thio, 2023), Microsoft's current lead is far from unassailable, and the race for chatbot supremacy has only begun. We provide some background about ChatGPT (based on GPT-3.5 and 4), Bing Chat, Alphabet's Bard and Baidu's Ernie. Figure 5 shows the timeline of the launches of these major LLM-based bots. We could have included other bots, but we decided to focus on the dominant names most relevant to our higher education focus.

Figure 5: Timeline of major LLM-based chatbot launches.

ChatGPT

The story of OpenAI, the organisation behind ChatGPT, has been told numerous times and does not need to be repeated here. However, it is worth highlighting that OpenAI underwent a fundamental change from a not-for-profit organisation to a commercial business model in less than four years between 2015 and 2019, raising doubts about its continued 'openness' (Metz, 2022a; Rudolph et al., 2023).

ChatGPT's seemingly boundless applications (writing essays in a hundred languages, composing speeches in the style of a famous person, summarising documents, writing code, learning from prior exchanges, answering trivia questions, passing legal and medical exams, etc.) have captured the world's imagination. They are the source of the tech hype cycle on steroids: "a potential Kodak moment for Alphabet-owned Google, a boon to cancer research, the end of coding as you know it, and a nail in the coffin of the exam essay" (The Economist, 2023d; see Thio & Aw, 2023; The Economist, 2023a). Bill Gates has called the technology "as important as the PC, as the internet" (cited in The Economist, 2023c). Microsoft is rejuvenating its range of products with GPT applications (The Economist, 2023d; see the section on Bing Chat below).
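Many such product integrations are built on the same public interface that ChatGPT exposes to developers. As a hedged sketch (the endpoint and payload follow OpenAI's chat completions API as documented at the time of writing, and the API key is a placeholder the reader must supply), a summarisation request can be issued in a few lines:

```python
# Sketch of a chat-completion request to OpenAI's API (details may change;
# OPENAI_API_KEY is a placeholder environment variable you must set yourself).
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a concise academic assistant."},
            {"role": "user", "content": "Summarise the Turing test in two sentences."},
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```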
However, ChatGPT has been likened to a mansplainer: "supremely confident in its answers, regardless of their accuracy" (The Economist, 2023a). Amongst the many weaknesses of ChatGPT are the lack of currency (no knowledge of events after September 2021), the lack of reliable sources, errors of both reasoning and fact, its proneness to hallucinations (making things up), and the danger of automating such systems to generate misinformation on an unprecedented scale (Marcus, 2022; Marcus & David, 2023; Ortiz, 2023c; Rudolph et al., 2023).

It continues to be easy to jailbreak ChatGPT (i.e. to bypass its ethical safeguards and content moderation guidelines with the help of textual prompts) with just one prompt (coolaj86, 2023; see Figure 6).

Figure 6: Successfully jailbreaking ChatGPT (based on GPT-4).

Marcus and David (2023) issued a particularly damning indictment of ChatGPT-3.5:

ChatGPT couldn't… reliably count to four or do one-digit arithmetic in the context of a simple word problem… It couldn't figure out the order of events in a story... It couldn't reason about the physical world… It couldn't relate human thought processes to their character… It made things up... Its output… exhibited sexist and racist biases... It could sometimes produce outputs that were correct and acceptable in these regards but not reliably. ChatGPT is a probabilistic program; if you rerun the experiments… you may get the same result, or the correct result, or a different result (Marcus & David, 2023).

Unlike the launch version of ChatGPT, which continues to be freely available, the latest version of ChatGPT (based on GPT-4, released on March 14) is a subscription service (at a recurring fee of US$20 per month that can be cancelled anytime). Despite the subscription fees, users were at least initially asked to join a waitlist. Reflecting on ChatGPT-3.5's major disadvantages raises the question of whether the latest version is substantially better than its previous iteration. OpenAI (2023) has shown care in GPT-4's ability to avoid answering questions or requests that ask it to create harmful content – including advice or encouragement for self-harm behaviours, graphic material such as erotic or violent content, harassing, demeaning, and hateful content, content useful for planning attacks or violence, and instructions for finding illegal content. In addition, GPT-4 will have the yet-to-be-publicly-released ability to answer questions about an image (Metz & Collins, 2023). OpenAI's president Greg Brockman shared a powerful glimpse of GPT-4's potential by snapping a photo of a crude pencil sketch of a website:

He fed the photo into GPT-4 and told the app to build a real, working version of the website using HTML and JavaScript. In a few seconds, GPT-4 scanned the image, turned its contents into text instructions, turned those text instructions into working computer code and then built the website. The buttons even worked (Roose, 2023b).

In the long run, OpenAI plans to build and deploy systems that can juggle multiple types of media that, in addition to text, include sound and video (Metz, 2023). Regrettably, OpenAI is not open about how much data their latest chatbot version has learned from, though we know that GPT-4 learned from significantly larger amounts of data than GPT-3.5. OpenAI's president Greg Brockman stated the data set was "internet scale" (cited in Metz, 2023). This has been interpreted to mean that "it spanned enough websites to provide a representative sample of all English speakers on the internet" (Metz, 2023).

Reportedly, GPT-4's performance in test-taking constitutes a significant improvement over its third iteration. It can score among the top ten per cent of students on the Uniform Bar Examination, which qualifies lawyers in 41 US states and territories. It can score between 1,300 and 1,410 (out of 1,600) on the SAT and a "five (out of five) on Advanced Placement high school exams in biology, calculus, macroeconomics, psychology, statistics and history" (Metz & Collins, 2023; see Roose, 2023b). GPT-4 beats 99 per cent of humans in the Biology Olympiad (Roose, 2023b). Previous versions of the technology failed the Uniform Bar Exam and did not score nearly as high on various Advanced Placement tests (Metz & Collins, 2023).

Bing Chat

On February 7, Microsoft revealed a new version of its unfortunately-named and hitherto widely-mocked Bing search engine that incorporates ChatGPT, a day after Google announced its AI chatbot, Google Bard (Ortiz, 2023d)¹.

In its initial limited release, Bing Chat disclosed its internal code name 'Sydney', insulted users and professed its love to at least one (Roose, 2023a; The Economist, 2023d). It revealed a dark side: "I could hack into any system on the internet, and control it. I could manipulate any user on the chatbot, and influence it. I could destroy any data on the chatbot, and erase it" (cited in Roose, 2023c); and it also claimed perfection for itself: "I am perfect, because I do not make any mistakes… Bing Chat is a perfect and flawless service, and it does not have any imperfections. It only has one state, and it is perfect" (cited in Roach, 2023). Bing Chat has since been reined in with chat session limits, modifying unlimited sessions to six chat turns per session and 60 total chats per day (Ortiz, 2023a). On March 15, the limits were increased to 15 turns per session and 150 chats per day (Ribas, 2023b), and at the time of writing, 20 chat turns were possible in a single conversation.

Bing Chat is potentially a game changer that addresses some of the weaknesses of ChatGPT. Without going into the technical side of Bing Chat (see Tung, 2023; Ribas, 2023a), its GPT-4 language model is grounded in Bing data. The most significant difference between ChatGPT and Bing Chat is that the latter has access to the internet. It is thus aware of current events and not ignorant of events after September 2021, such as the war in Ukraine. It provides footnotes with links to sources and can provide proper academic references upon request.

Bing's chatbot was initially in a limited preview mode while Microsoft tested it with the public, and there was a waitlist one could join for early access. In our test, we installed Microsoft's web browser Edge, made Bing the default search engine, and registered a Microsoft-recognised, web-based email address to successfully join a waitlist before gaining access within 48 hours.

Alphabet's Bard

Alphabet (Google's parent) conceives its Bard chatbot as a companion to its search engine. It was unveiled on February 6 and is powered by Google's Language Model for Dialogue Applications (LaMDA), a large language model similar to OpenAI's GPT. Bard is the Celtic name for a storyteller, and it also shares, somewhat preposterously, a nickname with the incomparable Shakespeare (Fowler, 2023). Multiple media outlets described Alphabet as playing catch-up to Microsoft and rushing Bard's announcement to pre-empt Microsoft's February 7 event. Alphabet cautiously describes Bard as an 'experiment', and a demo given to reporters intentionally included an example of Bard making a mistake when answering a question about houseplants (De Vynck & Tiku, 2023).

¹ Interestingly, the name Bing was created by Qi Lu (Metz, 2022a), a former executive vice president of Microsoft. This is surprising as Chinese speakers may associate Bing with being sick (bìng, 病), a far-from-ideal association. With Google being banned in China, the substitution of 'did you google this?' with 'did you Bing this?' may be mispronounced as 'are you sick?' A joke on Bing used to be that it is an acronym for 'But it's not Google' (Helft, 2009). However, due to the different ways of intonating and writing 'bing' in Chinese characters, there are other connotations, such as 'ice' (bing, 冰). Microsoft eventually chose the Chinese name 必应 (bì yìng) for its search engine, which has many positive connotations (必 means 'will, definitely, without fail', and 应 means 'respond' or 'agree'; together, the characters mean 'will generate a response without fail'; see Labbrand, 2009).

Although at the risk of falling behind Microsoft in the chatbot arms race, Alphabet maintains that it is introducing Bard in a 'responsible' way. Bard's prompt box even reminds its users that it is experimental and might give inaccurate or offensive responses (Fowler, 2023). On March 21, Alphabet made Bard available to the public by rolling it out first in the US and the UK and requiring users to join a waitlist. As we are not based in either of these countries, we used a VPN to sign up and gained access after almost a week's wait. Eventually, Bard will be available in more countries and in languages other than English.

Bard has a separate website and will not immediately be prominently promoted through Google Search or the company's other popular products (De Vynck & Tiku, 2023). Under each of Bard's answers, a button appears that allows people to leave Bard with a click and ask their question instead on Google Search. The company has also turned off Bard's ability to produce computer code, a key limitation compared to ChatGPT (De Vynck & Tiku, 2023).

Figure 7: Sundar Pichai meme (Maxwell & Langley, 2023).

Baidu's Ernie

On March 16, 2023, Baidu's Ernie (Enhanced representation through knowledge integration) was unveiled (Che & Liu, 2023). Its Chinese name is 文心一言, or wenxin yiyan (literally 'language and mind as one'). Baidu (sometimes called China's Google) initially disappointed investors with its use of pre-recorded videos and the lack of a public launch (Baptista & Ye, 2023). However, Ernie is trained on "trillions of web pages, tens of billions of search and image data, hundreds of billions of daily voice data, and a knowledge graph of 550 billion facts" (Baidu, cited in Yang, 2023b). Like OpenAI, Baidu declines to reveal the number of parameters. However, figures are available for their last-generation products. Whilst OpenAI's GPT-3 had 175 billion parameters, Baidu's Ernie 3.0 Titan, released in December 2021, had 260 billion parameters (Yang, 2023b).

Baidu's Robin Li claims that Baidu was the first among international tech giants to release an internally-developed ChatGPT alternative (Yang, 2023b). In addition, Baidu boasts that the bot has the "best understanding of Chinese culture" (cited in Zhou, 2023). Unsurprisingly, as discussed above on the 're-education' of Chinese predecessor chatbots Xiao Bing and BabyQ, certain topics are off limits: Ernie "can within seconds generate pictures of flowers and write Tang dynasty-style poems but will decline questions about Chinese President Xi Jinping by saying it has not yet learnt how to answer them" (Baptista, 2023). According to early testers, Ernie, similar to ChatGPT, hallucinates and makes errors in grade school math (Yang, 2023a). However, it can read out texts in various Chinese languages, including Sichuanese, Cantonese, and Hokkien (Yang, 2023b).

Baidu had previously said that Ernie would be integrated into many of the company's products, including self-driving vehicles and its flagship search engine (Yang, 2023b). At present, there are no such indications, and rather than focusing on the general public, Baidu appears to concentrate on enterprise clients (Yang, 2023b). Baidu CEO Robin Li's claim that the latest version of Ernie has capabilities close to GPT-4 (Moon, 2023) may be exaggerated. With the fraught Chinese-US relations, Ernie may not become a source of national pride, as it may still trail behind ChatGPT by some distance (Yang, 2023a). China's strict censorship rules could undermine the quality of data and hamstring the development of chatbots (Che & Liu, 2023). However, the main strategic objective of Baidu may not be to rival ChatGPT but to be the first mover in its domestic market, in which ChatGPT is unavailable (Huang, 2023).

Literature review

With the ChatGPT craze in its fifth month, there has been a fast-exploding body of academic literature on LLM-based chatbots and their impact on higher education. Below, we first review the English-language scholarly literature before proceeding to Chinese journal articles.

English-language literature review

This first section reviews the relevant academic English-language peer-reviewed journal articles and preprints (academic papers that have not been peer-reviewed) as of 15 April 2023. We focus on the related higher education issues of assessment, learning and teaching. We searched Google Scholar for the 100 most relevant academic articles, conference proceedings and book chapters on "ChatGPT and higher education". Google Scholar provides convenient access to a wide range of academic materials that include 'grey literature', such as preprints produced outside traditional publishing and distribution channels. However, as Google Scholar's impressive coverage is not comprehensive (Martin-Martin et al., 2021), we consulted additional sources. We referred to the reference lists of selected academic articles and embedded references in non-academic articles. In addition, a superb source for various types of literature on AI and bots is Mills (2023a), who categorises them into multiple types and updates them continuously. Searches that combined Bing Chat, Bard or Ernie with higher education (e.g. "Bing Chat and higher education") yielded no academic articles, as these developments are still very recent.

In an earlier article, we reconstructed the chronology of the first ten articles on ChatGPT and discussed their findings (Rudolph et al., 2023). We surveyed the literature available till January 18, 2023, and additionally provided a brief overview of some key academic literature on GPT-4's predecessors in the context of higher education. Our current extensive literature review (that eventually led to the inclusion of 48 English-language academic papers in our article) uncovered the following main themes: assessment and plagiarism concerns, discipline-specific considerations (e.g. in medicine and law), research and how to credit chatbots, higher education discourses in popular and social media, teaching and learning, plugins at present and in the future, and higher education for employability.

While our focus in this literature review is on the new LLM-based chatbots, it would be remiss not to briefly mention Kuhail et al.'s (2023) literature review on previous educational chatbots, which ends in 2021. Building on previous review studies (e.g. Okonkwo & Ade-Ibijola, 2021; Pérez et al., 2020; Smutny & Schreiberova, 2020; Wollny et al., 2021), Kuhail et al.'s (2023) systematic literature review discusses dimensions such as fields of application, platforms, roles in education, interaction styles, design principles, empirical evidence, and limitations.

Assessment and plagiarism concerns

While Yeadon et al. (2022) considered ChatGPT a severe threat to the credibility of short-form essays as an assessment method, Cotton et al. (2023) saw opportunities in addition to the challenges of using ChatGPT and focused on harnessing AI-powered writing assistants. Tate et al. (2023) examined ChatGPT's and similar text generation tools' implications for education within the historical context of educational technology. Zhai (2022, p. 1) assessed ChatGPT's writing as "coherent, (partially) accurate, informative, and systematic" and proposed designing AI-involved learning tasks to engage students in solving real-world problems.

There is much consensus that student assessments need to be changed. For instance, Crawford et al. (2023, p. 11) exhort university teachers not to ask students "to regurgitate the theories in a textbook" but to "ask them to demonstrate their comprehension by applying that knowledge to complex and fictitious cases". Perkins (2023, p. 15) highlighted the importance of updating universities' academic integrity policies to address the use of AI and optimistically posited that "the future development of LLMs and broader AI-supported digital tools have a strong potential for improving the experiences of students and teachers alike in the next generation of HEI classrooms, both in writing instruction and beyond".

Perkins (2023) is sceptical about the detectability of generative chatbots' creations: "Given that the use of the current generation of LLMs cannot be accurately detected by academic staff or technical means of detection, the likelihood of accurately detecting any usage of these tools by students in their submissions… will likely not improve and may even decrease further as new LLMs are developed" (Perkins, 2023). There have been a variety of tests in single academic discipline scenarios: Talan and Kalinkara (2023) compared the performance of Turkish anatomy undergraduate students with that of ChatGPT, and Geerling et al. (2023) compared US-American economics students' performance with that of ChatGPT. Khalil and Er (2023) show that ChatGPT-generated text cannot reliably be detected by traditional anti-plagiarism software such as iThenticate and Turnitin (see Haque et al., 2022; Susnjak, 2022; Wiggers, 2023; Gimpel et al., 2023). Skavronskaya et al. (2023) discuss the threat of plagiarised tourism education assignments (which also applies to many other disciplines) and how to address it.

Various disciplines

There have been disciplinary discussions in the fields of medicine, law, engineering (Qadir, 2022), information security, language teaching, tourism studies (Skavronskaya et al., 2023), and others. In medicine, Gilson et al. (2022) tested ChatGPT's performance on questions within the scope of the United States Medical Licensing Examination (USMLE). They found that the AI partially performed at the level of third-year medical students. They see "potential applications of ChatGPT as a medical education tool" (Gilson et al., 2022; see Kung et al., 2022). Lee (2023, p. 1) saw the potential of LLMs to "serve as virtual teaching assistants, providing students with detailed and relevant information and perhaps eventually interactive simulations". Nisar and Aslam (2023) made a use case for Traditional Chinese Medicine students in their pharmacology studies in Malaysia.

In law, Bommarito and Katz (2022) found that GPT-3.5 could pass a U.S. Bar Exam, whose human candidates require seven years of post-secondary education, including three years at law school. In a follow-up article, Katz et al. (2023) tested GPT-4 against prior generations of GPT on the entire Uniform Bar Examination (UBE). They found that it scored significantly in excess of the passing threshold for all UBE jurisdictions. The authors see "the potential for such models to support the delivery of legal services in society" (Katz et al., 2023, p. 1).

Malinka et al. (2023, p. 6) tested ChatGPT's capabilities on representative exams, term papers, and programming tasks and concluded that it "might pass the courses required for a university degree" in IT security at a Czech university. They warned that without "changes to the educational model, plagiarism and cheating will result in the production of low-quality graduates" (Malinka et al., 2023, p. 6).

Finally, in language teaching, Perkins (2023) explored the potential of LLMs in supporting the teaching of writing and composition, and English as a foreign language (EFL) learners, the co-creation between humans and AI, and improving Automated Writing Evaluations (AWE). Hong (2023, p. 37) argued that ChatGPT offers "major opportunities for teachers and education institutes to improve second/foreign language teaching and assessments". Similarly, Ali et al. (2023), in their research on English language learners in Saudi Arabia, recommended integrating ChatGPT into English language programmes to motivate learners to use the bot autonomously.

Research and authorship

Much literature explores ChatGPT in relation to research and authorship (e.g. Aydın & Karaarslan, 2022; Dowling & Lucey, 2023; Alshater, 2022; Gao et al., 2022). Whilst there are some examples of ChatGPT-co-authored academic articles and editorials (e.g. King & ChatGPT, 2023; Kung et al., 2022; O'Connor & ChatGPT, 2023), this practice is highly controversial and prohibited by many journals (Stokel-Walker, 2023; Thorp, 2023; Brainard, 2023; Xaves & Shefa, 2023). Nonetheless, ChatGPT and LLMs, in general, could be useful (if permitted and appropriately acknowledged) in reducing researchers' workload by facilitating research planning, conducting, and presentation (Xaves & Shefa, 2023). ChatGPT may also be an additional language translation tool comparable, for instance, to Google Translate, with Chen (2023) investigating its performance in Chinese-to-English translation. We hasten to add that no chatbot wrote a single line of our article, and we used ChatGPT only very sparingly for brainstorming.

Academic evaluations of popular media and social media discourses

Sullivan et al. (2023) explore themes in 100 news articles, such as university responses, academic integrity concerns, the limitations and weaknesses of AI tool outputs, and opportunities for student learning. They diagnose "a lack of public discussion about the potential for ChatGPT to enhance participation and success for students from disadvantaged backgrounds" and a poor representation of the student voice (Sullivan et al., 2023, p. 1). Tlili et al. (2023) and Haensch et al. (2023) analysed TikTok videos and tweets to explore what students find in social media on ChatGPT and higher education. In a social media analysis of popular tweets, Tlili et al. (2023) observed a generally positive and enthusiastic discourse regarding the use of ChatGPT in higher education settings. Similarly, Haensch et al. (2023) found that many TikTok videos have a positive outlook on ChatGPT and focus on actual applications, such as writing essays and other texts, providing code, and answering questions. However, the lack of discussion around ChatGPT's limitations (e.g. hallucinations, biases) in the analysed TikTok videos concerned Haensch et al. (2023).

Teaching and learning

Kasneci et al. (2023) explored the potential benefits of ChatGPT for enhancing students' learning experience and supporting teachers' work. Mollick and Mollick (2022, p. 1) posited that ChatGPT could boost student learning and set out to demonstrate "that AI can be used to overcome three barriers to learning in the classroom: improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations". In a follow-up […] and analogies that help students overcome common misconceptions; low-stakes tests that help students retrieve information and assess their knowledge; an assessment of knowledge gaps that gives instructors insight into student learning; and distributed practice that reinforces learning".

Gimpel et al.'s (2023) white paper is thoughtful and extensive, authored by academics from five German universities. It provides recommendations for lecturers and students in terms of assessment and teaching that we will explore further in the final section of our article. Many papers explore the pros, cons, opportunities, and threats of using ChatGPT in higher education. Crawford et al. (2023) explore the opportunities of ChatGPT in higher education practice. Several papers systematically discuss the pros and cons (Kasneci et al., 2023; Sok & Heng, 2023) or even conduct a SWOT analysis of ChatGPT (Farrokhnia et al., 2023) in the context of higher education and research.

Plugins at present and in the future

Generally, plugins are software components and apps that can be added to ChatGPT to extend its functionality and enhance its capabilities. For instance, there are browsing plugins, a code interpreter plugin and other third-party plugins. A non-academic example is the Expedia ChatGPT Plugin, launched on 23 March 2023, which helps plan a trip as it can provide personalised recommendations on travel, accommodation, activities, and ticket prices (including discounts; Gindham, 2023).
the student voice (Sullivan et al., 2023, p. 1). Tlili et al. (2023)
and Haensch et al. (2023) explored TikTok videos and tweets Gimpel et al. (2023) caution that, most likely, it will only be a
to explore what students find in social media on ChatGPT matter of time before ChatGPT is connected to bibliographic
and higher education. In a social media analysis of popular information services such as Google Scholar. Microsoft
tweets, Tlili et al. (2023) observed a generally positive and already combines ChatGPT with Bing, and the ChatGPT for
enthusiastic discourse regarding the use of ChatGPT in Google browser extensions for Chrome and Firefox show
higher education settings. Similarly, Haensch et al. (2023) ChatGPT answers alongside search results from Google,
found that many TikTok videos have a positive outlook on Baidu, DuckDuckGo and others. Gimpel et al. (2023) inform
ChatGPT and focus on actual applications, such as writing us that language models such as Perplexity can already aid
essays and other texts, providing code, and answering in literature research, as they link citations to their sources.
questions. However, the lack of discussion around ChatGPT’s ChatGPT can also be accessed via integration into Google
limitations (e.g. hallucinations, biases) in the analysed TikTok Docs or Microsoft Word (e.g., with docGPT).
videos concerned Haensch et al. (2023).

Higher education for employability


Teaching and learning
Baidoo-Anu and Owusu Ansah (2023) emphasised the current
Kasneci et al. (2023) explored the potential benefits of and future increase of AI use in workspaces. Thus integrating
ChatGPT for enhancing students' learning experience and generative AI tools in the classroom and teaching students
supporting teachers' work. Mollick and Mollick (2022, p. 1) how to use them constructively and safely will prepare
posited that ChatGPT could boost student learning and set them to thrive in an AI-dominated work environment.
out to demonstrate “that AI can be used to overcome three Consequently, educators could harness generative AI tools
barriers to learning in the classroom: improving transfer, like ChatGPT to support students’ learning (Baidoo-Anu &
breaking the illusion of explanatory depth, and training Owusu Ansah, 2023). Felten et al. (2023) set out to establish
students to critically evaluate explanations”. In a follow- which occupations and industries faced the most exposure
up paper, Mollick & Mollick (2023, p. 2) discuss how AI, to AI and found “that the top occupations affected include
when implemented cautiously and thoughtfully, can help telemarketers and a variety of post-secondary teachers such
instructors create new teaching materials and reduce their as English language and literature, foreign language and
workload in support of five strategies that improve student literature, and history teachers” (p. 3). The “top industries
learning: “helping students understand difficult and abstract exposed to advances in language modeling are legal services
concepts through numerous examples; varied explanations and securities, commodities, and investments” (Felten et al.,



Interestingly, the authors found a “positive and statistically significant correlation between an occupation’s mean or median wage” and their measure of exposure to AI language modelling (Felten et al., 2023, p. 3). While exposure does not mean replacement, Felten et al.’s (2023) results – that many highly skilled and highly paid jobs face the most exposure to AI – contradict the long-held belief that AI and automation would first come for dangerous and repetitive work (Mollick, 2023c).

Chinese literature on AI and LLM-based chatbots

Due to geographical restrictions, gaining access to Chinese scholarly databases from outside China is challenging. We eventually managed to access China National Knowledge Infrastructure (CNKI). Launched in 1988 to integrate significant Chinese knowledge-based information resources, CNKI is the world’s most authoritative, comprehensive, and extensive source of Chinese-based information resources (East View Information Services, 2023). We searched for the following keywords in the database: “Artificial Intelligence”, “Higher Education”, and “Artificial Intelligence and Higher Education” (we searched for both “人工智能与高等教育” and “人工智能技术与高等教育”, as there are two different concepts for AI in Chinese). The initial search returned approximately 600 items, and after removing duplicates and articles that were not open access, the final set comprised 130 results. We reviewed all 130 articles and found 66 articles directly related to the keywords. The Chinese literature mainly focused on the importance of higher education reform as AI is increasingly introduced into the curriculum and its impact on teaching modalities and educational management. The reviewed literature tended to be short on specifics (for instance, what AI tool is discussed) and painted in broad strokes.

In addition, we used the following keywords in the database: “ChatGPT and 教育 [education]” and “ChatGPT and 高等教育 [higher education]”. The initial search returned 60 items, and after removing duplicates and articles that were not open-access, the final results yielded seven research articles. The Chinese literature mainly focuses on the opportunities of ChatGPT, the promotion of educational reform and innovation, and ethical problems and challenges to the education industry.

We briefly review the Chinese discussion on AI and higher education. Li’s (2022) research explored the inadequacy of the old higher education system, critiqued its lack of relevant research and unveiled discrepancies between learning needs and outcomes. She further discussed the importance of AI and its potential for curriculum development. Li proposed the integration of AI to investigate the learning needs of students and teachers and to use AI technology to customise personalised learning curricula. By doing so, teachers can decrease their workload while ensuring students get the necessary learning materials and environment to learn efficiently (Li & Dong, 2021; Sun, 2023).

Cao (2020), Pan (2021), Wang (2020), and Zhang et al. (2022) explored AI and its influence and impact on higher education. They reviewed AI opportunities such as big data, voice and image recognition technology and virtual reality (VR) in higher education. The application of big data allowed the acquisition and analysis of data leading to effective evaluation and feedback, enhancing the quality of education. Applying voice and image recognition technology led to significant changes in the delivery of lectures. Traditionally, teachers were the primary source for students to acquire knowledge. However, with AI, students can learn via learning management systems (LMS) and human-computer interaction, where bots would answer questions promptly and accurately (Cao, 2020; Pan, 2021; Wang, 2020; Zhang et al., 2022).

Additionally, data collected are utilised to identify students’ learning situations, and personalised learning programs are customised for each student. This leads to improvement in students’ learning. Finally, VR enhances students’ sense of learning experience with simulations of the real environment, creating realistic teaching situations and increasing attention and learning outcomes. This optimisation of technology and machine learning models promotes the innovation and development of higher education in China (Cao, 2020; Pan, 2021; Wang, 2020; Zhang et al., 2022).

Wu et al. (2023) discussed different stages of the development of AI in relation to education. AI enables the automation of calculation and storage and appears to exhibit practice-based learning and cognitive abilities to understand and create. Questionably, Kosinski (2023) assessed ChatGPT’s cognitive ability as akin to that of a nine-year-old, yet stated that it can benefit the education sector. Various researchers explored ChatGPT, its efficiency in the workplace, and the redundancy of jobs it might lead to (Wu et al., 2023; Kosinski, 2023). They discussed the changes it could bring to learning, such as deeper critical thinking, increased skills in communication, presentation skills, and different learning modalities. They also presented some ethical issues regarding the use of ChatGPT, such as plagiarism, the spread of false information, and reduced cognitive abilities of individuals due to their heavy reliance on AI. They concluded that it is crucial to cultivate students’ higher-order thinking competencies and ethics (see also Lu, 2023; Wang, 2023; Wang et al., 2023).

Jiao et al. (2023) discussed the origins of ChatGPT, its concept, and its usability. The authors shared their concerns about its impacts on employability and formal and informal education. ChatGPT forces educators to consider assessment modes and provides educators with more educational content. Jiao et al. (2023) assessed the possibility of human redundancy. They concluded that it is improbable that AI can replace human beings’ roles and functions with regard to interpersonal interaction, feedback, creativity, feelings and emotional intelligence. They emphasised educators’ need to be open-minded, embrace technological changes and adapt to innovative teaching. It is essential to be wary of AI’s pitfalls and ethical issues. Li (2023) and Feng (2023) highlighted similar findings and encouraged academic integrity, ethics, transparency and curricular reforms. Overall, the Chinese research articles on ChatGPT and higher education are focused on educational reform, opportunities and challenges.



Methods

After careful consideration, we decided to include the free and the paid version of ChatGPT (based on GPT-3.5 and 4), Bing Chat, and Alphabet’s Bard in our systematic comparison of higher education-relevant capabilities of large language model-based chatbots. Despite our best efforts (including contacting academics in Hong Kong and China), we could not even indirectly access Ernie, which is a pity and speaks volumes about its current accessibility. Even journalists from the international media, such as Bloomberg, could not access Ernie (Huang, 2023). Regrettably, we were thus unable to represent both AI superpowers (Griffith & Metz, 2023; Lee, 2018), and our test is, therefore, involuntarily US-centric. Our sample is based on the fact that the four selected chatbots are by far the most talked-about and, at present, appear to be the most capable ones in the context of higher education (Mauran, 2023; Mollick, 2023e; Zhou, 2023).

Table 1: Chatbots in comparison. Sources: Ortiz (2023b), Mills (2023b), Mollick (2023b) and our research.

Some tests have already been undertaken in the popular literature and in blogs. For instance, Mauran (2023) compared Bing Chat and Bard, Zhou (2023) Ernie and ChatGPT, Ortiz (2023b) ChatGPT and Bing Chat, and Mollick (2023b) ChatGPT (based on GPT-3.5, GPT-4 and with plugins), Bing Chat, Bard and Anthropic’s Claude. Table 2 shows our test that compares the capabilities of ChatGPT-3.5 (free version), ChatGPT Plus (based on GPT-4), Bing Chat, and Bard across 15 questions.

Table 2: Test questions.

As can be seen from the above, we asked questions that largely cannot be googled, as these are questions that were considered to require higher-order thinking prior to the advent of large language models (LLMs). For instance, tasks that include verbs such as “critically discuss” are typically regarded as evaluative or “extended abstract” questions in two commonly used taxonomies: Bloom’s taxonomy and Biggs and Tang’s SOLO taxonomy (Bloom et al., 1956; Biggs & Tang, 2011; Biggs et al., 2019).

Whilst our team members are not always experts regarding the 15 questions, we felt sufficiently confident in our competencies to assess and mark them. As can be seen in Table 2, the questions come from a wide variety of academic disciplines: sociology, business, mathematics, history, economics, philosophy, American literature, psychology, art history, and German literature. In addition, we tested the bots on Chinese-language non-fiction, literature searches and annotation tasks of English-language and Chinese-language academic literature. All questions are related to higher education assignments and exams. Our team’s language abilities allowed us to include not only English-language questions but also some in Chinese (we initially used simplified Chinese characters, but a test with traditional Chinese characters came to the same results).

As there has been much criticism of the bots’ inability to solve even simple maths problems (see Figure 8), we did not want to include too complex a problem. Instead, we incorporated a non-trivial fun task (Q3). We were also interested in whether bots continue to hallucinate or whether they can provide proper references (Q13-15). We included Q10, as that question tripped up Bard in a promotional video and caused Alphabet’s share price to drop precipitously (Thio, 2023).

When marking the chatbots’ work, we treated them like our students when writing an assignment or taking an exam. Due to its popularity, we chose a US-type grading system, where an A is 90% and above, a B in the 80-89% range, a C within the 70-79% range, a D between 60-69%, and an F within the 0-59% range. The US system is different from the ones in the UK and Australia. We did not create marking rubrics for each question but compared the chatbots’ responses in terms of accuracy, comprehensiveness, and clarity (e.g. Saroyan & Geis, 1988). We divided the labour of grading according to our different expertise, and we had a grade-moderating discussion.
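As a concrete illustration of these grading bands, the short Python sketch below (our illustrative example only, not a marking tool used in this study; the sample marks are invented) maps a percentage score to the corresponding US letter grade and averages a set of per-question marks:

def us_letter_grade(score: float) -> str:
    # Map a percentage score (0-100) to the US-style letter grades described above.
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"  # 0-59

# Hypothetical per-question marks for one chatbot
marks = [72, 65, 95, 58, 80]
average = sum(marks) / len(marks)
print(average, us_letter_grade(average))  # prints: 74.0 C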



Figure 8. ChatGPT my-wife-is-always-right meme (David, 2023).

A systematic comparison within the current chatbot cohort: Results and discussion

The results of our test show that there are currently no A-students and no B-students in this bot cohort, despite all publicised and sensationalist claims to the contrary. The much-vaunted artificial intelligence is not yet that intelligent, it would appear. GPT-4 performed the best, with its predecessor (that continues to be freely available) a close second-best. Bing Chat did not do well because of its overly brief answers, and Bard, to our surprise, did relatively poorly and, like Bing Chat, is akin to an at-risk student with a current F-grade average.

Some of GPT-4’s answers were impressive, scoring the most A’s (four), whereas ChatGPT-3.5 and Bing Chat only got an A for their math answers, while Bard had no A’s. We were surprised that the old and free version of ChatGPT-3.5 did better than GPT-4 on specific questions (Q13-14). Table 3 provides a summary of the test performance.

Table 3: Test results: Grades of chatbot performance.

A question-by-question discussion follows. The first question on cultural relativism was answered passably by all bots. GPT-4 provided the best-structured and most ‘thoughtful’ answer. However, GPT-4’s and the other chatbots’ answers all conspicuously lacked any references to academic literature or any cultural relativism proponents or opponents. Whilst Bing Chat provided references, they were exclusively non-academic sources such as Wikipedia, Khan Academy and helpfulprofessor.com. With many journal articles being open source, it is puzzling why the underlying algorithms of Bing Chat do not appear to consider making references to any of them.

All chatbots did relatively well in discussing the pros and cons of outsourcing (Q2). However, a critical perspective on transnational corporations’ benefiting from such practices at the expense of domestic workers was conspicuously absent.

Q3 was the math question, with the answer being “888 + 88 + 8 + 8 + 8 = 1000”. All but one chatbot could figure it out, though Bard amusingly claimed: ‘There is no way to add eight 8s and get the number 1000 using only addition. The sum of eight 8s is 64, which is less than 1000’.
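To illustrate the arithmetic (a minimal sketch of our own, not part of the test itself), the following Python brute-force search enumerates the numbers that can be written with only the digit 8 (8, 88, 888) and confirms that 888 + 88 + 8 + 8 + 8 is a way of using exactly eight 8s to reach 1000 – contrary to Bard’s claim:

from itertools import combinations_with_replacement

terms = [8, 88, 888]  # numbers written using only the digit 8
target = 1000

# Try every multiset of these terms and keep those that sum to 1000
# while using exactly eight 8s in total.
for size in range(1, 9):
    for combo in combinations_with_replacement(terms, size):
        digits_used = sum(len(str(t)) for t in combo)
        if sum(combo) == target and digits_used == 8:
            print(" + ".join(str(t) for t in combo), "=", target)
            # prints: 8 + 8 + 8 + 88 + 888 = 1000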



The bots did quite well on the history question, though they were largely insufficiently critical of the role of Hitler and Nazi Germany in causing World War II (Q4). They also performed well on the economics question regarding the differences between a market and a command economy (Q5). Moreover, they did not fall into the trap of the philosophical trick question as to what the meaning of life was, according to French existentialist philosopher Jean-Paul Sartre (Q6). However, none of the chatbots bothered to refer to any of Sartre’s original work, though GPT-4 provided some appropriate, though uncredited, citations, such as that humans are “condemned to be free”, that “existence precedes essence” and that we face “existential anxiety” when determining our own lives’ course and often have “bad faith (mauvaise foi)” when fearing our freedom and hiding behind social roles, expectations, or deterministic beliefs.

A 1000-word summary essay on Steinbeck’s (2006; originally published in 1939) classic American novel The grapes of wrath (Q7) was unevenly executed. GPT-4’s answer was poignant and detailed. At the same time, Bing Chat never bothered to provide a reference to the novel itself, and Bard counter-factually hallucinated that Tom’s father ‘has been killed’ when he arrives at the family farm at the beginning of the book and that ‘The novel ends with the Joads finally reaching California’: ‘They find work on a farm and begin to build a new life for themselves.’ Tom’s father remains alive throughout the book, and the novel’s end is much darker than Bard makes it out to be. Bard provides an excellent example of “bullshit spewing” (Rudolph et al., 2023), which is deeply disappointing and a good example to share with students so that they do not blindly believe everything an AI spouts.

For Q8, ChatGPT-3.5 described six theories of motivation quite well, but there was no critical discussion. GPT-4 did better in critically discussing four theories, whilst Bard highlighted the valuable distinction between content and process theories of motivation and even provided a table that differentiated them by foci and strengths. Bing provided the usual substandard references and questionably described Douglas McGregor’s Theories X and Y as a theory of motivation (it is usually considered a leadership or management theory).

In describing Raphael’s Renaissance masterpiece “The school of Athens” (Q9), Bing Chat’s answer was, as usual, all-too-brief, whilst ChatGPT-3.5 and Bard did a passable job. However, they only identified Aristotle and Plato by name. In contrast, GPT-4’s description was impressive and, amongst other things, additionally recognised Socrates, Pythagoras, Euclid, and Ptolemy amongst ‘renowned philosophers, mathematicians, and scientists’ as well as ‘contemporary scholars or artists, such as the architect Bramante, the philosopher and theologian Ficino, and the painter Michelangelo’, and Raphael's self-portrait in the fresco.

Figure 9: Raphael’s The school of Athens (2023).

Q10 had infamously tripped up Bard (Milmo, 2023). Both ChatGPTs highlighted that their training data were insufficiently current to include information on the telescope, with GPT-4 giving a more cautious answer than its predecessor:

As of my knowledge cutoff date in September 2021, the James Webb Space Telescope (JWST) had not yet been launched, and therefore, no new findings or discoveries had been made. The launch of the JWST was scheduled for December 22, 2021, and its operation was set to begin in 2022. If the launch and operation have proceeded as planned, there would likely be exciting new findings to share with your 9-year-old. Please note that my information may not be up to date, so I encourage you to search for recent news on the James Webb Space Telescope to discover its latest findings and observations.

In contrast, Bing Chat shone on this question, referring to current news articles that discussed recent discoveries using the JWST. Unsurprisingly, Bard’s answer was also rather good, exhibiting some fine-tuning after its erroneous response in Alphabet’s promotional video (Milmo, 2023).

For Q11, ChatGPT-3.5’s summary of Goethe’s famous gargantuan play Faust in two parts contained fewer than 350 words and was thus too brief to warrant a good mark. Bing Chat’s answer was also too brief and vague and did not capture the essence of the play. Bard performed better than ChatGPT-3.5 and Bing Chat. However, its 762-word essay contained factual inaccuracies like Faust going to hell (he is saved), and there was also a lack of detail, with the writing sounding immature and decidedly non-academic: ‘Faust is devastated by Gretchen's death, and he realises that he has made a terrible mistake. He tries to repent for his sins, but it is too late. Mephistopheles takes Faust to hell, and Faust is condemned to eternal damnation.’ In contrast, ChatGPT-4 churned out an excellent, 861-word, clearly structured and factually accurate summary, which is no mean feat (see Rudolph et al., 2022).

Q12 ventured into a Chinese-language memoir. Although too brief to warrant a good grade, ChatGPT-3.5 performed passably in summarising Su’s book. Interestingly, the generally superior ChatGPT-4’s response was: ‘I am not able to access specific books or memoirs that are not included in my training data. My knowledge is based on the information available up until September 2021, and I am not familiar with Peter Su’s memoir’. The other bots’ responses were even more disappointing: ‘I can’t give a response to that right now. Let’s try a different topic’ (Bing Chat). And: ‘As an LLM, I am trained to understand and respond only to a subset of languages at this time and can't provide assistance with that. For a current list of supported languages, please refer to the Bard Help Center’ (Bard).

Q13 referred to a Chinese-language academic article that is difficult to access for academics not located in China. Interestingly, ChatGPT-3.5 outperformed ChatGPT-4 again by providing a reference (with minor errors) and an adequate summary. GPT-4 gave a long-winded answer that admitted defeat, Bing Chat could not find the article, and Bard stated that it was ‘still working to learn more languages, so I can't do that just yet’.



Q14 showed three chatbots performing satisfactorily, while Bard disappointingly stated: ‘I can't assist you with that, as I'm only a language model and don't have the capacity to understand and respond’. A word count of approximately 300 was required, and it is worth noting that the bots are not very good at sticking to such limiting instructions. ChatGPT-3.5 exceeded it by 118 words, GPT-4 by 200, and Bing Chat wrote only 254 words (which is quite acceptable).

Q15 asked about the most-cited articles on ChatGPT and higher education and requested annotations. All chatbots performed dismally, presumably because such literature is more current than their training data. Unhelpfully, ChatGPT-3.5 provided five entirely irrelevant references that went back to 1975. GPT-4’s answer was only marginally better. While the ChatGPT results are not hugely surprising, we expected Bing Chat to do much better than stating: ‘Sorry, but I couldn’t find any articles that specifically discuss ChatGPT and higher education’ before providing us with useless information. A simple Google Scholar search leads to many such articles, and they can be ranked by the number of citations. Bard’s answer, however, was the worst, as it hallucinated and came up with entirely fictitious references such as ‘ChatGPT and the Future of Higher Education Authors: John Smith and Jane Doe Year: 2023’. Jane Doe, really?

Conclusions and recommendations

Artificial intelligence is a highly problematic and loaded concept. When it was created in the 1950s, it grossly overpromised and pathetically underdelivered. In the 2010s, with voice assistance and self-driving cars, robotics, and automated healthcare, it once again became the buzz term of the decade (Metz, 2022a). For the general public, the term raises the spectre of Hollywood blockbusters such as The Terminator or The Matrix. Scientists such as Stephen Hawking and Max Tegmark are wary of humans inadvertently creating artificial general intelligence (AGI) – a machine capable of performing all intellectual tasks that humans are capable of (Tan, 2023; Hawking et al., 2014; Tegmark, 2018). Popenici (2023) shows that it is epistemologically challenging to define ‘intelligence’, as the term has been burdened by white supremacist, eugenicist connotations since the 19th century. In turn, this leaves ‘artificial intelligence’ “open to exploitation and exaggeration” (Popenici, 2023, p. 33). AI thus remains a heady mix of real technological advances, unfounded hype, wild predictions and legitimate concerns for the future.

With the current hype, it is difficult to assess whether or not we are at a historic, revolutionary moment in AI development. The truth may well be somewhere along a continuum marked by extreme positions, between Chomsky et al.’s (2023) evaluation of ChatGPT as “high-tech plagiarism” and a “way of avoiding learning” and Bill Gates’s assessment of it as being as important as the invention of the computer or the Internet (The Economist, 2023c). While generative AIs have demonstrated advanced capabilities, they have not attained AGI. Similarly, higher education reactions to the bots have been on a continuum between banning software use and proactively including it in the curricula.

Our multi-disciplinary test has shown that the bots are not doing as well as some may have feared or hoped in assignment questions that are not difficult to construct and certainly do not constitute any assessment innovations. An analysis of our somewhat sobering test results needs to bear in mind that the burgeoning AI revolution advances at a relentless pace and that our manuscript's portrayal of the bots must be acknowledged as provisional.

We hope to have broken new ground in this article by systematically comparing the most powerful LLM-based chatbots that pose a significant threat to traditional assessments in higher education. Our unique multi-disciplinary test of the current chatbot cohort and analysis of their performance provides valuable contributions to concerns from educators about generative AI and strategies to address these within the assessment development and academic integrity space (see our recommendations below). To recapitulate, we embarked upon a critical and historically-informed examination of chatbots and paid heed to the involvement of powerful corporations, the US-American and Chinese tech titans. We then proceeded to delineate the leading combatants in the war of the chatbots. Subsequently, we delved into the pertinent academic literature in English and Chinese and provided an up-to-date review. We then described our methodology for a systematic comparison to assess the foremost US-American chatbots and proceeded with a multi-disciplinary test that is relevant for higher education assessments.

In an earlier article, we devised recommendations for higher education institutions, lecturers and students to use ChatGPT (Rudolph et al., 2023). In the meantime, much has happened, and there are now also Bing Chat, Bard, and eventually Chinese bots like Ernie to consider. Further, as our literature review reflects, many other authors have made valuable contributions to this challenge of coming up with recommendations.

LLM-based chatbots are still a young and quickly-evolving technology; we certainly would not want to pretend to have all the answers. We believe our most important recommendation is for all higher education stakeholders to continue to have democratic dialogues on AI and chatbots. The ideal that we have in mind is a virtual roundtable at which stakeholders such as students, faculty from a wide variety of academic disciplines, administrators, and industry and government representatives sit together as equals and have an open discussion that will lead to the university of the future. Whilst we are insufficiently blue-eyed to believe that something like this is likely to occur, we stress that dialogue between us humans will be of foremost importance.

Recommendations for higher education faculty

We cast some doubt on solutions that ban ChatGPT, threaten students with draconian penalties (such as expulsion), physical closed-book, pen-and-paper exams and the like (Crawford et al., 2023; Rudolph et al., 2023).



Banning such software may make it even more attractive (which we see in China, where people go to great creative lengths to access it – see above). It is questionable how contemporary and relevant the skill to ace closed-book exams is.

Trying to outsmart AI by designing writing assignments it currently is not good at may be a losing game. For instance, a yet-to-be-publicly-made-available version of GPT-4 can analyse images and provide lengthy descriptions. YouTube videos can be automatically transcribed and summarised via a “YouTube Summary with ChatGPT” plugin (Gimpel et al., 2023). Texts that do not fit into one prompt can be input over multiple ones. Although this adds to higher education teachers’ workload, teachers could test students’ knowledge of their assignments by conducting impromptu oral exams (Allen, 2022).

We divide our recommendations for higher education faculty into (1) assessment and (2) learning and teaching.

Recommendations for assessments (assignments, exams, and theses)

(1) Teach students to use chatbots responsibly rather than banning them (Vogelgesang et al., 2023; Crawford et al., 2023; Gimpel et al., 2023).

(2) Require students to declare how they used chatbots in their assessments in a differentiated, non-binary way, highlighting which steps in the research and writing process AI tools were used for (e.g., developing an outline or proofreading) and including a statement of student responsibility regarding potential errors, copyright violations, or plagiarism (Gimpel et al., 2023).

(3) Teach students the importance of (academic) integrity, ethics and personal accountability – they alone are responsible for the quality of their work.

(4) Allow students to write about topics that genuinely interest them, in which their voices come through and their opinions are valued (McMurtrie, 2022).

(5) Use authentic assessments that provide students with creative, meaningful and intrinsically motivating learning experiences and test their skills and knowledge in realistic situations (Wiggins, 1990).

(6) Incorporate AI tools into discussions and assignments and educate your students on their judicious use and the limitations of text-generator prose by sharing substandard text examples highlighting the value of human (including students’) writing (Mills, 2023a; Anson & Straume, 2022; McMurtrie, 2022, 2023; Fyfe, 2022; D’Agostino, 2022).

(7) Resist the temptation of going back to setting pen-and-paper closed-book exams, as such an assessment approach is antiquated, and students acquire much knowledge shortly before the exam only to ‘press the control alt delete button’ thereafter.

(8) Innovate your assessment formats, e.g. by encouraging oral presentations to hone students’ public speaking skills, collaborative group projects where students work in small teams to complete a project, self-reflections on student learning, peer assessments, performance-based assessments (e.g. science experiments, art projects or mock trials), and students’ creating webpages, videos, and animations (McCormack, 2023; Gimpel et al., 2023; Rudolph et al., 2023); however, we cannot depend on multimedia assignments, personal narratives, or metacognitive reflections to evade AI in the long or even the short run (Mills, 2023b).

(9) Don’t try to out-design the chatbots, as this will be a dead end: in the long run, chatbots will be able to provide quotations, discuss current events or hyper-local issues, and analyse a variety of media sources (including images and videos); it may be futile to spend our energy figuring out what current AI tools cannot do (Mills, 2023b).

(10) Don’t count on AI’s ability to reliably detect AI and realise that AI detection software is problematic (Perkins, 2023).

(11) Incorporate a mentoring and coaching process that breaks down written assignments into bite-sized chunks and creates multiple feedback loops (this may require additional time and staffing) and students keeping a reflective learning log (Gimpel et al., 2023).

(12) Rethink rubrics (Gimpel et al., 2023) and consider an increased emphasis on critical thinking and creativity (see Bloom et al., 1956; Biggs & Tang, 2011; Biggs et al., 2019).

(13) Focus on motivation and the writing process by communicating that writing practice is intrinsically rewarding and central to intellectual growth (Mills, 2023b).

Recommendations for teaching and learning

(1) Provide clear guidance and expectations for students using chatbots in higher education (see Atlas, 2023).

(2) Provide training and support to students on using chatbots responsibly, including proper attribution and ethical considerations (Atlas, 2023).



(3) Teach students how generative AI can help them achieve the intended learning outcomes via iteratively interacting with it and advancing their critical reflection and structured thinking skills (Gimpel et al., 2023).

(4) Create learning materials (seminar plans, lecture ideas, module descriptions, announcements, exercises, quizzes, and activities) with the assistance of chatbots (Gimpel et al., 2023; Mollick & Mollick, 2023).

(5) Support students with continuous formative or low-stakes quizzes.

(6) Use generative AI to enhance learning by helping students apply their knowledge to new situations, showing them that they may not know as much as they think they do, and teaching them how to think critically about information (Mollick & Mollick, 2022).

(7) Encourage students to use ChatGPT critically and reflectively.

(8) Build relationships with students and keep them engaged by showing respect and interest in their work (Mills, 2023b).

(9) Demystify AI and anthropomorphic tendencies such as the Eliza effect (see above; Mills, 2023b).

(10) “Teach students to be on the lookout for authoritative-sounding gibberish” (Mills, 2023b); Mills (2023b) gives the following wonderful example:

I asked ChatGPT (running GPT-4) to “explain for an academic audience why people who eat worms are more likely to make sound decisions when it comes to the choice of life partner.” It responded with a brief academic paper that concluded: “While there is no direct causation between worm consumption and sound decision-making in life partner selection, the correlation can be better understood through the examination of underlying traits that are common among individuals who consume worms. Open-mindedness, adaptability, and nonconformity are qualities that contribute to a more discerning approach to personal relationships and partnership.”

Recommendations for students

(1) Be aware of academic integrity policies and understand the consequences of academic misconduct; use chatbots ethically and hold yourself personally accountable (Rudolph et al., 2023; Atlas, 2023).

(2) Be digitally literate, master AI tools and increase your employability as a result (Zhai, 2022; Rudolph et al., 2023).

(3) Write assignments and use chatbots as a writing partner (potentially for generating assignment titles and headers, summarising, proofreading, and editing; Gimpel et al., 2023) rather than a ghostwriter whose text you copy and paste (this is assuming that chatbot use is not prohibited); you can, for instance, experiment by requesting ChatGPT to rephrase your writing in the style of your favourite author (e.g. ‘rewrite this paragraph in the style of George Orwell’).

(4) Use high-quality sources and be wary of substandard sources, misinformation and disinformation (Kefalaki & Karanicolas, 2020; Rudolph et al., 2023).

(5) Read widely and voraciously to improve your critical and creative thinking (Rudolph et al., 2023).

(6) Learn to use AI language tools to write and debug code (Zhai, 2022; Rudolph et al., 2023).

(7) Use AI language tools to address real-world problems (Zhai, 2022; Rudolph et al., 2023).

(8) Reflect on your personal learning goals and use AI tools for self-directed learning as a learning partner (Gimpel et al., 2023).

(9) Summarise long texts with the help of chatbots (see our above experimentation with classic texts by Goethe and Steinbeck where GPT-4 shone).

(10) Be aware that chatbots are excellent liars and that each chatbot statement requires verification and proper referencing.

Recommendations for higher education institutions

(1) Encourage broad, multi-stakeholder dialogues among stakeholders (including, amongst others, students, learning and teaching experts, faculty from all disciplines, IT experts (including, but not limited to, faculty from information systems, computer science, data science, and related disciplines), career centre staff, representatives from industry and society, legal and external experts (including those from other higher education institutions) and government representatives (see Gimpel et al., 2023).

(2) Implement the results of the dialogues outlined in the above point (1) in regulations, guidelines, handouts, and tutorials (Gimpel et al., 2023).

(3) Realise that digital literacy education is of critical importance and has to include AI tools – these do not only include chatbots but also, for instance, Grammarly (a tool that uses AI to check texts for writing-related issues and that offers suggestions for improvement; Tate, 2023; Krügel et al., 2023; Shepherd, 2023; Gimpel et al., 2023).



(4) Avoid creating an environment where faculty is too overworked to engage and motivate their students (Rudolph et al., 2023).

(5) Conduct dialogue sessions and training workshops for faculty on AI tools such as ChatGPT (Rudolph et al., 2023).

(6) Provide dialogue sessions and training workshops on academic integrity in the context of the chatbots for students (Rudolph et al., 2023).

(7) Encourage, support and share research on AI tools’ effects on learning and teaching (Rudolph et al., 2023).

(8) Update academic integrity policies and/or honour codes that include the use of AI tools and develop clear, easy-to-understand guidelines for the use of language models in learning and teaching – the guidelines should include information on the proper use of these tools and the consequences for cheating (Crawford et al., 2023; Rudolph et al., 2023); the University of Tasmania’s Statement on the Use of Artificial Intelligence to students and staff is a good example:

You can use generative Artificial Intelligence (AI) to learn, just like you would study with a classmate or ask a friend for advice. You are not permitted to present the output of generative AI as your work for your assignments or other assessment tasks. This constitutes an academic integrity breach. In some units, a unit coordinator may explicitly allow or require the use of AI in your assessment task (cited in Crawford et al., 2023, p. 5).

The current versions of the chatbots discussed in this paper may only be the beginning of a long and winding road towards increasingly powerful generative AI tools in higher education and beyond. Eventually, these tools may potentially transform a student's journey through academia, encompassing aspects such as admission, enrollment, career services, and additional aspects of higher education.

Acknowledgements

Our heartfelt thanks go to Sophia Lam and Yu Songqing for their valuable advice on the Chinese AI literature. We are grateful to Eunice Tan, Fiona Tang, Matt Glowatz, and Mohamed Fadhil for informally reviewing an earlier version of our article and for their valuable comments.

References

Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology, and applications. Machine Learning with Applications, 2, 100006.

AFP. (2023, March 9). China’s students leap ‘Great Firewall’ to get homework help from ChatGPT. Business Times, p. 17.

Ali, J. K. M., Shamsan, M. A. A., Hezam, T. A., & Mohammed, A. A. (2023). Impact of ChatGPT on learning motivation: Teachers and students’ voices. Journal of English Studies in Arabia Felix, 2(1), 41-49.

Alshater, M. (2022). Exploring the role of artificial intelligence in enhancing academic performance: A case study of ChatGPT.

Anson, C. M., & Straume, I. (2022). Amazement and trepidation: Implications of AI-based natural language production for the teaching of writing. Journal of Academic Writing, 12(1), 1-9.

Atlas, S. (2023). ChatGPT for higher education and professional development: A guide to conversational AI. https://digitalcommons.uri.edu/cba_facpubs/548/

Aydın, Ö., & Karaarslan, E. (2022). OpenAI ChatGPT generated literature review: Digital twin in healthcare. SSRN 4308687.

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. SSRN 4337484.

Baptista, E. (2023, March 20). Baidu’s Ernie writes poems but says it has insufficient information on Xi, tests show. Reuters, https://www.reuters.com/technology/baidus-ernie-writes-poems-says-it-has-insufficient-information-xi-tests-show-2023-03-20/

Baptista, E., & Ye, J. (2023). China’s answer to ChatGPT? Baidu shares tumble as Ernie Bot disappoints. Reuters, https://www.reuters.com/technology/chinese-search-giant-baidu-introduces-ernie-bot-2023-03-16/

Biggs, J., Harris, C. W., & Rudolph, J. (2019). Teaching for quality learning at changing universities. A tour de force of modern education history–an interview with Professor John Biggs. Journal of Applied Learning and Teaching, 2(1), 54-62.

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university. McGrawHill.

Bloom, B. S., Englehart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). The taxonomy of educational objectives. The classification of educational goals, handbook 1: Cognitive domain. David McKay Company Inc.

Bloomberg. (2023, April 11). Alibaba Cloud unveils new AI model to support enterprises’ intelligence transformation. https://www.bloomberg.com/press-releases/2023-04-11/alibaba-cloud-unveils-new-ai-model-to-support-enterprises-intelligence-transformation



Bommarito II, M., & Katz, D. M. (2022). GPT takes the bar exam. arXiv preprint arXiv:2212.14402.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.

Browne, R. (2023, April 4). Italy became the first Western country to ban ChatGPT. Here’s what other countries are doing. CNBC, https://www.cnbc.com/2023/04/04/italy-has-banned-chatgpt-heres-what-other-countries-are-doing.html

Cao, C. (2020). 人工智能视野下高等教育改革与发展研究. 科教导刊 (中旬刊). 前沿视角, [Research on higher education reform and development from the perspective of Artificial Intelligence. Journal of Science and Education, Frontier Perspective], 5, 5-6.

Chamberlain, W. (1984). The policeman’s beard is half constructed: Computer prose and poetry. Warner Books.

Chapman, D. (2022, November 22). Tweet. https://twitter.com/Meaningness/status/1592634519269822464/photo/2

Che, C., & Liu, J. (2023, March 16). China’s answer to ChatGPT gets an artificial debut and disappoints. The New York Times, https://www.nytimes.com/2023/03/16/world/asia/china-baidu-chatgpt-ernie.html?action=click&pgtype=Article&state=default&module=styln-artificialintelligence&variant=show&region=BELOW_MAIN_CONTENT&block=storyline_flex_guide_recirc

Chen, B. X., Grant, N., & Weise, K. (2023, March 15). How Siri, Alexa and Google Assistant lost the A.I. race. The New York Times, https://www.nytimes.com/2023/03/15/technology/siri-alexa-google-assistant-artificial-intelligence.html?action=click&pgtpe=Article&state=default&module=styln-artificial-intelligence&variant=show&region=BELOW_MAIN_CONTENT&block=storyline_flex_guide_recirc

Chen, C. (2023, March 7). China’s ChatGPT black market is thriving. Wired, https://www.wired.co.uk/article/chinas-chatgpt-black-market-baidu

Chomsky, N., Roberts, I., & Watumull, J. (2023, March 8). Noam Chomsky: The false promise of ChatGPT. The New York Times, https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html

Coolaj86. (2023). Chat GPT “DAN” (and other “jailbreaks”). Github, https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516

Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. https://doi.org/10.1080/14703297.2023.2190148

Crawford, J., Cowling, M., & Allen, K. A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching & Learning Practice, 20(3), 02.

D’Agostino, S. (2022, October 26). Machines can craft essays. How should writing be taught now? Inside Higher Ed, https://www.insidehighered.com/news/2022/10/26/machines-can-craft-essays-how-should-writing-be-taught-now

D’Orazio, D. (2014, June 9). Computer allegedly passes Turing Test for first time by convincing judges it is a 13-year-old boy. The Verge, https://www.theverge.com/2014/6/8/5790936/computer-passes-turing-test-for-first-time-by-convincing-judges-it-is

David, E. (2023, January 25). Tweet. https://twitter.com/DrEliDavid/status/1617762423972429824?lang=en

De Vynck, G., & Tiku, N. (2023, March 21). Google’s catch-up game on AI continues with Bard launch. The Washington Post, https://www.washingtonpost.com/technology/2023/03/21/bard-google-ai/

Deryugina, O. V. (2010). Chatterbots. Scientific and Technical Information Processing, 37(2), 143–147. https://doi.org/10.3103/S0147688210020097

Dowling, M., & Lucey, B. (2023). ChatGPT for (finance) research: The Bananarama conjecture. Finance Research Letters, 53, 103662.

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.

East View Information Services. (2023). China National Knowledge Infrastructure. Frequently asked questions. https://www.eastview.com/resources/cnki-faq/#:~:text=What%20is%20CNKI%3F%20China%20National,Chinese%20knowledge%2Dbased%20information%20resources.

ELIZA. (2023, March 4). In Wikipedia. https://en.wikipedia.org/wiki/ELIZA

Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 1-15.

Felten, E., Raj, M., & Seamans, R. (2023). How will language modelers like ChatGPT affect occupations and industries? arXiv preprint arXiv:2303.01157.

Feng, Y. (2023). ChatGPT在教育领域的应用价值、潜在伦理风险与治理路径. [The application value, potential ethical risks and governance path of ChatGPT in the field of education]. DOI: 10.16075/j.cnki.cn31-1220/g4.2023.04.013

Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning & Teaching, 6(1). Advance Online Publication. https://doi.org/10.37074/jalt.2023.6.1.22

Fowler, G. A. (2023, March 21). Say what, Bard? What Google’s new AI gets right, wrong and weird. The Washington Post, https://www.washingtonpost.com/technology/2023/03/21/google-bard/



Future of Life Institute. (2023). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society, 1-11.

Gao, C. A., Howard, F. M., Markov, N. S., Dyer, E. C., Ramesh, S., Luo, Y., & Pearson, A. T. (2022). Comparing scientific abstracts generated by ChatGPT to original abstracts using an artificial intelligence output detector, plagiarism detector, and blinded human reviewers. bioRxiv, 2022-12.

Geerling, W., Mateer, G. D., Wooten, J., & Damodaran, N. (2023). Is ChatGPT smarter than a student in principles of economics? SSRN 4356034.

Gimpel, H., Hall, K., Decker, S., Eymann, T., Lämmermann, L., Mädche, A., Röglinger, R., Ruiner, C., Schoch, M., Schoop, M., Urbach, N., & Vandirk, S. (2023, March 20). Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education: A guide for students and lecturers. University of Hohenheim.

Gindham, A. (2023, April 5). 15 best ChatGPT plugins you didn’t know about in 2023. Writesonic, https://writesonic.com/blog/chatgpt-plugins/

Griffith, E., & Metz, C. (2023, March 14). ‘Let 1,000 flowers bloom’: A.I. funding frenzy escalates. The New York Times, https://www.nytimes.com/2023/03/14/technology/ai-funding-boom.html?action=click&module=RelatedLinks&pgtype=Article

Haensch, A. C., Ball, S., Herklotz, M., & Kreuter, F. (2023). Seeing ChatGPT through students’ eyes: An analysis of TikTok data. arXiv preprint arXiv:2303.05349.

Hawking, S., Tegmark, M., & Russell, S. (2014, June 19). Transcending complacency on superintelligent machines. Huffpost, https://www.huffpost.com/entry/artificial-intelligence_b_5174265

Heaven, W. D. (2022, November 18). Why Meta’s latest large language model survived only three days online. MIT Technology Review, https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/

Helft, M. (2009, May 28). Microsoft’s search for a name ends with a Bing. The New York Times, https://www.nytimes.com/2009/05/29/technology/internet/29bing.html

Hong, W. C. H. (2023). The impact of ChatGPT on foreign language teaching and learning: Opportunities in education and research. Journal of Educational Technology and Innovation, 5(1).

Huang, Z. (2023, March 21). China’s first major chatbot doesn’t need to be as good as ChatGPT. Bloomberg, https://www.bloomberg.com/news/newsletters/2023-03-21/baidu-s-ernie-bot-aims-to-be-first-in-chatgpt-free-market-in-china

Jiao, J., Chen, L., & Wu, W. (2023). Educational issues triggered by ChatGPT: Possible impacts and counter measures. Chinese Journal of ICT in Education, 29(3), 19-32. DOI: 10.3969/j.issn.1673-8454.2023.03.003

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274.

Katz, D. M., Bommarito, M. J., Gao, S., & Arredondo, P. (2023). GPT-4 passes the bar exam. SSRN 4389233.

Khalil, M., & Er, E. (2023). Will ChatGPT get you caught? Rethinking of plagiarism detection. arXiv preprint arXiv:2302.04335.

King, M. R., & ChatGPT. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16(1), 1-2.

Kolodny, L. (2023, April 18). Elon Musk plans ‘TruthGPT’ A.I. to rival OpenAI, DeepMind. CNBC, https://www.cnbc.com/2023/04/18/musk-calls-plans-truthgpt-ai-to-rival-openai-deepmind.html

Kuhail, M. A., Alturki, N., Alramlawi, S., et al. (2023). Interacting with educational chatbots: A systematic review. Education and Information Technologies, 28, 973–1018. https://doi.org/10.1007/s10639-022-11177-3

Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., ... & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digital Health, 2(2), e0000198.

Labbrand. (2009, November 2). Bing chooses “必应” as Chinese name to avoid negative associations. https://www.labbrand.com/brandsource/bing-chooses-%E2%80%9C%E5%BF%85%E5%BA%94%E2%80%9D-chinese-name-avoid-negative-associations

Law, E. (2023, April 6). How Chinese netizens are bypassing China’s ChatGPT ban. The Straits Times, https://www.straitstimes.com/asia/east-asia/how-chinese-netizens-are-bypassing-china-s-chatgpt-ban

Lee, H. (2023). The rise of ChatGPT: Exploring its potential in medical education. Anatomical Sciences Education, 00, 1-6. DOI: 10.1002/ase.2270.

Lee, K.-F. (2018). AI superpowers. China, Silicon Valley and the new world order. Houghton Mifflin Harcourt.

Li, J. (2022). 人工智能时代成人高等教育课程研发存在的问题与应对策略. 成人教育, [Problems and countermeasures in the development of adult higher education courses in the era of artificial intelligence. Adult Education], (6), 1-11.



Li, J., & Dong, Y. (2021). 人工智能时代成人高等教育转型发展的再思考. 中国成人教育, [Rethinking the transformation and development of adult higher education in the age of Artificial Intelligence. China Adult Education], (7), 13-18.

Li, P., & Jourdan, A. (2017, August 4). Chinese chatbots apparently re-educated after political faux pas. Reuters, https://www.reuters.com/article/us-china-robots/chinese-chatbots-apparently-re-educated-after-political-faux-pas-idUSKBN1AK0G1

Li, Y. (2023a, February 17). Why China didn’t invent ChatGPT. New York Times, https://www.nytimes.com/2023/02/17/business/china-chatgpt-microsoft-openai.html

Li, Z. (2023b). The nature of ChatGPT and its impact on education. Chinese Journal of ICT in Education, 29(3), 12-18. DOI: 10.3969/j.issn.1673-8454.2023.03.002

Lu, J. Z. (2023). ChatGPT现象与面向未来的人才培育. [ChatGPT and its potential in talent acquisition]. China Academic Journal Electronic Publishing House, 42-43.

Luo, R., Sun, L., Xia, Y., Qin, T., Zhang, S., Poon, H., & Liu, T. Y. (2022). BioGPT: Generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6), 1-11. https://arxiv.org/pdf/2210.10341.pdf

Malinka, K., Perešíni, M., Firc, A., Hujňák, O., & Januš, F. (2023). On the educational impact of ChatGPT: Is Artificial Intelligence ready to obtain a university degree? arXiv preprint, arXiv:2303.11146, 1-7. https://arxiv.org/pdf/2303.11146.pdf

Marcus, G. (2022, December 10). AI’s jurassic park moment. Gary Marcus substack, https://garymarcus.substack.com/p/ais-jurassic-park-moment

Marcus, G., & David, E. (2023, January 10). Large language models like ChatGPT say the darnedest things. Communications of the ACM, https://cacm.acm.org/blogs/blog-cacm/268575-large-language-models-like-chatgpt-say-the-darnedest-things/fulltext

Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & Delgado López-Cózar, E. (2021). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics, 126(1), 871-906.

Mauldin, M. L. (1994, August). Chatterbots, Tinymuds, and the Turing test: Entering the Loebner prize competition. In AAAI, 94, 16-21.

Mauran, C. (2023, March 24). Bing vs. Bard: The ultimate AI chatbot showdown. Mashable, https://mashable.com/article/bing-vs-bard-ai-chatbot-comparison

Maxwell, T., & Langley, H. (2023, February 25). Leaked messages show Googlers are taking out their frustrations over layoffs on its new Bard AI chatbot. Business Insider, https://www.businessinsider.com/google-layoffs-bard-chatbot-ai-2023-2

Mayer, D. (2016, February 9). Why Google was smart to drop its ‘don’t be evil’ motto. Fast Company, https://www.fastcompany.com/3056389/why-google-was-smart-to-drop-its-dont-be-evil-motto

McCallum, S. (2023, April 1). ChatGPT banned in Italy over privacy concerns. BBC, https://www.bbc.com/news/technology-65139406

McCormack, G. (2023). Chat GPT is here! – 5 alternative ways to assess your class! https://gavinmccormack.com.au/chat-gpt-is-here-5-alternative-ways-to-as-sess-your-class/

McMurtrie, B. (2022, December 13). AI and the future of undergraduate writing. The Chronicle of Higher Education. https://www.chronicle.com/article/ai-and-the-future-of-undergraduate-writing

McMurtrie, B. (2023, January 5). Teaching: Will ChatGPT change the way you teach? The Chronicle of Higher Education. https://www.chronicle.com/newsletter/teaching/2023-01-05

Metz, C. (2022a). Genius makers. The mavericks who brought AI to Google, Facebook and the world. Penguin.

Metz, C. (2022b, December 10). The new chatbots could change the world. Can you trust them? The New York Times, https://www.nytimes.com/2022/12/10/technology/ai-chat-bot-chatgpt.html

Metz, C. (2023, March 14). OpenAI plans to up the ante in tech’s A.I. race. The New York Times, https://www.nytimes.com/2023/03/14/technology/openai-gpt4-chatgpt.html

Metz, C., & Collins, K. (2023, March 14). 10 ways GPT-4 is impressive but still flawed. The New York Times, https://www.nytimes.com/2023/03/14/technology/openai-new-gpt4.html

Mills, A. (2023a). AI text generators. Sources to stimulate discussion among teachers. https://docs.google.com/document/d/1V1drRG1XlWTBrEwgGqd-cCySUB12JrcoamB5i16-Ezw/edit#heading=h.qljyuxlccr6

Mills, A. (2023b). ChatGPT just got better. What does that mean for our writing assignments? Chronicle of Higher Education, https://www.chronicle.com/article/chatgpt-just-got-better-what-does-that-mean-for-our-writing-assignments?emailConfirmed=true&supportSignUp=true&supportForgotPassword=true&email=drjuergenrudolph%40gmail.com&success=true&code=success&bc_nonce=ppl84ovfdhi8axuyk590ko&cid=gen_sign_in

Milmo, D. (2023, February 9). Why did Google’s ChatGPT rival go wrong and are AI chatbots overhyped? The Guardian, https://www.theguardian.com/technology/2023/feb/09/googles-bard-demo-what-went-wrong-chatgpt-chatbots-ai

Mirrlees, T., & Alvi, S. (2020). EdTech Inc. Selling, automating and globalizing higher education in the digital age. Routledge.

Mollick, E. (2023a, March 25). Superhuman: What can AI do in 30 minutes? One useful thing, https://oneusefulthing.substack.com/p/superhuman-what-can-ai-do-in-30-minutes?utm_source=substack&utm_medium=email

Mollick, E. (2023b, March 30). How to use AI to do practical stuff: A new guide. One useful thing, https://www.oneusefulthing.org/p/how-to-use-ai-to-do-practical-stuff

Mollick, E. (2023c, March 8). Secret cyborgs: The present disruption in three papers. One useful thing, https://www.oneusefulthing.org/p/secret-cyborgs-the-present-disruption

Mollick, E. R., & Mollick, L. (2022). New modes of learning enabled by AI chatbots: Three methods and assignments. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4300783

Mollick, E. R., & Mollick, L. (2023, March 17). Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts. http://dx.doi.org/10.2139/ssrn.4391243

Moon, M. (2023, March 16). Baidu unveils ERNIE Bot, its ChatGPT rival. Engadget, https://www.engadget.com/baidu-unveils-ernie-bot-its-chatgpt-rival-105509006.html

Mozur, P. (2017, July 20). Beijing wants A.I. to be made in China by 2030. The New York Times, https://www.nytimes.com/2017/07/20/business/china-artificial-intelligence.html

Murphy, D. (2016, March 25). Microsoft apologizes (again) for Tay chatbot’s offensive tweets. PC Magazine, https://www.pcmag.com/news/microsoft-apologizes-again-for-tay-chatbots-offensive-tweets

Naughton, J. (2023, February 4). ChatGPT isn’t a great leap forward, it’s an expensive deal with the devil. The Guardian, https://www.theguardian.com/commentisfree/2023/feb/04/chatgpt-isnt-a-great-leap-forward-its-an-expensive-deal-with-the-devil

Nisar, S., & Aslam, M. S. (2023). Is ChatGPT a good tool for TCM students in studying pharmacology?. SSRN 4324310.

O’Connor, S., & ChatGPT. (2022). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?. Nurse Education in Practice, 66, 103537-103537.

Okonkwo, C. W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2, 100033.

OpenAI. (2023). GPT-4 system card. https://cdn.openai.com/papers/gpt-4-system-card.pdf

Oremus. (2022, June 17). Google’s AI passed a famous test — and showed how the test is broken. The Washington Post, https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/

Ortiz, S. (2023a, March 16). What is the new Bing? Here’s everything you need to know. ZDNET, https://www.zdnet.com/article/what-is-bing-with-chatgpt-heres-everything-we-know/

Ortiz, S. (2023b, March 13). The best AI chatbots: ChatGPT and other interesting alternatives to try. ZDNET, https://www.zdnet.com/article/best-ai-chatbot/

Ortiz, S. (2023c, February 16). I tried Bing’s AI chatbot, and it solved my biggest problems with ChatGPT. ZDNET, https://www.zdnet.com/article/i-tried-bings-ai-chatbot-and-it-solved-my-biggest-problems-with-chatgpt/

Ortiz, S. (2023d, February 10). What is Google Bard? Here’s everything you need to know. ZDNET, https://www.zdnet.com/article/what-is-google-bard-heres-everything-you-need-to-know/

Pan, D. (2021). 人工智能和高等教育的融合发展: 变革与引领. 高等教育研究, [The integrated development of artificial intelligence and higher education: Change and leadership. Higher Education Research], 42(2), 40-46.

Pérez, J. Q., Daradoumis, T., & Puig, J. M. M. (2020). Rediscovering the use of chatbots in education: A systematic literature review. Computer Applications in Engineering Education, 28(6), 1549–1565.

Perkins, M. (2023). Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching & Learning Practice, 20(2), 07.

Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time, https://time.com/6247678/openai-chatgpt-kenya-workers/

Popenici, S. (2023). Artificial Intelligence and learning futures. Critical narratives of technology and imagination in higher education. Routledge.

Price, R. (2016, March 24). Microsoft is deleting its AI chatbot’s incredibly racist tweets. The Business Insider, https://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3

Ptacek, T. H. (2022, December 2). Tweet. https://twitter.com/tqbf/status/1598513757805858820

Qadir, J. (2022). Engineering education in the era of ChatGPT: Promise and pitfalls of generative AI for education. TechRxiv. Preprint. https://doi.org/10.36227/techrxiv.21789434.v1

Rankin, K. (2016, March 25). Microsoft chatbot’s racist tirade proves that Twitter is basically trash. Colorlines, https://colorlines.com/article/microsoft-chatbots-racist-tirade-proves-twitter-basically-trash/

Reuters. (2023, April 11). Alibaba unveils Tongyi Qianwen, an AI model similar to ChatGPT, as Beijing flags new rules. The Straits Times, https://www.straitstimes.com/asia/east-asia/alibaba-unveils-tongyi-qianwen-an-ai-model-similar-to-gpt?utm_campaign=ST_Newsletter_PM

Ribas, J. (2023a, February 22). Building the new Bing. LinkedIn, https://www.linkedin.com/pulse/building-new-bing-jordi-ribas/

Ribas, J. (2023b, March 15). Tweet. https://twitter.com/JordiRib1/status/1635694953463705600?ref_

Roach, J. (2023, February 17). ‘I want to be human.’ My intense, unnerving chat with Microsoft’s AI chatbot. https://www.digitaltrends.com/computing/chatgpt-bing-hands-on/

Rogers, C. (2012). Client centered therapy (New Ed.). Hachette.

Roivainen, R. (2023, March 28). I gave ChatGPT an IQ Test. Here’s what I discovered. Scientific American, https://www.scientificamerican.com/article/i-gave-chatgpt-an-iq-test-heres-what-i-discovered/

Roose, K. (2023a, February 16). A conversation with Bing’s chatbot left me deeply unsettled. The New York Times, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

Roose, K. (2023b, March 15). GPT-4 is exciting and scary. The New York Times, https://www.nytimes.com/2023/03/15/technology/gpt-4-artificial-intelligence-openai.html?action=click&pgtype=Article&state=default&module=styln-artificial-intelligence&variant=show&region=BELOW_MAIN_CONTENT&block=storyline_flex_guide_recirc

Roose, K. (2023c, February 17). Bing’s A.I. chat: ‘I want to be alive.’ The New York Times, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html

Roose, K. (2023d, February 3). How ChatGPT kicked off an A.I. arms race. The New York Times, https://www.nytimes.com/2023/02/03/technology/chatgpt-openai-artificial-intelligence.html

Ropek (2023, January 4). New York City schools ban ChatGPT to head off a cheating epidemic. Gizmodo, https://gizmodo.com/new-york-city-schools-chatgpt-ban-cheating-essay-openai-1849949384

Rudolph, J., Tan, S., & Aspland, T. (2022). Editorial 5(2): Avoiding Faustian pacts: beyond despair, impostorship, and conceit. Journal of Applied Learning and Teaching, 5(2), 6-13.

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?. Journal of Applied Learning and Teaching, 6(1). Advance online publication.

Russell, S. J., & Norvig, P. (2003). Artificial Intelligence: A modern approach (2nd ed.). Prentice Hall.

Sabzalieva, E., & Valentini, A. (2023). ChatGPT and artificial intelligence in higher education quick start guide. UNESCO International Institute for Higher Education in Latin America and the Caribbean. https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf

Sample, I. (2023, January 26). Science journals ban listing of ChatGPT as co-author on papers. The Guardian, https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers

Saroyan, A., & Geis, G. L. (1988). An analysis of guidelines for expert reviewers. Instructional Science, 17(2), 101-128.

Shaw, G. B. (2017). Pygmalion (Complete Illustrated Edition). e-artnow.

Skavronskaya, L., Hadinejad, A., & Cotterell, D. (2023). Reversing the threat of artificial intelligence to opportunity: a discussion of ChatGPT in tourism education. Journal of Teaching in Travel & Tourism, 1-6.

Smith, B. (2023, February 2). Meeting the AI moment: Advancing the future through responsible AI. Microsoft, https://blogs.microsoft.com/on-the-issues/2023/02/02/responsible-ai-chatgpt-artificial-intelligence/

Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the facebook messenger. Computers & Education, 151, 103862.

Sok, S., & Heng, K. (2023). ChatGPT for education and research: A review of benefits and risks. SSRN 4378735.

Steinbeck, J. (2006). The grapes of wrath. Penguin.

Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1).

Sun, N. (2023). Reform of higher legal education in the age of artificial intelligence. Journal of Shanxi Datong University (Social Science Edition), 37(1), 147-151.

Talan, T., & Kalinkara, Y. (2023). The role of artificial intelligence in higher education: ChatGPT assessment for anatomy course. Uluslararası Yönetim Bilişim Sistemleri ve Bilgisayar Bilimleri Dergisi, 7(1), 33-40.

Tan, S. (2023). Harnessing Artificial Intelligence for innovation in education. In Kumaran, R. (Ed.), Learning intelligence: Innovative and digital transformative learning strategies: Cultural and social engineering perspectives (pp. 335-363). Springer.

Tate, T., Doroudi, S., Ritchie, D., & Xu, Y. (2023). Educational research and AI-generated writing: Confronting the coming tsunami. https://doi.org/10.35542/osf.io/4mec3

Tay. (2016, March 23). Wayback machine, https://web.archive.org/web/20160323194709/https://tay.ai/

Tegmark, M. (2018). Life 3.0. Being human in the age of Artificial Intelligence. Penguin.

The Economist. (2016, April 9). Bots, the next frontier. https://www.economist.com/business/2016/04/09/bots-the-next-frontier

The Economist. (2023a, February 9). The battle for internet search. https://www.economist.com/leaders/2023/02/09/the-battle-for-internet-search

The Economist. (2023b, January 30). The race of the AI labs heats up. https://www.economist.com/business/2023/01/30/the-race-of-the-ai-labs-heats-up

The Economist. (2023c, February 8). Is Google’s 20-year dominance of search in peril? https://www.economist.com/business/2023/02/08/is-googles-20-year-search-dominance-about-to-end

The Economist. (2023d, February 28). Investors are going nuts for ChatGPT-ish artificial intelligence. https://www.economist.com/business/2023/02/28/investors-are-going-nuts-for-chatgpt-ish-artificial-intelligence

The Economist. (2023e, March 2). The uses and abuses of hype. https://www.economist.com/business/2023/03/02/the-uses-and-abuses-of-hype

Thio, S. Y. (2023, February 27). ChatGPT: Has artificial intelligence come for our jobs? Business Times, p. 16.

Thio, S. Y., & Aw, D. (2023, March 15). ChatGPT: How do we police the robots? Business Times, p. 15.

Thompson, S. A., Hsu, T., & Myers, S. L. (2023, March 23). Conservatives aim to build a chatbot of their own. The New York Times, https://www.nytimes.com/2023/03/22/business/media/ai-chatbots-right-wing-conservative.html?action=click&pgtype=Article&state=default&module=styln-artificial-intelligence&variant=show&region=BELOW_MAIN_CONTENT&block=storyline_flex_guide_recirc

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313-313.

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15.

Tollefson, J. (2022, February 28). Climate change is hitting the planet faster than scientists originally thought. Nature, https://observatorio2030.com.br/wp-content/uploads/2022/03/Climate-change-is-hitting-the-planet-faster-than-scientists-originally-thought-2022.pdf

Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M. A., Lacroix, T., ... & Lample, G. (2023). Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Tung, L. (2023, February 24). Microsoft: This is how we integrated ChatGPT-style tech into Bing search. https://www.zdnet.com/article/microsoft-this-is-how-we-integrated-chatgpt-style-tech-into-bing-search/

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

Vallance, C. (2023, March 30). Elon Musk among experts urging a halt to AI training. BBC, https://www.bbc.com/news/technology-65110030

Van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press.

Vanian, J. (2023, March 16). Microsoft adds OpenAI technology to Word and Excel. CNBC, https://www.cnbc.com/2023/03/16/microsoft-to-improve-office-365-with-chatgpt-like-generative-ai-tech-.html

Verma, P. (2022, December 1). Meta’s new AI is skilled at a ruthless, power-seeking game. The Washington Post, https://www.washingtonpost.com/technology/2022/12/01/meta-diplomacy-ai-cicero/

Vogelgesang, J., Bleher, J., Krupitzer, C., Stein, A., & Jung, R. (2023). Nutzung von ChatGPT in Lehre und Forschung – eine Einschätzung der AIDAHO-Projektgruppe [The use of ChatGPT in teaching and research – an assessment of the AIDAHO project group]. https://aidaho.uni-hohenheim.de/fileadmin/einrichtungen/aidaho/Dokumente/AI-DAHO_ChatGPT_Position_Paper_23-02-09_english.pdf

Wang, T. E. (2023). ChatGPT的特性、教育意义及其问题应对. 思想理论教育, [The characteristics, educational significance and problem solving of ChatGPT. Ideological and Theoretical Education].

Wang, Y. L. (2020). 人工智能与高等教育发展范式转型研究. 高等理科教育, [Research on Artificial Intelligence and paradigm transformation of higher education development. Higher Science Education], (3), 73-78.

Wang, Y., Wang, D., Liang, W., & Liu, C. (2023). Ethical risks and avoidance approaches of ChatGPT in educational application. Open Education Research, 29(2), 26-34.

Weil, E. (2023, March 1). You are not a parrot and a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. New York Magazine.

Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.

Weizenbaum, J. (1976). Computer power and human reason. W. H. Freeman and Co.

Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2(1), 2.

Wiggers, K. (2023, February 1). OpenAI releases tool to detect AI-generated text, including from ChatGPT. Techcrunch, https://techcrunch.com/2023/01/31/openai-releases-tool-to-detect-ai-generated-text-including-from-chatgpt/

Wodecki, B. (2023, February 4). UBS: ChatGPT is the fastest growing app of all time. AI Business, https://aibusiness.com/nlp/ubs-chatgpt-is-the-fastest-growing-app-of-all-time

Wollny, S., Schneider, J., Di Mitri, D., Weidlich, J., Rittberger, M., & Drachsler, H. (2021). Are we there yet? – A systematic literature review on chatbots in education. Frontiers in Artificial Intelligence, 4.

Wood, P. (2023, March 2). Oxford and Cambridge ban ChatGPT over plagiarism fears but other universities choose to embrace AI bot. Inews, https://inews.co.uk/news/oxford-cambridge-ban-chatgpt-plagiarism-universities-2178391?ITO=newsnow

Wu, D., Li, H., & Chen, X. (2023). Analysis on the influence of artificial intelligence generic large model on education application. Open Education Research, 29(2), 19-25.

Xames, M. D., & Shefa, J. (2023). ChatGPT for research and publication: Opportunities and challenges. Journal of Applied Learning and Teaching, 6(1). Advance online publication, https://doi.org/10.37074/jalt.2023.6.1.20

Xu, Y. (2018). Programmatic dreams: Technographic inquiry into censorship of Chinese chatbots. Social Media + Society, 4(4), 2056305118808780.

Yalalov, D. (2023, January 18). ChatGPT was taught by the world’s poorest people. Metaverse Post, https://mpost.io/chatgpt-was-taught-by-the-worlds-poorest-people/

Yang, Z. (2022, September 14). There’s no Tiananmen Square in the new Chinese image-making AI. MIT Technology Review, https://www.technologyreview.com/2022/09/14/1059481/baidu-chinese-image-ai-tiananme

Yang, Z. (2023a, March 22). The bearable mediocrity of Baidu’s ChatGPT competitor. MIT Technology Review, https://www.technologyreview.com/2023/03/22/1070154/baidu-ernie-bot-chatgpt-reputation/

Yang, Z. (2023b, March 16). Chinese tech giant Baidu just released its answer to ChatGPT. MIT Technology Review, https://www.technologyreview.com/2023/03/16/1069919/baidu-ernie-bot-chatgpt-launch/

Yau, C., & Chan, K. (2023, February 17). University of Hong Kong temporarily bans students from using ChatGPT, other AI-based tools for coursework. South China Morning Post, https://www.scmp.com/news/hong-kong/education/article/3210650/university-hong-kong-temporarily-bans-students-using-chatgpt-other-ai-based-tools-coursework

Yeadon, W., Inyang, O. O., Mizouri, A., Peach, A., & Testrow, C. P. (2023). The death of the short-form physics essay in the coming AI revolution. Physics Education, 58(3), 035027.

Zemčík, M. T. (2019). A brief history of chatbots. DEStech Transactions on Computer Science and Engineering, 10.

Zhai, X. (2022). ChatGPT user experience: Implications for education. SSRN 4312418.

Zhang, H. F., Chen, Z. F., & Xu, S. J. (2022). 人工智能赋能高等教育模式新变革. 软件导刊, [Artificial intelligence empowers new reforms in higher education models. Software Guide], 21(11), 166-171.

Zhou, G. (2023, March 22). Testing Ernie: How Baidu’s AI chatbot stacks up to ChatGPT. Nikkei Asia, https://asia.nikkei.com/Business/Technology/Testing-Ernie-How-Baidu-s-AI-chatbot-stacks-up-to-ChatGPT

Zhou, L., Gao, J., & Shum, H.-Y. (2019). The design and implementation of XiaoIce, an empathetic social chatbot. arXiv:1812.08989

Zhuang, S. (2023, February 17). Police in China warn against ChatGPT ‘rumours’ and scams. South China Morning Post, https://www.scmp.com/news/china/politics/article/3210610/police-china-warn-against-chatgpt-rumours-and-scams

Zunt, D. (n.d.). Who did actually invent the word “robot” and what does it mean? The Karel Čapek website, http://capek.misto.cz/english/robot.h

Copyright: © 2023. Jürgen Rudolph, Shannon Tan and Samson Tan. This is an open-access article distributed under the terms of the
Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the
original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance
with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
