AI Anxiety and Fear: A Look at Perspectives of Information Science Students and
Professionals Towards Artificial Intelligence
Brady D. Lund, Nishith Reddy Mannuru, Daniel Agbaji
Abstract
The rapid integration of artificial intelligence (AI) within society, and the emergence of the
Fourth Industrial Revolution (4IR), have ignited a spectrum of emotions ranging from
enthusiasm to anxiety. This study investigates the depth of AI anxiety and fear among a
population of information science students and professionals. Utilizing a survey of over 200
current students and professionals, this study explores the connections between age, gender
identity, ethnicity, geographic location, educational attainment, and residence, and the levels of
anxiety and fear associated with AI and the 4IR. The findings reveal nuanced relationships, with
age, ethnicity, academic achievement, and regional context serving as critical differentiators in
4IR and AI anxiety within this population. Students and professionals alike may benefit from
seeking further education about this emerging technology.
Introduction
The rapid advancement of artificial intelligence, especially in the form of generative AI
technologies and large language models like ChatGPT, has been nothing short of remarkable in
recent years. Some researchers and professionals have found this emerging technology to be a
great democratizing force for society – creating a level playing field for innovators who may
struggle with language, educational, or socioeconomic difficulties [1]. However, the embrace of
this technology is far from universal, as some individuals eagerly adopt these innovations while
others remain cautious or even resistant. While much scholarly attention has been devoted to
understanding AI adoption among early enthusiasts, there is a significant gap in our
understanding of the anxieties and fears held by information science students and information
professionals regarding the emergence of generative AI technologies. Even Elon Musk, whose
funding supported OpenAI, the company behind ChatGPT, warned in 2014 that “With artificial
intelligence we are summoning the demon” [2].
This study aims to gain a deeper understanding of feelings of anxiety and fear related to artificial
intelligence among information science students and professionals. To achieve this, we have
employed a comprehensive approach that incorporates demographic questions and items adapted
from the State-Trait Anxiety Inventory (STAI) and State Fear Inventory to explore information
science students and professionals’ perceptions of “artificial intelligence” and the “fourth
industrial revolution.” By utilizing various analytical techniques, including correlation,
regression, and cluster analysis, we intend to examine the connections between demographic
factors and anxiety-fear levels. This research seeks to primarily address the research question,
“What are the distinguishing factors that contribute to fears associated with emerging
technologies within communities of information science students and professionals?”
Definitions
There is considerable disagreement about how the terms “artificial intelligence” and “fourth
industrial revolution” should be defined, as they have both narrow and broad, general and
contextualized understandings among different disciplines and populations [3]. For the purposes
of this study, it was critical to clearly define how these terms were used from the outset, so that
study participants would firmly grasp what technologies, tools, and concepts they were being
asked to consider. The following are the definitions of these two terms that were provided to the
study’s subjects:
Artificial Intelligence is the development and use of digital/computer-based intelligence to
replicate human abilities, through machine learning and automation, with the goal of enhancing
human life by automating tasks and expanding intelligence beyond human capabilities.
The Fourth Industrial Revolution is a transformative period marked by the integration of digital
technologies into industries, revolutionizing processes through digitalization, automation, and
emerging technologies, impacting human activities and enhancing information handling and
outcomes.
Additionally, this research study focuses on the phenomenon of AI anxiety and AI fear, which we
define as feelings of discomfort or nervousness when thinking about or interacting with artificial
intelligence tools like ChatGPT.
Literature Review
The following literature review is based on existing research relating to generative artificial
intelligence, the fourth industrial revolution in society, AI anxiety, and AI fear. The literature
was gathered from relevant EBSCO databases in Spring 2024 and was used to inform the
development of this study’s primary research question and the study’s questionnaire instrument.
AI anxiety is a relatively recent phenomenon, coinciding with the increasing accessibility of AI
technology, particularly generative artificial intelligence (AI) technology. In contrast,
technology-related anxiety has been a well-documented area of study for some time. Broos
observed that females tend to exhibit higher levels of anxiety towards computers than males,
with experience and exposure to technology playing a significant role in moderating this anxiety
[4]. Hsieh et al. further noted that technology anxiety varies significantly based on factors like
age, gender, and one's exposure to technology [5]. Likewise, Berner et al. found that older adults
who actively engage in digital social participation, belong to younger age groups, and have
higher educational levels tend to experience lower levels of technology anxiety [6].
The impact of technology anxiety on individuals is substantial. For example, MacCallum et al.
highlighted that anxiety related to information and communication technology (ICT) can lead to
a reduced intention to adopt new technologies [7]. Similarly, Donmez-Turan and Kir emphasized
that user anxiety regarding technology plays a pivotal role in technology acceptance, potentially
hindering adoption even when the perceived usefulness of the technology is high [8]. Failing to
embrace new technologies due to anxiety can significantly impact one's quality of life,
particularly in an era where computer usage is pervasive in the business world [9]. This
reluctance can affect one's ability to effectively carry out work tasks, potentially jeopardizing job
performance and opportunities for professional growth.
In response to these concerns, several measures of technology anxiety have emerged over recent
decades. One influential model, the Computer Anxiety Rating Scale, was proposed by Rosen and
Weil during the early years of computers and the Internet [10]. However, as new information
technologies like virtual reality and chatbots have emerged, Redmann and Kotrlik suggest that
measures may need to evolve to address these changing modes of interaction [11]. An example
of an updated measure is the Abbreviated Technology Anxiety Scale developed by Wilson et al.,
designed for quick, low-stakes assessments of technology anxiety using a concise set of
questions, replacing lengthier and outdated measures [12].
Several popular theories explore hesitation and adoption behavior towards technology. Diffusion
of Innovations theory, initially proposed by Everett Rogers [13], and the Technology Acceptance
Model, proposed by Davis [14], are arguably the two most influential. Diffusion of Innovations
theory suggests that human behavior regarding adoption of new innovations/technologies is
relatively predictable and that distinct categories of adopters exist, whereby the amount of
information received and exposure to the innovation presented may influence adoption [15].
The Technology Acceptance Model suggests that several factors are core to the adoption of
technology, including core beliefs and behaviors of the individual, the amount of effort required
to utilize the technology, the perceived usefulness of the technology, and attitudes toward the
technology (including possible fear of the technology) [15; 16]. These theories lend a critical
dimension to the design of the present study, which explores anxiety and fear as factors that may
moderate the intention to utilize AI tools.
The emergence of publicly-available generative AI technology has shifted societal issues and
concerns tremendously. While large language models like ChatGPT present opportunity for
societal growth and improvement of human tasks [17; 18], they also pose major risks,
particularly in the realms of job replacement, copyright infringement, and fabrication of
misinformation [19; 20]. Given the rapid manifestation of this technology and its associated
concerns, it is not surprising that some may develop a fear of the technology [21].
AI anxiety encompasses various concerns related to the lack of personal control over AI
technology, including fears of privacy violations, bias, job displacement, learning anxiety, and
ethical violations [22]. Due to the nature of AI models, involving training on massive datasets
that are often not accessible to the general public, the potential for problematic and biased data to
influence models’ decision-making and outputs is significant [23]. Plagiarism and copyright can
similarly prove problematic, as it is difficult to trace what sources a model may be using to help
generate outputs [24]. This “black box” phenomenon in artificial intelligence naturally leads
many to be reluctant to embrace the technology [25].
A few recent studies have explored factors contributing to the phenomenon of AI anxiety and
fear. Research by Zhan et al. suggests that synchronicity can generally alleviate AI-related fears,
while perceived AI control exacerbates them [26]. Furthermore, Wang et al. and Li and Huang
both highlight job replacement as a primary factor contributing to AI anxiety [27; 28]. Kaya et al.
found that demographic factors such as computer use, knowledge, and personality traits can also
influence the intensity of AI anxiety [29]. Despite these concerns, studies by Cugurullo &
Acheampong and Hopcan et al. indicate that many individuals are still open to adopting AI
technology as it becomes more accessible [30; 31].
Presently, libraries are experiencing pressure to adapt to emerging artificial intelligence
technologies like ChatGPT, and many recent articles have discussed the ways in which libraries
can integrate these tools to better serve patrons [32; 33]. One major way in which libraries have
sought to embrace this technology is in the form of AI literacy, or AI-based information literacy,
instruction [34; 35]. However, a critical aspect of understanding what it takes to be ‘literate’ in
an area like artificial intelligence is an understanding of how people currently perceive the
technology and potential barriers to its use, such as fear and anxiety. This study endeavors to
explore the prevalence of these emotions in relation to AI and considers their significance to the
development of AI literacy skills and AI instruction.
Methods
This study utilized a survey comprising 29 questions, several of them consisting of multiple
parts. These questions collected information about the demographic backgrounds of respondents
as well as their levels of anxiety and/or fear relating to artificial intelligence and the fourth
industrial revolution. The demographic questions included age, gender identity, ethnicity,
geographic location, and educational attainment. The wording of these questions was refined
through a small pilot test, which collected feedback from 14 respondents enrolled in a master’s
or doctoral program in information science at the researchers’ university.
The survey was distributed electronically to information science students and working
information professionals in university-related Facebook groups. A total of 12 different
universities in the United States, India, and Nigeria were represented, with respondents ranging
from information science undergraduate, master's, and doctoral students and academic faculty to
systems analysts, database managers, and academic librarians. It is worth noting that this
distribution method inherently introduced a potential skew toward individuals with higher
educational attainment, meaning that the results may not be generalizable to the general public,
though they should be generally representative of information science students and professionals.
Anxiety levels were assessed using questions adapted from the State-Trait Anxiety Inventory and
the State Fear Inventory. In the initial part of the survey, participants were given the definitions
of "artificial intelligence" and the "fourth industrial revolution” provided in the definitions
section of this paper. Subsequently, in the second section of the study, participants were
presented with either "artificial intelligence" or "fourth industrial revolution" and were asked to
rate their emotional responses using a four-point scale. They were asked to indicate the extent to
which they associated each of sixteen emotions with the respective concept. These emotions
included feeling tense, upset, nervous, worried, confused, furious, terrified, irritated, relaxed,
calm, confident, content, clear-headed, happy, excited, and pleased (with the last eight emotions
being inversely scored – i.e., less “relaxed” equals higher score).
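The paper does not publish its scoring procedure in code, but as a rough illustration, the Python snippet below sketches how the composite anxiety-fear score could be computed, assuming each item is rated 1 to 4 and the eight positive emotions are reverse-scored as five minus the raw rating; the column names and example data are hypothetical.

```python
import pandas as pd

# Hypothetical column names for the 16 emotion ratings (1 = not at all, 4 = very much).
NEGATIVE_ITEMS = ["tense", "upset", "nervous", "worried", "confused", "furious",
                  "terrified", "irritated"]
POSITIVE_ITEMS = ["relaxed", "calm", "confident", "content", "clear_headed",
                  "happy", "excited", "pleased"]

def anxiety_fear_score(responses: pd.DataFrame) -> pd.Series:
    """Compute a composite anxiety-fear score per respondent on the 1-4 scale.

    Positive emotions are reverse-scored (assumed here as 5 minus the raw rating)
    so that a higher composite always indicates more anxiety-fear.
    """
    negative = responses[NEGATIVE_ITEMS]
    positive_reversed = 5 - responses[POSITIVE_ITEMS]
    return pd.concat([negative, positive_reversed], axis=1).mean(axis=1)

# Example: two hypothetical respondents rating "artificial intelligence".
df = pd.DataFrame(
    [[3, 2, 3, 3, 2, 1, 2, 2, 2, 2, 3, 2, 3, 2, 3, 2],
     [1, 1, 1, 2, 2, 1, 1, 1, 4, 4, 3, 3, 3, 4, 4, 3]],
    columns=NEGATIVE_ITEMS + POSITIVE_ITEMS,
)
print(anxiety_fear_score(df))
```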
After collecting the data, Pearson correlation and multiple linear regression analyses were
performed to explore the relationships between the demographic variables and emotional
responses related to AI and the Fourth Industrial Revolution (4IR). Regression models were
constructed for both of the dependent variables of anxiety-fear of AI and the 4IR. K-means
cluster analysis was also used to categorize respondents into distinct groups based on their
demographic characteristics. An analysis of variance (ANOVA) was then conducted to assess
differences among these clusters regarding anxiety and fear concerning AI and the 4IR. These
analyses allowed the researchers to inspect the relationships among variables from multiple
angles, identifying any significant findings.
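To make the correlation step concrete, the sketch below shows how Pearson coefficients of the kind reported in Table 2 could be computed with pandas and SciPy; the variable names and numeric codings are hypothetical assumptions, not the authors' actual analysis code.

```python
import pandas as pd
from scipy import stats

# Hypothetical numeric codings of the demographic variables in the survey data.
DEMOGRAPHICS = ["age", "gender", "ethnicity", "region_of_origin",
                "academic_achievement", "political_leaning",
                "occupation", "urbanicity"]

def correlation_table(survey: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Pearson r and p-value between each demographic variable and one outcome
    (e.g., the AI or 4IR anxiety-fear composite)."""
    rows = []
    for var in DEMOGRAPHICS:
        r, p = stats.pearsonr(survey[var], survey[outcome])
        rows.append({"variable": var, "r": round(r, 3), "p": round(p, 3)})
    return pd.DataFrame(rows)

# Usage, assuming composite scores were computed as in the earlier sketch:
# print(correlation_table(survey, "ai_anxiety_fear"))
# print(correlation_table(survey, "fourth_ir_anxiety_fear"))
```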
Results
A total of 215 valid and complete responses were received from survey participants. These
responses can be broken down into several key demographic categories. Regarding age, 26% of
respondents were aged 18-29, while 57% fell into the 30-49 age group, and 17% were 50 years
old or older. In terms of gender identity, 55% identified as female, and 45% identified as male.
In terms of reported ethnicity, 72% of respondents identified as white, while 28% identified as
belonging to a non-white ethnicity, with Black (10%) and Asian (15%) being the most common.
Geographically, 70% of participants resided in North America, with the remaining 30% living
outside of North America. Educational attainment varied among respondents: 5% had less than a
two-year degree, 9% held a two-year degree, 44% had completed a four-year degree, and 42%
possessed a graduate degree. Approximately 40% of respondents were current information
science students, while 60% were information professionals. Lastly, with regard to residential
location, 13% of respondents lived in rural areas, 18% resided in towns with populations under
10,000, 20% in towns with populations ranging from 10,000 to 50,000, 25% in towns with
populations from 50,001 to 250,000, and 24% in cities with populations exceeding 250,000.
Table 1 displays the mean scores for each emotion for artificial intelligence and fourth industrial
revolution across the sixteen feelings attached to anxiety-fear. The final row provides the mean
score across all 16 feelings on the four-point scale (1.00 to 4.00). For artificial intelligence, the highest mean score was observed for
"Worried" at 2.37, closely followed by "Nervous" at 2.34. The lowest mean scores were found
for "Upset" at 2.07 and "Furious" at 2.12. On the other hand, in the context of the fourth
industrial revolution, the highest mean score was "Worried" at 2.30, while the lowest was
"Furious" at 2.05. For Artificial Intelligence, the mean score on the scale was 2.24, whereas for
the Fourth Industrial Revolution it was just slightly lower at 2.22.
Table 1. Mean Scores for 16 Feelings Associated with Anxiety-Fear

Feeling         Artificial Intelligence   Fourth Industrial Revolution
Tense           2.31                      2.28
Upset           2.07                      2.04
Nervous         2.34                      2.29
Worried         2.37                      2.30
Confused        2.24                      2.25
Furious         2.12                      2.05
Terrified       2.15                      2.08
Irritated       2.10                      2.03
Relaxed         2.31                      2.34
Calm            2.22                      2.26
Confident       2.25                      2.21
Content         2.36                      2.40
Clear-Headed    2.17                      2.18
Happy           2.29                      2.24
Excited         2.15                      2.28
Pleased         2.33                      2.38
Mean            2.24                      2.22
The study began by examining the correlation matrix for the constructs under investigation.
Interestingly, age exhibited a positive correlation with anxiety and fear related to AI, but the
relationship was only significant at a p < 0.05 level, not a p < 0.01 level. This suggests that, to
some extent, older individuals tended to report higher levels of AI-related anxiety. In contrast,
gender and political leaning showed no significant correlation with either AI or 4IR anxiety.
Ethnicity and region of origin, on the other hand, played more substantial roles in shaping these
anxieties. Individuals from certain ethnic backgrounds and regions displayed notably higher
levels of anxiety and fear regarding both AI and the 4IR. The correlations were statistically
significant at p < 0.01, implying that these factors might be pivotal in understanding the nuances
of technological apprehension.
Academic achievement, often seen as a marker of knowledge and exposure to these technologies,
displayed an intriguing inverse relationship. Lower academic achievement was associated with
higher AI and 4IR-related anxiety, suggesting that a lack of familiarity might breed unease.
Occupation (student vs. information professional) and urbanicity, though not significant at a p <
0.05 level, hinted at nuanced influences. While occupation showed a slight positive correlation
(with information professionals rating higher than students) with both types of anxiety,
urbanicity had a negative correlation with AI-related anxiety, implying that urban dwellers might
be somewhat more at ease with the specter of AI.
Table 2. Correlation Coefficients for the AI Anxiety/Fear and 4IR Anxiety/Fear

Variable                AI Anxiety/Fear   4IR Anxiety/Fear
Age                     .154*             .006
Gender                  .124              .158*
Ethnicity               .263**            .220**
Region of Origin        .176**            .242**
Academic Achievement    -.104             -.137
Political Leaning       -.018             -.143
Occupation              .057              .167
Urbanicity              -.053             -.117

* Significant difference at p < .05
** Significant difference at p < .01
Digging deeper, the study conducted regression analyses to unveil the factors that most strongly
influenced anxiety and fear towards AI, as shown in Table 3. Model 1 revealed that age,
ethnicity, academic achievement, and urbanicity were significant predictors. Older individuals, those from
specific ethnic backgrounds, and those with lower academic achievement reported higher AI-related anxiety. The model accounted for 20.8% of the variance in AI anxiety, underscoring the
multifaceted nature of this fear.
Model 2 looked only at the independent variables that were statistically significant from model 1
(age, ethnicity, academic achievement, and urbanicity). While several of these variables were not
individually significant in the correlation matrix, they were found to be significant contributors
to the regression model. The adjusted R-squared value was 19.9%, providing decent explanatory
power, while still reinforcing that there are yet uncharted factors contributing to AI anxieties.
Table 3. Regression Findings for Dependent Variable of Anxiety/Fear of AI

Variable                Model 1                      Model 2
Age                     .017 (.006)**                .016 (.006)**
Gender                  .140 (.126)
Ethnicity               .353 (.160)*                 .341 (.148)*
Region of Origin        .009 (.151)
Academic Achievement    -.320 (.075)**               -.330 (.073)**
Political Leaning       -.076 (.075)
Occupation              .134 (.316)
Urbanicity              -.091 (.050)*                -.102 (.047)*
R-squared Value         .208 (F = 6.78, p < .001)    .199 (F = 13.07, p < .001)

* Significant difference at p < .05
** Significant difference at p < .01
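For readers who wish to reproduce this style of analysis, the following is a minimal sketch of how two nested ordinary least squares models could be fit with statsmodels, mirroring the full and reduced specifications in Table 3; the variable names and codings are hypothetical assumptions rather than the authors' actual code.

```python
import statsmodels.api as sm

# Model 1: all eight demographic predictors; Model 2: only those significant in Model 1.
MODEL_1_VARS = ["age", "gender", "ethnicity", "region_of_origin",
                "academic_achievement", "political_leaning",
                "occupation", "urbanicity"]
MODEL_2_VARS = ["age", "ethnicity", "academic_achievement", "urbanicity"]

def fit_ols(survey, predictors, outcome="ai_anxiety_fear"):
    """Fit an ordinary least squares regression with an intercept."""
    X = sm.add_constant(survey[predictors])
    return sm.OLS(survey[outcome], X).fit()

# model_1 = fit_ols(survey, MODEL_1_VARS)
# model_2 = fit_ols(survey, MODEL_2_VARS)
# print(model_1.summary())      # coefficients, standard errors, R-squared, F statistic
# print(model_2.rsquared_adj)   # adjusted R-squared for the reduced model
```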
Shifting the focus to anxiety and fear surrounding the 4IR, Model 1 identified region of origin and
occupation as significant predictors, while age displayed a positive but non-significant coefficient.
Region of origin (U.S. vs. international) and occupation (student vs. librarianship vs. information
science and technology positions) thus appear to shape susceptibility to 4IR-related anxiety, with
Model 1 explaining 15.1% of the variance in 4IR anxiety-fear. In Model 2, only the variables that
were significant in Model 1 were evaluated. The resulting model explains only 7.5% of the variance
in 4IR anxiety-fear but produces a larger F statistic, indicating a significant influence of these two
variables.
Table 4. Regression Findings for Dependent Variable of Anxiety/Fear of 4IR

Variable                Model 1                      Model 2
Age                     .088 (.054)
Gender                  .122 (.076)
Ethnicity               .152 (.100)
Region of Origin        .241 (.091)**                .288 (.086)**
Academic Achievement    -.032 (.045)
Political Leaning       -.039 (.045)
Occupation              .346 (.191)*                 .397 (.200)*
Urbanicity              -.033 (.030)
R-squared Value         .151 (F = 4.57, p < .001)    .075 (F = 8.89, p < .001)

* Significant difference at p < .05
** Significant difference at p < .01
Using k-means cluster analysis, four groups were identified in the data based on the demographic
variables. Group 1, containing 56 of the 215 responses, included respondents who lived in
medium-density urban or suburban areas and had a high level of educational attainment. Group 2,
containing 82 responses, included respondents in low-density urban or rural areas with a high
level of educational attainment. Group 3, including 68 responses, comprised respondents with high
urbanicity and high educational attainment. Finally, Group 4, with 9 responses, included respondents
with low levels of urbanicity and low levels of educational attainment.
To investigate differences among these groups, we conducted an analysis of variance (ANOVA)
with the dependent variables of AI anxiety-fear and 4IR anxiety-fear. No difference was found for
AI, with F = 1.541, p = .205. However, a statistically significant difference was found for fourth
industrial revolution, with F = 3.36, p = .02. A Tukey post-hoc analysis indicates that the difference
emerges between Groups 1 and 3 (the medium-density urban, high-education group and the
high-urbanicity, high-education group), while no significant difference was found involving the
low-urbanicity or medium-urbanicity groups. This suggests that a shift from a suburban or small
urban community to a large urban community may correspond with a reduction in anxiety-fear of the
fourth industrial revolution.
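The sketch below illustrates how this grouping-and-comparison step could be reproduced with scikit-learn, SciPy, and statsmodels; standardizing the demographic variables before clustering and the specific column names are assumptions for illustration, not details reported by the authors.

```python
import pandas as pd
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical demographic columns used to form the clusters.
CLUSTER_VARS = ["age", "gender", "ethnicity", "region_of_origin",
                "academic_achievement", "occupation", "urbanicity"]

def cluster_and_compare(survey: pd.DataFrame, outcome: str, k: int = 4) -> None:
    """Assign k-means clusters on standardized demographics, then test whether
    the outcome differs across clusters (one-way ANOVA plus Tukey post-hoc)."""
    scaled = StandardScaler().fit_transform(survey[CLUSTER_VARS])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scaled)
    survey = survey.assign(cluster=labels)

    groups = [g[outcome].values for _, g in survey.groupby("cluster")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"{outcome}: F = {f_stat:.2f}, p = {p_value:.3f}")

    if p_value < 0.05:
        print(pairwise_tukeyhsd(survey[outcome], survey["cluster"]))

# cluster_and_compare(survey, "ai_anxiety_fear")         # reported as non-significant
# cluster_and_compare(survey, "fourth_ir_anxiety_fear")  # reported significant; Tukey flags Groups 1 vs 3
```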
Discussion
The findings from our study provide valuable insights into the distinct factors that influence
anxiety and fear concerning Artificial Intelligence (AI) in general and the Fourth Industrial
Revolution (4IR), particularly its implications for employment and jobs. This discussion section
will delve into the reasons behind the differences in predictors for these two domains of
technological apprehension, aligning with the primary research question for this study.
Age as a Differential Predictor
One of the notable distinctions lies in the role of age as a predictor. While age exhibited a
positive relationship with AI-related anxiety in both the correlation and regression analyses,
suggesting that older individuals tend to report higher levels of fear in the face of AI
advancements, this relationship did not hold for 4IR anxiety. A key reason behind this distinction
might be the diverse ways in which AI and the 4IR impact individuals of varying age groups.
AI is perceived as an overarching and potentially disruptive force in various aspects of life,
including personal interactions, decision-making processes, and the workforce. Older
individuals, who may have less exposure to these technologies during their formative years,
could perceive AI as more intimidating compared to young generations, as is often the case with
emerging technologies [36; 37]. In contrast, the 4IR's implications for employment and jobs
might be perceived with heightened anxiety across all age groups. Younger generations, who are
likely to be more deeply integrated into the rapidly changing job landscape, may harbor unique
anxieties related to the 4IR's influence on their career prospects that match the anxieties of older
populations.
Ethnicity and Region of Origin: Universal vs. Contextual Concerns
Ethnicity and region of origin emerged as significant predictors for both AI and 4IR-related
anxieties, albeit with varying degrees of influence. The question arises as to why these
demographic factors play such pivotal roles in shaping technological apprehensions.
For AI-related anxiety, the universal nature of AI's impact might explain the significance of
ethnicity and region of origin. AI has the potential to disrupt various aspects of life across the
globe, but its consequences might be felt differently in different cultural and regional contexts.
Individuals from specific ethnic backgrounds or regions might have distinct cultural or economic
ties to AI-related technologies, influencing their perceptions and anxieties [38]. Additionally,
access to information and education about AI could vary based on one's geographical location or
cultural background, especially in developing countries, further contributing to these differences
[20].
Conversely, when it comes to 4IR anxiety, the focus on employment and jobs in a specific
regional or industrial context could be the key differentiator. The 4IR's impact on employment is
likely to be localized, with certain regions or industries experiencing more pronounced
disruptions than others [39]. This localized impact could explain why region of origin becomes a
significant predictor for 4IR-related anxiety. Specific ethnic backgrounds might also be
associated with particular industries or job sectors that are more vulnerable to 4IR-related
changes.
Academic Achievement and Exposure to Technology
The inverse relationship between academic achievement and AI-related anxiety is another
intriguing finding. Academic achievement is often considered a marker of knowledge and
exposure to technology. This study’s results support this belief by suggesting that individuals
with lower academic achievements report higher levels of anxiety. Those with lower academic
achievements may perceive these technologies as threatening their livelihoods and competencies,
leading to higher anxiety levels. In contrast, individuals with higher academic achievements may
be more likely to see these technologies as opportunities for innovation and adaptation, thereby
mitigating their anxieties. Indeed, a 2023 study by Pew Research found that those professions
requiring the greatest knowledge and exposure to AI tend to have a more positive outlook than
jobs in areas like accommodation and food services [40].
Occupation and Urbanicity: Nuanced Influences
Occupation and urbanicity provide nuanced insights. The positive relationship between
occupation and anxiety in both domains suggests that information professionals might be more
attuned to the potential impacts of AI and the 4IR on their work than information science students.
Similarly, the negative correlation between urbanicity and AI-related anxiety hints at urban
dwellers' potentially greater exposure to and comfort with technological advancements. The findings
of the cluster analysis and ANOVA lend further support to this possibility and suggest that the
distinction between medium and high urbanicity is much more significant than that between low and
medium urbanicity. These
findings align with existing understandings of various urban-rural and occupational divides, such
as in the areas of politics and religion [41].
Further Implications for Theory and Practice
The findings of this study convey several important implications for theory and practice. This
study found several demographic factors that correlate with AI anxiety and fear, which may be
crucial to understanding how these anxieties emerge and manifest in different populations.
However, the findings also suggest that these anxieties are multi-faceted constructs, not easily
explained by any one single or group of variables. We may grow in our understanding of some
factors contributing to AI anxiety and fear, while recognizing that there are yet many others that
we have not identified or explained.
These results further suggest that targeted educational interventions to introduce these
technologies to current and future information professionals may increase the likelihood that they
embrace the technology, especially if they are members of a group that may be more
apprehensive about AI. If AI anxiety were to be reduced among the population of information
professionals, these professionals could, in turn, be fundamental in educating the public about AI
in order to reduce misplaced concerns. Several recent studies suggest paths for information
professionals seeking to embrace these new AI training roles, such as James and Filgo [34] and
Lund [42].
Limitations and Future Research
There are several limitations to note for this study, some of which also provide opportunities for
further research. As noted in the methods section, there is a skew towards participants with
higher educational attainment than the general public, which limits the generalizability of the
study’s findings. Future research could expand the participant sample to include more
individuals with lower educational attainment. Additionally, the geographic distribution was
limited to only three countries, making generalizability outside of these regions problematic as
well. Finally, this survey relied on self-reported data, which can be impacted by several biases,
including social desirability, recall, and inaccuracies in self-assessment towards emotions. Future
studies could include additional measures and questions to assess the expression of AI anxiety
and fear.
Conclusion
This study provides critical insights into the factors influencing anxiety and fear surrounding
Artificial Intelligence (AI) and the Fourth Industrial Revolution (4IR) with a specific emphasis
on their implications for employment and jobs within information science. Age, ethnicity,
geographic origin, and educational attainment all proved to be significant factors to some extent.
The backgrounds of information science students and professionals may help to shape how they
view the prospect of AI, and the fourth industrial revolution as a whole, integrating into their
lives. For individuals and groups that are particularly inclined towards high levels of anxiety or
fear towards AI, greater education and exposure to the technology may prove beneficial.
References
1. Lund B. Large Language Models are a Democratizing Force for Researchers: A Call for
Equity and Inclusivity in Journal Publishers’ AI Policies. InfoScience Trends 2024; 1(1):
4-7.
2. McFarland M. Elon Musk: ‘With artificial intelligence we are summoning the demon.’
The Washington Post. Retrieved from
https://www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/ (2014, accessed 10 July 2024).
3. Agbaji DA, Lund BD and Mannuru NR. Perceptions of the fourth industrial revolution
and artificial intelligence impact on society. arXiv preprint arXiv:2308.02030, 2023.
4. Broos A. Gender and information and communication technologies anxiety: Male self-assurance and female hesitation. Cyberpsychology and Behavior 2005; 8(1): 21-31.
5. Hsieh Y, Tsai W and Hsia Y. A study on technology anxiety among different ages and
gender. International Conference on Human-Computer Interaction 2020; 2020: 241-254.
6. Berner J, Dallora AL, Palm B, Berglund JS and Anderberg P. Five-factor model,
technology enthusiasm and technology anxiety. Digital Health 2023; 9: 1-7.
7. MacCallum K, Jeffrey L and Kinshuk. Comparing the role of ICT literacy and anxiety in
the adoption of mobile learning. Computers in Human Behavior 2014; 39: 8-19.
8. Donmez-Turan A and Kir M. User anxiety as an external variable of technology
acceptance model: A meta-analytic study. Procedia Computer Science 2019; 158: 715-724.
9. Saade RG and Kira D. Mediating the impact of technology usage on perceived ease of
use by anxiety. Computers and Education 2007; 49(4): 1189-1204.
10. Rosen LD and Weil MM. Computer anxiety: A cross-cultural comparison of university
students in ten countries. Computers in Human Behavior 1995; 11(1): 45-64.
11. Redmann DH and Kotrlik JW. A trend study: Technology adoption in the teaching-learning process by secondary business teachers. The Delta Pi Epsilon Journal 2008;
L(2): 77-89.
12. Wilson ML, Huggins-Manley AC, Ritzhaupt AD and Ruggles K. Development of the
abbreviated technology anxiety scale (ATAS). Behavior Research Methods 2023; 55:
185-199.
13. Rogers EM. Diffusion of innovations. New York: Free Press of Glencoe, 1962.
14. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS Quarterly 1989; 13(3): 319-340.
15. Holden RJ and Karsh BT. The technology acceptance model: its past and its future in
health care. Journal of Biomedical Informatics 2010; 43(1): 159-172.
16. Marangunić N and Granić A. Technology acceptance model: a literature review from
1986 to 2013. Universal Access in the Information Society 2015; 14: 81-95.
17. Baidoo-Anu D and Ansah LO. Education in the era of generative artificial intelligence
(AI): Understanding the potential benefits of ChatGPT in promoting teaching and
learning. Journal of AI 2023; 7(1): 52-62.
18. Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepano C, …, Tseng V.
Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using
large language models. PLoS Digital Health 2023; 2(2): e0000198.
19. Lund BD, Wang T, Mannuru NR, Nie B, Shimray S and Wang Z. ChatGPT and a new
academic reality: Artificial Intelligence-written research papers and the ethics of the large
language models in scholarly publishing. Journal of the Association for Information
Science and Technology 2023; 74(5): 570-581.
20. Mannuru NR, Shahriar S, Teel ZA, Wang T, Lund BD, …, and Vaidya P. Artificial
intelligence in developing countries: The impact of generative artificial intelligence (AI)
technologies for development. Information Development 2023; online first.
https://doi.org/10.1177/02666669231200628
21. Peterson CJ. ChatGPT and medicine: Fears, fantasy, and the future of physicians. The
Southwest Respiratory and Critical Care Chronicles 2023; 11(48): 18-30.
22. Johnson DG and Verdicchio M. AI anxiety. Journal of the Association for Information
Science and Technology 2017; 68(9): 2267-2270.
23. Saeidnia HR. Ethical artificial intelligence (AI): confronting bias and discrimination in
the library and information industry. Library Hi Tech News 2023; early view.
https://doi.org/10.1108/LHTN-10-2023-0182
24. Mohammadzadeh Z, Ausloos M and Saeidnia HR. ChatGPT: high-tech plagiarism awaits
academic publishing green light. Non-fungible token (NFT) can be a way out. Library Hi
Tech News 2023; 40(7): 12-14.
25. Castelvecchi D. Can we open the black box of AI?. Nature News 2016; 538(7623): 20.
26. Zhan ES, Molina MD, Rheu M and Peng W. What is there to fear? Understanding multi-dimensional fear of AI from a technological affordance perspective. International
Journal of Human-Computer Interaction 2023; online view.
https://doi.org/10.1080/10447318.2023.2261731
27. Wang Y, Wei C, Lin H, Wang S and Wang Y. What drives students’ AI learning
behavior: A perspective of AI anxiety. Interactive Learning Environments 2022; latest
articles. https://doi.org/10.1080/10494820.2022.2153147
28. Li J and Huang J. Dimensions of artificial intelligence anxiety based on the integrated
fear acquisition theory. Technology in Society 2020; 63: article 101410.
https://doi.org/10.1016/j.techsoc.2020.101410
29. Kaya F, Aydin F, Schepman A, Rodway P, Yetisensoy O and Kaya MD. The roles of
personality traits, AI anxiety, and demographic factors in attitudes toward artificial
intelligence. International Journal of Human-Computer Interaction 2022; 40(2): 497-514. https://doi.org/10.1080/10447318.2022.2151730
30. Cugurullo F and Acheampong RA. Fear of AI: An inquiry into the adoption of
autonomous cars in spite of fear, and a theoretical framework for the study of artificial
intelligence technology acceptance. AI & Society 2023; 39: 1569-1584.
https://doi.org/10.1007/s00146-022-01598-6
31. Hopcan S, Turkmen G and Polat E. Exploring the artificial intelligence anxiety and
machine learning attitudes of teacher candidates. Education and Information Technology
2023; 29: 7281-7301.
32. Houston AB and Corrado EM. Embracing ChatGPT: Implications of emergent language
models for academia and libraries. Technical Services Quarterly 2023; 40(2): 76-91.
33. Lund BD, Khan D and Yuvaraj M. ChatGPT in medical libraries, possibilities and future
directions: An integrative review. Health Information & Libraries Journal 2024; 41(1):
4-15.
34. James AB and Filgo EH. Where does ChatGPT fit into the Framework for Information
Literacy? The possibilities and problems of AI in library instruction. College & Research
Libraries News 2023; 84(9): 334.
35. Zhang L. Exploring generative AI with ChatGPT for possible applications in information literacy instruction. Journal of Electronic
Resources Librarianship 2024; 36(1): 64-69.
36. Sun E and Ye X. Older and fearing new technologies? The relationship between older
adults’ technophobia and subjective age. Aging and Mental Health 2024; 28(4): 569-576.
37. Vaportzis E, Clausen MG and Gow AJ. Older adults perceptions of technology and
barriers to interacting with tablet computers: A focus group study. Frontiers in
Psychology 2017; 8: 1687. https://doi.org/10.3389/fpsyg.2017.01687.
38. Liu Z. Sociological perspectives on artificial intelligence: A typological reading.
Sociology Compass 2021; 15(3): e12851.
39. Dwivedi YK, Hughes L, Ismagilova E, Aarts G, Coombs C, Crick T, ..., and Williams
MD. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges,
opportunities, and agenda for research, practice and policy. International Journal of
Information Management 2021; 57: 101994.
40. Kochhar R. Workers’ views on the risk of AI to their jobs. Pew Research Center.
https://www.pewresearch.org/social-trends/2023/07/26/workers-views-on-the-risk-of-ai-to-their-jobs/ (2023, accessed 14 July 2024).
41. Gimpel JG and Karnes KA. The rural side of the urban-rural gap. PS: Political Science &
Politics 2006; 39(3): 467-472.
42. Lund BD. The prompt engineering librarian. Library Hi Tech News 2023; 40(8): 6-8.