Faculty Student Perceptions Preprint
Perceptions About Generative AI and ChatGPT Use by Faculty and College Students
a Communication Department, Penn State Shenango, PA, United States; b Teaching and Learning with Technology, Penn State, PA, United States
*Tiffany Petricini
Office 310B
Sharon, PA 16146
(724-983-2872)
Abstract
Since November 2022, ChatGPT has created a stir on college campuses. Approaches by colleges and
universities have varied, including updating academic integrity policies or even outright banning the
use of ChatGPT (Clercq, 2023; Mearian, 2023; Schwartz, 2023). As this new technology continues to
evolve and expand, colleges and universities are grappling with the opportunities and challenges of
using such tools. Very little literature exists on student and faculty perceptions of AI use in higher
education, particularly related to generative AI tools. The present study aims to fill this gap and offer
perceptions from both students and faculty from a large research university in the mid-Atlantic.
Survey participants consisted of 286 faculty and 380 students across multiple campuses. Participants
completed an online questionnaire that included open-ended responses, scaled items, and finite
questions. Overall, the reported use of ChatGPT technology is infrequent, though most respondents
feel its use is inevitable in higher education. Faculty and students are uncertain but familiar with
generative AI tools and ChatGPT. Institutions interested in developing policies around using ChatGPT
on campus may benefit from building trust in generative AI, for both faculty and students. Concerns
with academic integrity are prevalent. Faculty and students agree that ChatGPT use violates
institutional policy, though both groups also agree generative AI has value in education.
Perceptions About Artificial Intelligence and ChatGPT Use by Faculty and College Students
Perceptions about technologies are one of the most important considerations for how well-
received and how widely adopted new technologies will become. Typically, when faced with
uncertainty, individuals turn to societal rhetoric to help fulfill their information needs, which shapes
both perceptions and the likelihood of adopting technologies. Publicly, AI is often described in ways
that are either “exaggeratedly optimistic” or “melodramatically pessimistic” (Cave et al., 2018, p. 9).
Because rhetoric and public narratives shape policy and perceptions, research on perceptions of AI is essential.
Despite the term AI having been coined in the 1950s, the evolution and adoption of the technologies in education and the public sphere have been slow (McCarthy et al., 2006). In part, this is due to the AI
Winter, a period in which development could not keep up with funding and expectations (Umbrello,
2021). The idea of AI has existed in cultural narratives for millennia (Petricini, 2024). Not only are narratives surrounding AI part of many ancient cultures, but AI tool use in classrooms is also relatively commonplace. For example, writing tools such as Quillbot (https://quillbot.com/) and Grammarly have been used in classrooms for years.
ChatGPT was introduced to the public in November 2022, and within two months, there were millions of registered users (Hu, 2023). This rapid growth ignited new interest and questions about
AI. In response, individuals involved in all sectors of higher education almost immediately started
asking questions about the implications of ChatGPT and other generative AI technologies (McMurtie,
2023). Often, the same innovative technology that threatens traditional higher education practices
can also revitalize the educational process (Christensen & Eyring, 2011). In this case, the “paradox” is that ChatGPT can potentially destroy current instructional methods while also creating new educational opportunities (Lim et al., 2023). Higher education is slow to change, and innovation can disrupt the system, leaving uncertainty in its wake. The introduction of generative AI tools, like ChatGPT, into higher education is one such disruption.
Despite the use of AI tools in education and cultural narratives of AI, the current rhetoric,
particularly in pop culture, that tends to dichotomize AI as either messianic or apocalyptic fails to paint
a realistic picture of what AI is or its potential value in education. Although some organizations are
putting forward ideas on how to incorporate AI in the classrooms, such as the University Center for
Teaching and Learning at the University of Pittsburgh (2023), others are banning its use in classrooms,
like New York public schools (Rosenblatt, 2023). Whether incorporated or banned, it is essential to
know what perceptions students and faculty have about the use of generative AI tools, like ChatGPT,
in higher education. Results of our online questionnaire show students and faculty believe the use of
AI in college classrooms is inevitable. Its use is surrounded by uncertainty, issues of trust, and unclear
academic integrity expectations. These perceptions will play a huge role in shaping how generative AI technologies are used and misused, both in higher education settings and across the globe more broadly.
Literature Review
AI Perceptions & Understanding
Research on public perceptions of AI reveals varied understandings and levels of optimism. A consumer study showed that participants often compared
AI to human beings (Chen et al., 2021). Other studies investigated perceptions related to AI being used
in hiring decisions (Acikgoz et al., 2020) and healthcare and medicine (Castagno & Khalifa, 2020; Laï et
al., 2020; Stai et al., 2020). A Royal Society (2017) report attempted to visualize the general public’s
perceptions of AI and machine learning. It revealed that even when most individuals seemed familiar
with AI applications in which speech recognition and interaction was possible, like Siri and Alexa, very
few survey participants had ever heard of the term machine learning. The Royal Society (2017) further
reported that the minimal research that did exist showed the public was generally “content” with the idea of robots replacing humans for dangerous or difficult jobs but was not as receptive to robots taking on “caring” roles (p. 84). Fast and Horvitz (2017) analyzed perceptions of AI in New York
Times articles and found that rhetoric after 2009 was “consistently more optimistic than pessimistic” and also found evidence of more specific concerns manifesting, such as a “loss of control of AI, ethical concerns for AI, and the negative impact of AI on work.” In February 2023, Educause found that nearly
half of the respondents reported that they were either familiar or very familiar with ChatGPT
(Muscanell & Robert, 2023). Additionally, fifty-four percent of respondents reported feeling optimistic
about AI’s role in our future (Muscanell & Robert, 2023). In a follow-up poll, Educause found that
optimism slightly increased to 67% of respondents and that most respondents (83%) felt that AI would profoundly change higher education in the next few years.
Godoe and Johansen’s (2012) research argued that optimistic personalities can play a part in whether technologies are perceived as easy to use or useful, which leads to acceptance. Venkatesh and
Bala (2008) showed that pre-implementation interventions can lead to greater acceptance by showing use in context. However, the surge of ChatGPT use has rendered pre-implementation measures somewhat unreasonable. Rather, reactive approaches might include training, relationship building, and support from others and the system (Venkatesh & Bala, 2008). The adoption of generative AI will likely depend on such reactive supports.
Recent literature has explored AI use in higher education, specifically in medical education,
and investigated student perceptions of AI. Sit et al. (2020) examined attitudes of medical students
in the United Kingdom and observed most students recognized the importance of AI in their education
and careers, and also believed that training in AI should be part of the degree-earning process. Out of 484 surveyed participants, only 45 had taken any type of training related to AI, and none had received it as part of their coursework (Sit et al., 2020). In another study with medical students,
researchers found a difference between faculty and students' professional needs; while students
wanted training related to their patient care, faculty wanted training related to their teaching (Wood
et al., 2021). The researchers also found both students and faculty learned about AI in the media
(Wood et al., 2021), indicating a general lack of AI in the curriculum for both parties. Finally, Teng et al. (2022) surveyed the AI perceptions of students studying healthcare across eighteen universities in Canada. The authors noted that roughly 75% of participants had positive outlooks related to AI in general, though students’ attitudes varied depending on specific disciplines. Regardless of a positive or negative outlook, most students felt AI would affect their future careers.
As noted previously, other types of AI have been used in the classroom and education for
decades, like Quillbot and Grammarly. Kurniati and Fithriani (2022) studied Quillbot in graduate
education. Most participants believed it was a helpful tool for learning, though because Quillbot is capable of paraphrasing work, it can raise concerns about the originality of one’s work (Kurniati & Fithriani, 2022). Tutoring systems and virtual assistants, earlier forms of AI, have been used by students and faculty for some time. As early as 2005, McArthur et al. (2005) showed that intelligent tutoring systems (ITS), or “drill-and-practice” AI models that combine factual and procedural skills, can play an important role in the classroom. In another study, librarians considered virtual assistants, such as Siri, to be a use of artificial intelligence, with 77% reporting the use of artificial intelligence in their personal lives (Hervieux & Wheatley, 2021). Interestingly, the number of positive responses from librarians increased only after the AI terminology was defined (Hervieux & Wheatley, 2021), hinting at a low level of understanding about what the tool is versus what the tool does. Without a clear understanding of what AI is and does, its educational potential is difficult to realize (Ferguson et al., 2016). In general, faculty seem to lack the necessary knowledge, access to training, and support needed to integrate AI into their teaching.
A persistent gap between the perception of AI and its use in education seems to exist. The present study seeks to contribute to an emerging evidence base for the integration of AI tools, such as ChatGPT, into higher education teaching and learning practices.
No evidence of perceptions of generative AI related to faculty and students or its use in the classroom was found in the literature. Additionally, no literature addressed students’ perceptions of their faculty’s uses of AI as a classroom tool or faculty’s perceptions of their students’ uses of AI for assignments and assessments. This study aims to fill this gap by answering the following research questions:
RQ 1: What are the general perceptions held by students about AI use in the classroom,
especially ChatGPT?
RQ 2: What are the general attitudes held by faculty about AI use in the classroom, especially
ChatGPT?
Methods
Measures and Procedure
We developed parallel student and faculty questionnaires about generative AI, which were distributed online and allowed for a convenient collection process of a large
amount of data in a short period. Questionnaires contained three sections: Section 1 consisted of 8
Likert scale questions about participants’ familiarity and experiences with generative AI and ChatGPT;
Section 2 contained 14 Likert scale questions concerning the benefits and risks of using generative AI
in higher education; and Section 3 asked for participants’ demographic information with multiple-
choice options. All Likert scale items asked participants to rate their level of agreement with the given statements.
The study was conducted at a public university in the mid-Atlantic region of the eastern United
States. The institution enrolled over 50,000 students and had roughly 7,500 faculty members during
Fall 2022 at multiple campus locations. After IRB approval, we sent recruitment materials to academic
leaders to send to their current faculty and enrolled students. The survey was open during the spring 2023 semester for approximately 16 weeks. Participants answered the survey on a voluntary basis.
Data analysis
Exploratory and statistical analyses were performed using R and RStudio. The Likert scale mean scores are based on a five-point scale (1=strongly disagree, 2=disagree, 3=neutral, 4=agree, 5=strongly agree) level of agreement. We used one-sample Wilcoxon signed-rank tests to decide whether the median response to a question was negative (< 3) or positive (> 3). Kruskal-Wallis tests were conducted for each question to examine whether there are significant differences between students’ and faculty’s responses.
In addition, exploratory factor analysis with varimax rotation was conducted for Section 2 to identify any underlying constructs that contribute to students’ and faculty’s attitudes towards generative AI applications in higher education. The factorability of the data was examined by the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (MSA) (Hopkins, 1998). The optimal number of factors was determined by the eigenvalue threshold of 1.0 and visual inspection of the scree plot (Rummel, 1988). Root Mean Square Error of Approximation (RMSEA) and Tucker-Lewis Index (TLI) were calculated to assess model quality. A factor loading of .45 was used to determine the items associated with a factor. After extracting the factors, we conducted ANOVA tests to examine whether there are significant differences in factor scores across demographic groups.
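The R sketch below illustrates this workflow with the psych package; the simulated data and the demographic variable (gender) are hypothetical placeholders standing in for the Section 2 responses, not our actual data or script.

# Minimal sketch of the exploratory factor analysis workflow described
# above, using the 'psych' package with simulated placeholder data.
library(psych)

set.seed(1)
items <- as.data.frame(matrix(sample(1:5, 300 * 8, replace = TRUE), ncol = 8))
names(items) <- paste0("q2_", 1:8)   # stands in for the Section 2 Likert items
demographics <- data.frame(
  gender = factor(sample(c("man", "woman", "nonbinary"), 300, replace = TRUE))
)

KMO(items)                           # factorability: overall and per-item MSA
scree(items)                         # scree plot; retain factors with eigenvalue > 1

efa <- fa(items, nfactors = 2, rotate = "varimax")
print(efa$loadings, cutoff = 0.45)   # items loading at or beyond the |.45| threshold
efa$RMSEA                            # model-quality indices
efa$TLI

# ANOVA on standardized factor scores across a demographic variable,
# followed by Tukey HSD post-hoc comparisons.
demographics$f1 <- as.numeric(scale(efa$scores[, 1]))
fit <- aov(f1 ~ gender, data = demographics)
summary(fit)
TukeyHSD(fit)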
Results
Respondent Demographics
A total of 380 student responses and 276 faculty responses were collected across multiple campuses. We removed 46 (12.11%) student and 20 (7.25%) faculty responses for incomplete responses to key
items in sections 1 and 2 of the questionnaires. No duplicate responses were found in the data. This
study is largely descriptive and exploratory and did not require a specific sample size.
Overall, more male than female students responded to the survey (53.66%); most students were in the traditional college age range of 18-24 (77.13%) and identified as Caucasian or white (72.9%). Around a third of students were first-year students (32.52%), and 28.35% identified as first-generation students. More STEM students responded (45.27%) than students from any other discipline, and the most common answer to whether respondents had an interest in a career in technology was “maybe” (35.58%). More female than male faculty responded (51.37%); most were between the ages of 36 and 59 (60.39%) and identified as Caucasian (75%). More senior faculty with seven or more years of teaching experience (66.27%), a doctorate (61.96%), and tenure-track appointments (46.67%) responded to the questionnaire. More faculty associated their discipline with STEM than with any other area (see Table 1).
Table 1 Respondents’ disciplines

Discipline            Students (n=296)   Faculty (n=255)
STEM                  45.27%             31.37%
Applied Disciplines   32.77%             19.61%
Humanities            3.72%              18.43%
Social Sciences       10.47%             18.04%
Arts                  .34%               7.45%
Decline to state      7.43%              5.10%
Overall, students and faculty were aware of AI & ChatGPT but reported limited experiences
with the technologies (see Table 2). One-sided Wilcoxon tests examined whether the mean value was greater or less than 3, and the significance level is denoted next to each mean. Specifically, students and
faculty agreed they were familiar with the concepts of AI (µ=4.18 and 4.04, respectively) and familiar with ChatGPT (µ=3.70 and 3.47, respectively), though both groups indicated limited experience using ChatGPT (µ=2.51 and 2.62, respectively). On average, students responded that faculty have not yet addressed the use of AI in their classrooms (µ=2.34), and faculty agreed they have not (µ=2.49), so it is unsurprising that students showed low agreement that faculty have integrated ChatGPT into instruction (µ=1.56) and, similarly, that faculty have not yet integrated the tool (µ=1.51). With fewer experiences using ChatGPT, students and faculty felt neutral about their plans to use the tool in their coursework (µ=2.38) or instruction (µ=2.28), likely because students have not yet received instruction (µ=1.57) and faculty have not yet received training or faculty development (µ=1.62) on the use of ChatGPT. Students and faculty agreed they are interested in receiving instruction (µ=3.49) and training (µ=3.49). Additional Kruskal-Wallis tests showed no significant differences between students’ and faculty’s responses on these items.
Table 2 Survey Items About AI & ChatGPT Familiarity and Average Values
(1: strongly disagree – 5: strongly agree; statistical significance is denoted by *: p<.05, **: p<.01, ***: p<.001)

No.   Question                                                                           Focus    Mean
Q1.1  I am familiar with the concept of artificial intelligence (AI).                    Student  4.18***
                                                                                         Faculty  4.04***
Q1.2  I am familiar with ChatGPT.                                                        Student  3.70***
                                                                                         Faculty  3.47***
Q1.3  I have experience using ChatGPT.                                                   Student  2.51***
                                                                                         Faculty  2.62***
Q1.4  My instructors have addressed the use of AI (especially ChatGPT and other          Student  2.34***
      text and image generation tools) in my courses.
      I have addressed the use of AI generators such as ChatGPT with my students.        Faculty  2.49***
Q1.5  My instructors have integrated AI generators like ChatGPT into their instruction.  Student  1.56***
      I integrate ChatGPT (or similar AI text and image generators) in my instruction.   Faculty  1.51***
Q1.6  I plan to use ChatGPT or similar tools for my coursework in the future.            Student  2.38***
      I plan to integrate ChatGPT (or similar tools) into my instruction in the future.  Faculty  2.28***
Q1.7  I have received instruction about how to use ChatGPT or similar tools.             Student  1.57***
      I have received training or faculty development on the use of ChatGPT or           Faculty  1.62***
      similar tools.
Q1.8  I am interested in receiving instruction (students) or training (faculty) on       Student  3.49***
      the use of ChatGPT or similar tools.                                               Faculty  3.49***
Figure 1 Distributions of responses to survey items Q1.1 to Q1.8
Both students and faculty demonstrated mixed attitudes towards AI applications (Table 3). While students believe that AI has value in education (µ=3.78), they also agree that AI could be dangerous for students (µ=3.40) and that using AI to complete coursework violates academic integrity (µ=3.84). Moreover, students were not comfortable with a syllabus created by AI (µ=2.53) or with having their work graded by AI. Faculty largely agreed with these statements but also showed concern that students misuse generative AI tools (µ=3.64) and that students need to be taught how to use generative AI (µ=4.32). Although faculty agreed that AI is used for good and helpful reasons (µ=3.38) and that generative AI could be beneficial for students (µ=3.40),
they were not ready to use generative AI for teaching (µ=2.68) or research, scholarship, and creative
activities (µ=2.43).
Table 3 Survey questions about using AI for learning and teaching, average values, and associated
underlying factors
(Statistical significance is denoted by *: p<.05, **: p<.01, ***: p<.001)
(†: student factor 1, ‡: student factor 2, §: faculty factor 1, ¶: faculty factor 2)
No.   Question                                                                  Focus      Mean
Q2.1  Artificial intelligence (in the form of text and image generation)       Student †  3.40***
      could be dangerous for students.                                         Faculty §  3.64***
Q2.2  Instructors misuse AI in academic settings.                              Student    2.75*
Items Q2.1 to Q2.9 were paired between students and faculty, and the Kruskal-Wallis tests showed a significant difference for five items. Faculty agreed more that generative AI could be potentially dangerous for students (H=5.12, p=.024). While students did not have strong opinions about faculty’s misuse of AI, 48.05% of faculty agreed that students misuse generative AI to some extent (H=112.12, p<.001). Faculty also agreed more strongly that using AI to complete coursework would be an inevitable trend (Q2.6, H=3.99, p=.046). Attitudes towards restricting students from AI use were mixed among students, and only 27.34% of faculty disagreed with such a restriction (H=3.84, p=.050). Most students agreed that using generative AI to complete coursework violates academic integrity (µ=3.84).
Figure 2 Distributions of responses to survey items Q2.1 to Q2.8
Student factors
We performed factor analysis on Q2.1 to Q2.8 and Q3.1 to Q3.6 for students’ attitudes towards AI & ChatGPT applications. The overall MSA was .81; Q2.2 and Q3.4 were excluded from the analysis due to low MSA (<.6). The number of factors was determined using an eigenvalue threshold of 1 (Hair et al., 2013). Two factors were chosen after varimax rotation, accounting for 29.43% and 10.96% of the overall variance, respectively. The RMSEA (.079) and TLI (.878) indices suggest a moderate model fit. The loadings of each factor (see Table 4) use ±.45 as the cutoff point. We considered Factor 1 as students’ Distrust in AI’s value in education and Factor 2 as Perceived prevalence of AI use in education.
ANOVA tests on the standardized factor scores showed significant differences in Distrust in AI’s value in education for three demographic variables: Gender (F(2,310)=5.363, p=.003, ηp²=.024), Discipline (F=3.258, p=.012, ηp²=.042), and Interest in tech career (F(2,310)=3.295, p=.038, ηp²=.021) (see Figure 3). No significant difference was found on Perceived prevalence of AI use. Figure 3 illustrates the distribution of Factor 1 scores across the demographic variables. The post-hoc Tukey HSD test suggested that male students tend to agree more with statements about AI’s positive value in education.
Figure 3 Distributions of Distrust in AI’s value in education across students’ Gender, Discipline, and
Interest in tech career
Faculty factors
We performed factor analysis on Q2.1 to Q2.8 and Q4.1 to Q4.6 for faculty’s attitudes towards AI & ChatGPT applications. The overall MSA was .84, and Q2.5 was excluded from the analysis. Q2.6, Q2.2, and Q4.6 were further removed due to poor goodness-of-fit in factor analysis. Two factors were eventually chosen, accounting for 32.27% and 27.78% of the total variance, respectively. The RMSEA (.151) and TLI (.830) indices suggest a moderate model fit. The loadings of each factor are shown in Table 5. We considered Factor 1 as faculty’s Perceived dangers in students’ use of AI, with the second factor labeled in Table 5.
ANOVA tests on the standardized factor scores showed significant differences on Perceived dangers in students’ use of AI for Race & ethnicity (F(3,233)=3.485, p=.016, ηp²=.029), Faculty status (F(4,233)=2.431, p=.048, ηp²=.038), and Discipline (F(5,233)=2.490, p=.032, ηp²=.051) (see Figure 4). Post-hoc tests indicated that Asian faculty might perceive fewer dangers in students’ use of AI than their colleagues. No significant differences were found for the second factor across demographic variables.
Figure 4 Distributions of Perceived dangers in students’ use of AI across faculty’s Race & Ethnicity, Faculty status, and Discipline
Discussion
Uncertainty & Familiarity
Results suggest that faculty and students alike were uncertain about using ChatGPT and generative AI tools in higher education. Uncertainty is likely driven by limited experience using the tools and a lack of training or instruction on how to use them in the classroom. Uncertainty can produce responses such as anxiety or bewilderment (Carleton, 2016; Grupe & Nitschke, 2013; Greco & Roger, 2003; Reuman et al., 2015; Rowe, 1994). ChatGPT is already in use, making pre-implementation interventions for acceptance impossible (Venkatesh & Bala, 2008). Faculty and students have some level of familiarity with generative AI tools, though less so with ChatGPT specifically, perhaps because they compare it
to other forms of AI. This general familiarity is not strong enough to connect to the classroom or to fully express the opportunities and challenges of how teaching and learning will change.
ChatGPT is a popular topic in higher education, even amid uncertainty. Recently, Educause found around a quarter of college and university websites included ChatGPT topics related to personal perspectives and opinions or classroom lectures (Veletsianos et al., 2023). For their systematic literature review, Rudolph et al. (2023) included blog posts, newspaper articles, social media posts, and other materials not commonly used in literature reviews because ChatGPT is such a current and pressing topic. People are talking about the tool, which has created a sense of familiarity, but the lack of experience potentially furthers uncertainty about its use. This combination of familiarity coupled with uncertainty makes AI use in the classroom complicated, particularly as these tools continue to evolve.
Previously, the introduction of calculators into classrooms caused uncertainty, but they became so ubiquitous that acceptance and use were inevitable (Roberts et al., 2013). Calculators still require knowledge to translate and contextualize the mathematical output, which meant instructional focus and approach had to change. The same could be said for AI tools; their introduction into the classroom has the capacity to refocus instructional methods and assessment to fully harness the benefits of such output. However, the similarities between calculators and AI tools likely end here, because AI can develop seemingly unique content like that of a human. Nothing like ChatGPT and other generative AI tools has previously existed at this scale, nor for free, so making a comparison is difficult. Unlike calculators, which had to be built and paid for (Roberts et al., 2013), ChatGPT’s base version is free and immediately accessible.
Faculty in this study were uncertain about how to incorporate the technology into instruction, and they were not alone in seeking help for what to do. When people lack information, feelings of uncertainty can follow (Gao & Gudykunst, 1999; Gudykunst & Nishida, 1984; Gudykunst, 1985; Gudykunst & Hammer, 1983; Gudykunst et al., 1996), and to help colleges and universities navigate
the tsunami of AI tools in higher education, governments and nonprofit agencies are releasing guides
around ChatGPT. For example, UNESCO highlights the beneficial and supportive roles AI can have in
the classroom, as well as the ethical considerations, the lack of regulation, potential bias, and accessibility issues (Sabzalieva & Valentini, 2023). These types of recommendations and ideas could be
used to help alleviate some feelings of uncertainty and potentially lead to acceptance.
Trust
When discussing AI use in the classroom, two important threads of trust research intersect: the relationship and trust between faculty and students, and individuals’ trust in AI. A major contributor to academic achievement is the faculty-student relationship, which is built on positive interactions and fostered by trust (Ullah & Wilson, 2007). Accordingly, faculty have two priorities in their trust-building activities: they must have technical competence and the ability to place the student’s interests before their own, if necessary (see Barber, 1993). This duality might mean giving students the guidance to use AI tools in their work because they want and need to learn how to use these tools, even when faculty may not personally want to use them. Trust in generative AI tools, or “TAI,” Trustworthy Artificial Intelligence (Thiebes et al., 2021), is also critical. Successful integration of AI in higher
education will have to rely on trust, particularly because it will allow for a clear understanding of how
the tool is used, dispelling notions of overuse, abuse, or outright rejection (Glikson & Woolley, 2020).
Our data show faculty think students will misuse ChatGPT, which signals a potential distrust between faculty and students. As the complexity of AI continues to grow, mistrust in AI tools will be fueled by the lack of transparency about how the models function. Not unique to AI, other computer-mediated-communication research has wrestled with trust-building for years, as the filtering of cues leads the interpreter to rely on their own subjective experience to fill in the gaps about the other (Liu, 2021; Petricini, 2019). Outside of a few experts, most people will find it difficult to fully grasp AI functionality, so providing both faculty and students with a competency grounded in explainability will be essential to help build trust (Ferrario & Loi, 2022; Jacovi et al., 2021; Liu, 2021). Glikson and Woolley
(2020) highlight that distrust of AI and other machine intelligences influences both cognitive trust, created by the abstract and opaque nature of AI, and emotional trust, stemming in part from the human-like forms that AI can take. The more information users have about AI models, their output generation, and their decision-making processes, the less uncertainty they will harbor, and the more likely they are to perceive those models as trustworthy (Ashoori & Weisz, 2019). For students especially, trust in AI will no doubt be shaped by how faculty frame AI use in their fields and courses. Reducing uncertainty will therefore be central to building trust.
While multiple and useful applications of AI in the classroom exist, AI language models most definitely have the potential for bias (Atlas, 2023), which may dissolve trust in the tool. As faculty introduce the use of AI into their classrooms, they carry an additional responsibility for controlling and checking for bias, while instructing students on the potential reproduction of harmful practices. In general, trust and mistrust in AI are affected by multiple factors, which may require unique and separate interventions.
Relational trust is the most important trust in the classroom and directly affects student achievement (Hoy, 2002; Ullah & Wilson, 2007). Over 50% of students in our study would not feel confident if the instructor used AI to create a syllabus, and the number of students who did not feel confident having AI grade their work was even higher. Faculty must be transparent about their use of AI and can be models of ethical conduct for students. Likewise, students who are transparent about their use of AI for coursework can generate faculty trust. Learning environments that support training and relationship building through transparent use (see Venkatesh & Bala, 2008) can create a positive climate of trust.
Academic Integrity
Based on the results from this survey, it is clear students and faculty are uncertain about AI technologies but believe their use in higher education is inevitable. This inevitability means higher education must come to terms with what a tool like this means for academic integrity and instruction, mainly cheating and plagiarism. It is likely the focus of punishable policies will be placed on students’
use for cheating, plagiarizing, or otherwise practicing “academic misconduct” or “dishonesty” (Eaton, 2021, pp. 15-16). However, academic integrity can take on several dimensions beyond just students’
academic behavior, so institutions should be careful about just creating policies out of fear to prevent
students’ use.
The Department of Education (ED) calls for transparency of use, including functional knowledge of the tool and disclosure, while still allowing for human interventions to counter
algorithmic bias or discrimination (U.S. Department of Education, 2023). Therefore, faculty will need
to know how the technology works and always retain control of the content. Faculty will also need to
check for accuracy, especially when using the tools for grading and assessment because of “cognitive
bias” or other social identity discriminatory framing (Sabzalieva & Valentini, 2023, p. 11). AI tools
cannot make these decisions or be held accountable (Foltynek et al., 2023). Instead, that responsibility resides with the instructor. Additionally, faculty will need to understand that students’ experiences are diverse, and faculty can inadvertently create unfair activities or assessments.
Based on our data, students want instruction on AI use, and this interest can help guide decision-makers on implementation and use. Xu and Babaian (2021) note that when students’
opinions about AI tools are unknown, it is harder to design suitable curriculum and outcomes to meet
students’ needs. The students’ interest in instruction should not be taken lightly, because it can serve
as a guide for how institutions might adapt and introduce AI beyond punitive policies. Eaton (2021)
explains that unwanted academic behavior often includes moral and policy issues, on top of issues of teaching and learning (p. 15). In part, the curriculum will need to consider alternative assessments so
that the desire to cheat using ChatGPT could decrease. AI can be a powerful tool, especially to help
generate ideas.
Faculty in this study felt students misused AI tools to complete coursework, going so far as to agree that the use of ChatGPT should be restricted. Caution is warranted around restricting use, as some uses may address accessibility (see Morris, 2020), and restrictions may limit future employment
opportunities and expectations (see Dawson, 2020). The use of these AI technologies is seemingly
boundless, though thoughtful reflection for how faculty introduce the tools in their classrooms might
help curtail certain uncertainties and concerns about how students intend to use the tools. Faculty concerned about misuse or academic integrity should consider introducing these tools only so far as they impact their classroom activities, by altering assignments and having students cite the use like other sources. Fear of new technologies in higher education can create hysteria and panic that leads to unnecessary policy making. Eaton (2021) warns
that policies around academic behaviors are often reactive to undesirable behaviors and seek to
punish or limit that bad behavior. An unintended or dangerous outcome to over-policing the use of AI
might be the psychological impact on students being accused of academic misconduct. Instead,
educators and policymakers need to think about how AI tools can be used, teaching around and about
the use, by redesigning assessments and objectives. Rudolph et al. (2023) provide some examples of how to combat the use of ChatGPT in assignments by having students do things the AI tool cannot do and by incorporating the tools into assignments. Short of returning to handwritten exams (see Cassidy, 2023), faculty and administrators will need to explore ways to incorporate these new tools.
Conclusion
This study provides some key elements to understand student and faculty perceptions of generative AI. Possibly the most important and pressing takeaway is that the time is ripe: now is the time that higher education can have the biggest impact in molding and shaping perceptions, use and misuse, and ethical directions. AI tools, like ChatGPT, are still novel or toy-like, with low awareness in the general population at this point (Moor, 2005), making it an opportune time to implement programs to build trust and literacy. As early as 2019, calls were made to consider the regulation of AI from both an ethical and a legal standpoint (D’Acquisto, 2020), yet the impending surge was
unanticipated. Until the release of ChatGPT, even most of the academic community was unaware of the capabilities, and now, even while writing this manuscript, it is hard to keep up with the availability, ideas, and integration into the classroom (see Coffey, 2023). Data from this study show students and faculty are interested in learning more about how to use these tools successfully and earnestly in the classroom. Presently, faculty can drive how ChatGPT and other generative AI tools are used in the classroom, within their individual disciplines and higher education in general.
Our data demonstrate the importance of faculty competency with generative AI, as very few faculty have incorporated ChatGPT into their instruction. Given that other generative AI tools have been used in classrooms for years, uncertainty, trust, and issues with academic integrity around ChatGPT have most certainly played a part in decision-making. However, if higher education is going to help meet the demands of a well-trained and educated workforce, faculty cannot opt out of using, discussing, and teaching AI within their classes. Students and faculty will need literacy and training in these tools.
Still very new, generative AI’s direction in the classroom is unclear. Policies that prohibit use could lead to even more use (see Lim et al., 2023), and to even more unsavory practices. Disregard for AI tools will not make them disappear, just as their advancement and accessibility will not curb use and excitement. Building familiarity and trust, and using studies like this one to create training opportunities, is a path to hopefully ensure that the integration of generative AI in the classroom is successful.
The role that administrators and faculty play will shape not only the use of generative AI tools in higher education but will affect the entire globe. Students will leave universities and colleges and enter the workforce, and the training that has been foundational for their practice and use of generative AI technologies will shape the future of economies, societies, and institutions. Embracing our role in higher education to understand and adapt to perceptions is, hopefully, the first step.
Supplemental materials:
Faculty Survey
Student Survey
Acknowledgements
The authors wish to express gratitude to their peer Dr. Joneen Schuster of Penn State Shenango. Her
valuable input, advice, and feedback were integral to the design of this study.
References
Acikgoz, Y., Davison, K. H., Compagnone, M., & Laske, M. (2020). Justice perceptions of artificial intelligence in selection. International Journal of Selection and Assessment. https://doi.org/10.1111/ijsa.12306
Ashoori, M., & Weisz, J. D. (2019). In AI we trust? Factors that influence trustworthiness of AI-infused decision-making processes. arXiv preprint.
Atlas, S. (2023). ChatGPT for higher education and professional development: A guide to conversational AI. Retrieved from https://digitalcommons.uri.edu/cgi/viewcontent.cgi?article=1547&context=cba_facpubs
Berger, C. R., & Bradac, J. J. (1982). Language and social knowledge: Uncertainty in interpersonal relations. Edward Arnold.
Berger, C. R., & Calabrese, R. J. (1974). Some explorations in initial interaction and beyond: Toward a developmental theory of interpersonal communication. Human Communication Research, 1(2), 99-112.
Carleton, R. N. (2016). Fear of the unknown: One fear to rule them all? Journal of Anxiety Disorders, 41, 5-21.
Cassidy, C. (2023, January 9). Australian universities to return to “pen and paper” exams after students caught using AI to write essays. The Guardian. https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays
Castagno, S., & Khalifa, M. (2020). Perceptions of artificial intelligence among healthcare staff: A qualitative survey study. Frontiers in Artificial Intelligence, 3, 578983. https://doi.org/10.3389/frai.2020.578983
Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., & Taylor, L. (2018). Portrayals and perceptions of AI and why they matter. The Royal Society. https://www.repository.cam.ac.uk/bitstream/handle/1810/287193/EMBARGO%20-%20web%20version.pdf?sequence=1
Chen, H., Chan-Olmsted, S., Kim, J., & Sanabria, I. M. (2021). Consumers’ perception on artificial intelligence applications in marketing communication. Qualitative Market Research.
Clercq, G. D. (2023). Top French university bans use of ChatGPT to prevent plagiarism. Reuters. https://www.reuters.com/technology/top-french-university-bans-use-chatgpt-prevent-plagiarism-2023-01-27/
Coffey, L. (2023, July 31). Professors craft courses on ChatGPT with ChatGPT. Inside Higher Ed. https://www.insidehighered.com
Eaton, S. E. (2021). Plagiarism in higher education: Tackling tough topics in academic integrity. ABC-
CLIO.
Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., ... & Vuorikari, R. (2016).
Research evidence on the use of learning analytics: Implications for education policy.
Ferrario, A., & Loi, M. (2022, June). How explainability contributes to trust in AI. In 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
Foltynek, T., Bjelobaba, S., Glendinning, I., Khan, Z., Santos, R., Pavletic, P., & Kravjar, J. (2023). ENAI recommendations on the ethical use of artificial intelligence in education. International Journal for Educational Integrity, 19. https://edintegrity.biomedcentral.com/articles/10.1007/s40979-023-00133-4
Godoe, P., & Johansen, T. (2012). Understanding adoption of new technologies: Technology readiness and technology acceptance as an integrated concept. Journal of European Psychology Students, 3(1), 38-52.
Greco, V., & Roger, D. (2003). Uncertainty, stress, and health. Personality and Individual Differences, 34(6), 1057-1068.
Grupe, D. W., & Nitschke, J. B. (2013). Uncertainty and anticipation in anxiety: An integrated neurobiological and psychological perspective. Nature Reviews Neuroscience, 14(7), 488-501.
Hervieux, S., & Wheatley, A. (2021). Perceptions of artificial intelligence: A survey of academic librarians in Canada and the United States. The Journal of Academic Librarianship, 47(1), 102270. https://doi.org/10.1016/j.acalib.2020.102270
Hopkins, K. D. (1998). Educational and psychological measurement and evaluation. Allyn & Bacon.
Hoy, W. K. (2002). Faculty trust: A key to student achievement. Journal of School Public Relations, 23(2), 88-103.
Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base - analyst note. Reuters.
https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-
analyst-note-2023-02-01/
Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
Johnson, L., Becker, S. A., Cummins, M., Estrada, V., Freeman, A., & Hall, C. (2016). NMC horizon
report: 2016 higher education edition (pp. 1-50). The New Media Consortium,
https://www.learntechlib.org/p/171478/.
Kurniati, E. Y., & Fithriani, R. (2022). Post-graduate students’ perceptions of Quillbot utilization in English academic writing class. Journal of English Language Teaching and Linguistics, 7(3), 437-
Laï, M. C., Brian, M., & Mamzer, M. F. (2020). Perceptions of artificial intelligence in healthcare: Findings from a qualitative survey study among actors in France. Journal of Translational Medicine, 18, 14.
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790. https://doi.org/10.1016/j.ijme.2023.100790
McArthur, D., Lewis, M., & Bishary, M. (2005). The roles of artificial intelligence in education: Current progress and future prospects. Journal of Educational Technology, 1(4).
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12.
McMurtie, B. (2023). ChatGPT is everywhere. The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/chatgpt-is-already-upending-campus-practices-colleges-are-rushing-to-respond
Mearian, L. (2023). Schools look to ban ChatGPT, students use it anyway. Computerworld. Retrieved from https://www.computerworld.com/article/3694195/schools-look-to-ban-chatgpt-students-use-it-anyway.html
Momani, A. M., & Jamous, M. (2017). The evolution of technology acceptance theories. International Journal of Contemporary Computer Research, 1(1), 51-58.
Moor, J. H. (2005). Why we need better ethics for emerging technologies. Ethics and Information Technology, 7(3), 111-119.
Muscanell, N., & Robert, J. (2023). EDUCAUSE QuickPoll results: Did ChatGPT write this report?
EDUCAUSE. https://er.educause.edu/articles/2023/2/educause-quickpoll-results-did-
chatgpt-write-this-report
Petricini, T. (2024). ChatGPT: Everything to everyone all at once. Forthcoming in ETC: A Review of General Semantics.
Public attitudes to science 2014. (2014). Ipsos MORI. https://www.ipsos-mori.com/researcharchive/3357/Public-Attitudes-to-Science-2014.aspx
Reuman, L., Jacoby, R. J., Fabricant, L. E., Herring, B., & Abramowitz, J. S. (2015). Uncertainty as an anxiety cue at high and low levels of threat. Journal of Behavior Therapy and Experimental Psychiatry.
Roberts, D., Leung, A., & Lins, A. (2013). From the slate to the web: Technology in mathematics curriculum. In M. A. Clements, A. J. Bishop, C. Keitel, J. Kilpatrick, & F. K. S. Leung (Eds.), Third international handbook of mathematics education. Springer.
Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582-599. https://doi.org/10.1007/s40593-016-0110-3
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1). https://journals.sfu.ca/jalt/index.php/jalt/article/view/689/539
Sabzalieva, E., & Valentini, A. (2023). ChatGPT and Artificial Intelligence in higher education.
UNESCO. https://www.iesalc.unesco.org/
Schwartz, E. H. (2023). ChatGPT is banned by these colleges and universities. Voicebot.ai. Retrieved from https://voicebot.ai/2023/02/09/chatgpt-is-banned-by-these-colleges-and-universities/
Sit, C., Srinivasan, R., Amlani, A., Muthuswamy, K., Azam, A., Monzon, L., & Poon, D. S. (2020). Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: A multicentre survey. Insights into Imaging, 11, 14. https://doi.org/10.1186/s13244-019-0830-7
Stai, B., Heller, N., McSweeney, S., Rickman, J., Blake, P., Vasdev, R., Edgerton, Z., Tejpaul, R., Peterson, M., Rosenberg, J., Kalapara, A., Regmi, S., Papanikolopoulos, N., & Weight, C. (2020). Public perceptions of artificial intelligence and robotics in medicine. Journal of Endourology, 34(10).
Teng, M., Singla, R., Yau, O., Lamoureux, D., Gupta, A., Hu, Z., Kell, D., MacMillan, K., Malik, S., Mozzoli, V., Teng, Y., Laricheva, M., Jarus, T., & Field, T. S. (2022). Health care students’ perspectives on artificial intelligence: Countrywide survey in Canada. JMIR Medical Education, 8(1), e33390.
The Royal Society. (2017). Machine learning: The power and promise of computers that learn by example. https://royalsociety.org/-/media/policy/projects/machine-learning/publications/machine-learning-report.pdf
Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic Markets, 31,
447-464.
U.S. Department of Education [ED]. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. Office of Educational Technology, Washington, D.C. https://tech.ed.gov
Ullah, H., & Wilson, M. A. (2007). Students’ academic success and its association to student involvement with learning and relationships with faculty and peers. College Student Journal, 41(4), 1192-1203.
Umbrello, S. (2021). AI winter. In M. Klein & P. Frana (Eds.), Encyclopedia of artificial intelligence: The past, present, and future of AI. ABC-CLIO.
Veletsianos, G., Kimmons, R., & Bondah, F. (2023, March 15). ChatGPT and higher education: Initial prevalence and areas of interest. EDUCAUSE Review. https://er.educause.edu/articles/2023/3/chatgpt-and-higher-education-initial-prevalence-and-areas-of-interest
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273-315.
Wood, E. A., Ange, B. L., & Miller, D. D. (2021). Are we ready to integrate artificial intelligence literacy into medical school curriculum: Students and faculty survey. Journal of Medical Education and Curricular Development, 8.
Xu, J. J., & Babaian, T. (2021). Artificial intelligence in business curriculum: The pedagogy and learning outcomes. The International Journal of Management Education, 19(3), 100550. https://doi.org/10.1016/j.ijme.2021.100550