International Educational Review

https://int-er.com

Student’s Perception of Chat GPT: A Technology Acceptance Model Study

Halit Yilmaz, Samat Maxutov, Azatzhan Baitekov & Nuri Balta

Suleyman Demirel University, Kazakhstan

Abstract: This study aimed to develop and validate an instrument to explore university students' perception of Chat GPT, while
also investigating potential variations across gender, grade level, major, and prior experience with using Chat GPT. Employing a
quantitative research approach, the study involved 239 students enrolled in the Science and Mathematics Education Program at a
private university in Almaty, Kazakhstan. The results indicated an overall positive perception of Chat GPT among the
participants. Notably, the only significant disparity in perception between male and female students was observed in the
dimension of "Perceived ease of use." Moreover, no significant differences were found across any survey dimensions when
comparing students from different grade levels (first to fourth grade). However, statistically significant differences emerged in
the dimension of "Perceived social influence" between Mathematics majors and Chemistry-Biology majors, as well as between
Chemistry-Biology majors and Physics-Informatics majors. Additionally, except for the dimension of "Perceived social
influence," statistically significant differences were observed among groups based on their prior experience using artificial
intelligence (AI) or chatbots. These findings provide valuable insights into university students' perceptions of Chat GPT and
highlight the influence of factors such as gender, major, and prior experience on their perceptions. The implications of these
findings can inform the design and implementation of educational technologies involving AI-based chat systems in higher
education settings.

Keywords: Artificial intelligence in Education; Chat GPT perception; Prior experience with AI; Technology acceptance model.
DOI: https://doi.org/10.58693/ier.114

Introduction

The Technology Acceptance Model (TAM) is a widely recognized theoretical framework that seeks to understand
individuals' acceptance and usage of new technologies (Davis, 1989). The model suggests that individuals' attitudes
towards using information technology are influenced by two primary factors: Perceived Usefulness (PU) and
Perceived Ease of Use (PEOU). Perceived Ease of Use refers to an individual's perception of the level of difficulty or
simplicity associated with using the technology, based on the cognitive resources required. On the other hand,
Perceived Usefulness can be understood as an individual's belief in the technology's ability to enhance their
productivity in performing a specific activity.

The findings from existing studies on the Technology Acceptance Model (TAM) reveal that perceived ease of use and
perceived usefulness serve as crucial antecedent factors influencing the acceptance of learning technologies, with
perceived usefulness being the primary determinant for adoption. Moreover, it has been observed that learners'
perceptions of usefulness and ease of use positively impact their satisfaction with the learning process, which in turn
contributes to a favorable intention to continue using the technology (Granić & Marangunić, 2019). Additionally,
according to the TAM, it is suggested that the influence of external factors on individuals' intention to use technology
will be mediated by their perceptions of the technology's ease of use (PEOU) and usefulness (PU) (Venkatesh &
Davis, 1996).

Numerous information systems researchers have explored the applications of the Technology Acceptance Model
(TAM) in diverse contexts. Additionally, several studies have been conducted by researchers to replicate the original
TAM study, aiming to assess its reliability and validity. Between 1989 and 2001, approximately 100 studies related
to TAM were published in journals, proceedings, and technical reports. These studies extensively tested TAM using
diverse sample sizes and user groups within or across organizations. They employed different statistical methods and
compared TAM with competing models, as noted by Gefen (2000).

TAM's applicability extended to a wide range of end-user technologies, including email (Adams, Nelson & Todd,
1992; Davis, 1989), word processors (Adams, Nelson & Todd, 1992; Davis, Bagozzi & Warshaw, 1989), groupware
(Taylor & Todd, 1995), spreadsheets (Agarwal, Sambamurthy & Stair, 2000; Mathieson, 1991), and the World Wide
Web (Lederer, Maupin, Sena & Zhuang, 2000). Some studies expanded TAM by incorporating additional predictors
such as gender, culture, experience, and self-efficacy.

Overall, researchers consistently argue that TAM is a valid, concise, and robust model (Venkatesh & Davis, 2000),
supported by its wide application and the diverse empirical evidence accumulated over the years.

Attitudes toward artificial intelligence (AI) have garnered significant attention in recent years, as AI technologies
continue to permeate various aspects of our lives. According to Fast and Horvitz (2016), an analysis of the long-term
trends in public perception of artificial intelligence (AI) reveals a notable increase in discussions surrounding AI since
2009. Furthermore, their research suggests that these discussions have consistently displayed a greater degree of
optimism than pessimism. This finding highlights a positive overall sentiment towards AI among the general public.
Understanding how individuals perceive and interact with AI is crucial for the successful adoption and integration of
these technologies. Gender, as a social construct, has been recognized as a potential factor influencing attitudes toward
AI. Gender-related differences in experiences, beliefs, and societal expectations may shape individuals' perceptions
and interactions with AI systems. Therefore, exploring the attitudes toward AI across gender groups can provide
valuable insights into the complex interplay between gender and technology acceptance.

In their study on the perception of Artificial Intelligence in Spain, Lozano, Molina, and Gijón (2021) found that men
exhibited a higher interest in technological developments than women. Their findings showed that the probability of
men having a positive or very positive attitude towards AI and robots was
1.481 times higher than that of women. In a study conducted by Mozilla (2023), it was found that men (41%) expressed
a higher inclination compared to women (31%) in desiring artificial intelligence (AI) to surpass their own intelligence.
This gender disparity suggests differing attitudes towards AI capabilities, with men showing a greater preference for
AI systems that exhibit superior intelligence.

According to Araujo et al. (2020), gender significantly influenced perceptions of usefulness, with females perceiving
automated decision-making (ADM) by AI as significantly less useful than males. Gender also exhibited a marginal
association with perceptions of risk in relation to ADM.

In the study conducted by Yeh et al. (2021), the authors investigated the public's perception of artificial intelligence
(AI) and its relationship with the Sustainable Development Goals (SDGs). Within this study, a significant gender
difference was identified concerning confidence levels in AI knowledge. The findings, supported by a t-test analysis,
indicated that male respondents exhibited higher confidence compared to females (t = −6.294, p < 0.001). These results
highlight the importance of considering gender dynamics in understanding public attitudes and perceptions towards
AI, particularly regarding confidence in AI-related knowledge.

As AI technology increasingly permeates classrooms, it is essential to examine the attitudes and perceptions of
students at different grade levels toward this emerging technology. Understanding how students across grade levels
perceive AI can provide valuable insights into their readiness to embrace its integration in educational settings and
can help inform effective strategies for its implementation.

In a study conducted by Demir and Guraksin (2022), the perceptions of secondary school students regarding artificial
intelligence (AI) were explored through the use of metaphors. The study aimed to determine the connotations
associated with AI among participants and whether these connotations leaned towards positive or negative views. The
findings revealed that the students had mixed perceptions of AI, with both positive and negative connotations being
attributed to the concept. Metaphors used by the participants highlighted associations between AI and humans,
technology, and the brain. Interestingly, the majority of the metaphors employed by the students were positive in
nature, indicating a generally favorable attitude towards AI.

Jeffrey's (2020) study aimed to explore college students' perceptions of AI based on their level of understanding,
beliefs in its benefits, and concerns about its future development. The findings revealed conflicting beliefs among
participants, with those perceiving personal benefits from AI also expressing concerns about its rapid advancement
and its impact on human jobs. Notably, participants who possessed greater knowledge and understanding of AI were
more uncertain about its outcomes. The study highlighted the significant influence of participants' level of information
on their perception of AI, demonstrating a tension between their beliefs in AI's benefits and their concerns about
potential negative consequences. Moreover, the research indicated that AI was generally viewed as a positive
technological advancement, but caution was advised due to potential negative outcomes. The study aligns with
existing literature and emphasizes the tension between the inevitability of AI development and its actual impact on
humanity, with implications for individuals and society. As AI continues to advance, this tension is expected to
escalate due to increasing efforts by businesses and governments to gain a competitive advantage.

Drawing on Atwell (1999) and Parker (2007), Gallacher, Thompson, and Howarth (2018) examined students' perspectives
on AI in the L2 classroom. The findings revealed that students perceived conversing with Cleverbot, an AI chatbot,
as beneficial for their English language study due to the independence it afforded.
However, the reported merits of AI partners were primarily associated with the speech-to-text function of smartphones
rather than the AI itself. This suggests that existing smartphone functions might offer similar benefits as certain AI
iterations, without the need to learn a new software platform, reducing potential confusion. Despite these perceived
benefits, students did not view Cleverbot as a viable substitute for communication with human beings. Its lack of
emotion, visible cues, and inability to confirm understanding were reported as significant drawbacks in terms of
interaction. Consequently, the study suggests that educators should exercise skepticism when incorporating current
AI technology in the L2 classroom, as the frustrations arising from interactions with AI might outweigh the benefits
within an English curriculum. The authors recommend future research to develop a quantifiable survey using the
categories discussed, enabling more consistent analysis across various AI chatbot platforms. This approach would
provide deeper insights into students' perceptions of AI and facilitate more informed decision-making in integrating
AI technology in language learning settings.

Liu et al. (2022) conducted a study to examine the effects of an AI chatbot on children's interest in reading. The
research focused on analyzing the interaction between children and the chatbot and its impact on their reading
engagement. The findings revealed that the AI chatbot had a positive influence on children's reading experiences,
leading to increased interest and engagement in reading activities. This study contributes to the understanding of how
AI technology can enhance children's reading motivation and enjoyment.

Based on the findings of Yeh et al. (2021), significant differences in the perception of artificial intelligence (AI) were
observed among different college major groups. The study revealed that business majors perceived AI as more virtuous
compared to humanities majors, which aligns with previous research. Furthermore, engineering majors expressed
greater concern about the possibility of human lives being monitored by AI compared to business majors. These results
highlight the influence of college major on individuals' perceptions of AI and suggest the importance of considering
disciplinary backgrounds when examining public attitudes towards AI.

In a study conducted by Firat (2023), the perceptions of scholars and students regarding the integration of ChatGPT
and AI into education were examined. Through thematic content analysis of comments, nine main themes emerged,
highlighting the diverse opinions and concerns of participants. The findings indicate that there is a consensus among
scholars and students that AI will have a significant impact on traditional learning methods, shifting the focus towards
skills and competencies and redefining the roles of educational institutions. Despite recognizing the challenges and
potential issues, participants expressed optimism for the future of AI in education.

The research by Iqbal, Nayab, Ahmed, and Azhar (2023) reveals that teachers generally hold a negative attitude
towards ChatGPT. They express concerns about its potential for facilitating cheating, promoting student laziness, and
lacking value in the learning process. However, some teachers recognize specific benefits, such as automated feedback
and increased student engagement. Overall, the study emphasizes the need for addressing teachers' concerns and
providing support when integrating ChatGPT as an educational tool. Further exploration is warranted to understand
the potential benefits and challenges of AI technologies like ChatGPT in education.

The literature review highlights that the Technology Acceptance Model (TAM) is a widely recognized framework for
understanding technology acceptance. Perceived usefulness and ease of use are key factors influencing the acceptance
of learning technologies. Attitudes towards AI show a generally positive sentiment, although gender-related
differences exist. Students' perceptions of AI vary, with mixed connotations and considerations of its limitations. AI
chatbots have been found to positively impact children's reading engagement. College majors and disciplinary
backgrounds influence perceptions of AI. Teachers generally have negative attitudes towards ChatGPT, citing
concerns about cheating and lack of value, but recognize some benefits. Further research is needed to address teacher
concerns and explore AI's potential in education.

The aim of this study was twofold: First, develop and validate an instrument to explore university students’ perception
of Chat GPT. Second, identify students’ perception across gender, grade level, major and prior experience with using
Chat GPT. The following research questions guided this study:
• Is the developed survey considered valid?
• Are there differences in participants' perception of Chat GPT across gender groups?
• Are there differences in participants' perception of Chat GPT across grade level groups?
• Are there differences in participants' perception of Chat GPT across major groups?
• Do participants' perception of Chat GPT differ based on their prior experience using artificial intelligence
(AI) or chatbots?

Methods
In this study, a quantitative research approach was employed to ensure a thorough analysis of the gathered data. It is
a survey study providing a better understanding of students' attitudes towards Chat GPT, an artificial intelligence-
based chatbot. The survey we adapted is based on the "Technology Acceptance Model" (TAM) survey, which is a
widely used model for evaluating users' attitudes toward new technologies. The original TAM survey was developed
by Fred Davis in the 1980s and has since been adapted and modified by many researchers in various fields. The TAM
survey typically includes items related to perceived usefulness, perceived ease of use, attitude towards using the
technology, and intention to use the technology.

This model is adapted to fit specifically with Chat GPT by adding items related to perceived credibility, perceived
social influence, and perceived privacy and security. However, the basic structure and items of the survey are still
rooted in the TAM framework. For each item in the survey, participants are typically asked to rate their agreement
with a statement on a Likert-type scale. The Likert-type scale is a commonly used rating scale in surveys, and it
typically ranges from 1 to 5 or 7, with higher numbers indicating greater agreement with the statement.

Sample

When conducting the survey research, the researchers used a convenience sample, intended to be representative of the
population of science and mathematics education students at a private university in Almaty, Kazakhstan. This
convenience sample was readily available and provided helpful information for answering the research questions
(Creswell, 2002). The participants comprised 235 undergraduates, 2 graduate students, and 2 Ph.D. students enrolled in
the Science and Mathematics Education Program. They came from the mathematics (42), physics-informatics (100), and
chemistry-biology (77) double programs; 79 were male and 175 were female, aged between 17 and 23 years.

Table 1
Demographic Information

Variable                Group                                   N = 239
Age                     17                                      19
                        18                                      56
                        19                                      71
                        20                                      48
                        21                                      19
                        22                                      4
                        23                                      2
Grades                  Freshman (1st grade)                    56
                        Sophomore (2nd grade)                   70
                        Junior (3rd grade)                      58
                        Senior (4th grade)                      35
Gender                  Female                                  145
                        Male                                    79
Educational level       Bachelor                                235
                        Master                                  2
                        PhD                                     2
Major/field of study    Mathematics                             42
                        Chemistry-Biology (double program)      77
                        Physics-Informatics (double program)    100



Instrument

Initially, a questionnaire was drafted with the help of the artificial intelligence-based chatbot itself
(https://chat.openai.com/). The prompt posed was, "Could you create a questionnaire to assess students' perception of
ChatGPT?" The resulting instrument of seven dimensions, consisting of 21 items plus demographic information, can be
found in the Appendix (see Table 2). The construct validity of the questionnaire was established through factor
analysis; during the validation process, the last dimension, concerning perceived privacy and security, did not meet
the required criteria and was subsequently eliminated.

Table 2
The Dimensions of the Instrument

Dimension                               Number of items   Option range
Perceived usefulness                    3                 Five choices: 1: Strongly Disagree, 2: Disagree,
Attitude towards using Chat GPT         3                 3: Uncertain, 4: Agree, 5: Strongly Agree
Perceived credibility                   3
Perceived social influence              3
Perceived privacy and security          3
Perceived ease of use                   3                 Seven choices: 1: Very difficult, 2: Difficult,
                                                          3: Somewhat difficult, 4: Neither difficult nor easy,
                                                          5: Somewhat easy, 6: Easy, 7: Very easy
Behavioral intention to use Chat GPT    3                 Seven choices: 1: Very unlikely, 2: Unlikely,
                                                          3: Somewhat unlikely, 4: Neutral, 5: Somewhat likely,
                                                          6: Likely, 7: Very likely

Data collection

In this study, the instrument was administered in three languages, that is, English, Kazakh and Russian. Our
participants speak English at B2 and above levels and speak Kazakh and Russian as native languages. The original
questionnaire was in English and was translated to both Kazakh and Russian by four instructors who were native
speakers of Kazakh and Russian.

The data for this research was gathered using the "Student Attitudes Towards Chat-GPT" questionnaire.
Questionnaires are a reliable method of data collection, as they provide highly structured, objective, and accurate
data for thorough analysis (Taherdoost, 2021). The questionnaire was administered to students during the 2022-2023
academic year. The final version of the questionnaire was distributed via Google Forms to all science and
mathematics education program students in April 2023. The questionnaire stayed online for a duration of two weeks
and, to ensure an adequate response rate, lecturers were involved in facilitating the questionnaire administration. The
collected responses were handled confidentially, and students participated in this study voluntarily. Ethical approval
was obtained from the institution's ethics committee.

Data analysis

The analysis was conducted with the jamovi software program (The jamovi project, 2022) to assess normality,
reliability, and factor structure.

Results

Validity and reliability studies

Content validity

This type of validity is an evaluation of each of the items constituting a factor for content relevance,
representativeness, and technical quality (Boateng et al., 2018). Four experts who specialize in science teaching
(one from mathematics, one from chemistry, and two from physics) judged the items of the questionnaire. After the
experts' feedback, item validity was confirmed through expert agreement on the quality of each item in measuring its
target dimension, yielding a valid instrument about students' perception of Chat GPT.

Construct validity

Two factor analyses, exploratory factor analysis (EFA) and confirmatory factor analysis (CFA), are recommended for
identifying factors (Rattray & Jones, 2007) and describing the items within factors (Fraenkel et al., 2011). They were
used here to validate the scale, check sampling adequacy, assess item loadings within factors, interpret the factors,
and determine each factor's reliability.

Exploratory Factor Analysis (EFA)

Before the EFA, tests of homogeneity and sampling adequacy were run to ensure that conceptually similar and
significant factors could be obtained from the variables. If the Bartlett test of sphericity is significant and the
KMO (Kaiser-Meyer-Olkin) measure of sampling adequacy is 0.6 or above (Cohen et al., 2017, p. 570), the sample is
appropriate for factor analysis. The KMO of 0.842 indicated adequate sampling, and the Bartlett test indicated
significant homogeneity of the data set (χ² = 2765.61, df = 210, p < .001). For the 21 items, oblimin rotation yielded
seven factors with three items each. The factors, with their reliability coefficients, were: Perceived usefulness
(0.816), Perceived ease of use (0.899), Attitude towards using Chat GPT (0.715), Behavioral intention to use Chat GPT
(0.932), Perceived credibility (0.921), Perceived social influence (0.821), and Perceived privacy and security (0.650).
Due to its low reliability coefficient and the absence of adequate factor loadings, the final dimension (Perceived
privacy and security) was excluded. The remaining items were examined with the minimum residual extraction technique
and oblimin rotation based on parallel analysis, retaining item factor loadings greater than 0.3 (Boateng et al.,
2018). The final version of the scale, with 18 items, revealed a six-factor structure with the loadings shown in
Table 3.

Table 3
Factor Loadings

Factor name                                Item    Loading
The Perceived usefulness                   Use1    0.650
                                           Use2    0.938
                                           Use3    0.558
The Perceived ease of use                  Ease1   0.856
                                           Ease2   0.853
                                           Ease3   0.781
The Attitude towards using Chat GPT        Atti1   0.42
                                           Atti2   0.452
                                           Atti3   0.982
The Behavioral intention to use Chat GPT   Beh1    0.800
                                           Beh2    0.987
                                           Beh3    0.802
The Perceived credibility                  Cre1    0.815
                                           Cre2    0.879
                                           Cre3    0.909
The Perceived social influence             Soc1    0.777
                                           Soc2    0.657
                                           Soc3    0.734
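For readers who wish to reproduce this step, the following is a minimal sketch of the EFA workflow described above (KMO, Bartlett's test, minimum-residual extraction with oblimin rotation), assuming the item responses are loaded into a pandas DataFrame and using the third-party factor_analyzer package. The original analysis was run in jamovi, so this is an approximate re-creation, and the file name is hypothetical.

```python
# Minimal sketch of the EFA steps reported above (KMO, Bartlett's test,
# minimum-residual extraction with oblimin rotation). Assumes a pandas
# DataFrame `items` whose 21 columns are the survey items; the original
# analysis was run in jamovi, so this only approximates it.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("chatgpt_survey_items.csv")  # hypothetical file name

# Sampling adequacy and homogeneity checks
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2 = {chi_square:.2f}, p = {p_value:.4f}")
print(f"KMO (overall) = {kmo_total:.3f}")  # reported as 0.842 in the paper

# Minimum-residual EFA with oblimin rotation, seven factors
efa = FactorAnalyzer(n_factors=7, method="minres", rotation="oblimin")
efa.fit(items)

# Keep loadings above the 0.3 threshold used in the paper
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.where(loadings.abs() > 0.3).round(3))
```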

Confirmatory factor analysis (CFA)

The extracted factors provided a hypothetical structure of the scale, and the dimensionality of these factors should
be tested with confirmatory analysis before reliability and validity are assessed. For this purpose, CFA was performed
to determine whether the measurement model's fit indexes were acceptable and to examine convergent validity.

Model fitting: The measurement model's fit indexes were examined to determine how well the instrument data fit the
proposed factorial structure. Model fitting assesses factor independence and the sufficiency of fit of a hypothesized
structure (Harrington, 2009). The indexes include the chi-square test of exact fit (CMIN/DF), the Root Mean Square
Error of Approximation (RMSEA), the Tucker-Lewis Index (TLI), the Comparative Fit Index (CFI), and the Standardized
Root Mean Square Residual (SRMR) (Boateng et al., 2018; Dong et al., 2020; Hu & Bentler, 1999; Lee et al., 2008;
Swisher et al., 2004; Zheng et al., 2014). As shown in Table 4, all indexes reached perfect or acceptable values:
CMIN/DF (Zheng et al., 2014), CFI and TLI (Hu & Bentler, 1999; Lee et al., 2008; Swisher et al., 2004), SRMR
(Hu & Bentler, 1999), and RMSEA (Hu & Bentler, 1999). The CFA results therefore show that the hypothesized
dimensionality is supported by an adequately fitting model.

Table 4
Fit Indexes for the Scale

Fit index          Perfect fit measures   Finding   Interpretation
CMIN/DF (χ²/df)    ≤ 0.02                 0.00196   Perfect fit
RMSEA              ≤ 0.06                 0.0664    Acceptable fit
TLI                ≥ 0.95                 0.941     Acceptable fit
CFI                ≥ 0.95                 0.954     Acceptable fit
SRMR               ≤ 0.08                 0.0476    Perfect fit
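As a rough illustration of how a six-factor measurement model and fit indexes such as those in Table 4 could be obtained in code, here is a hedged sketch assuming the third-party semopy package and its lavaan-style model syntax. The item codes follow Table 3, the file name is hypothetical, and the paper's own CFA was run in jamovi.

```python
# Hedged sketch: a six-factor CFA corresponding to the retained dimensions.
# Assumes `items` is a pandas DataFrame with the 18 retained item columns
# and that the semopy package is installed; the paper's analysis used jamovi.
import pandas as pd
import semopy

model_desc = """
usefulness  =~ Use1 + Use2 + Use3
ease        =~ Ease1 + Ease2 + Ease3
attitude    =~ Atti1 + Atti2 + Atti3
intention   =~ Beh1 + Beh2 + Beh3
credibility =~ Cre1 + Cre2 + Cre3
social      =~ Soc1 + Soc2 + Soc3
"""

items = pd.read_csv("chatgpt_survey_items.csv")  # hypothetical file name
model = semopy.Model(model_desc)
model.fit(items)

# calc_stats returns fit indexes such as chi2, CFI, TLI, and RMSEA
stats = semopy.calc_stats(model)
print(stats.T)  # compare with the values in Table 4
```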



Convergent validity: Convergent validity was examined through the standardized regression weights of the measurement
items, composite reliability (CR), average variance extracted (AVE), and, for discriminant validity, the square root
of the AVE. The CR and AVE values were calculated using an online reliability and validity calculator (Weiss, 2011).
As a rule of thumb, CR should be above 0.70 (Hair et al., 2020), and AVE should be 0.5 (50%) or higher
(Hair et al., 2020). The results, with CR greater than 0.7 and AVE greater than 0.5 (50%) for each factor, indicated
acceptable values for convergent validity (Awang, 2015; Zheng et al., 2014; Hair et al., 2017) (see Table 5).

Table 5
Convergent Validity Results

Factor name (item codes)                                      Standardized weights   CR      AVE     Sqrt. AVE
The Perceived usefulness (Use1, Use2, Use3)                   0.781; 0.877; 0.670    0.822   60.9%   0.780
The Perceived ease of use (Ease1, Ease2, Ease3)               0.850; 0.844; 0.841    0.882   71.4%   0.845
The Attitude towards using Chat GPT (Atti1, Atti2, Atti3)     0.850; 0.324; 0.838    0.734   51.0%   0.714
The Behavioral intention to use Chat GPT (Beh1, Beh2, Beh3)   0.880; 0.949; 0.887    0.932   82.1%   0.906
The Perceived credibility (Cre1, Cre2, Cre3)                  0.848; 0.902; 0.906    0.916   78.5%   0.886
The Perceived social influence (Soc1, Soc2, Soc3)             0.768; 0.800; 0.728    0.810   58.7%   0.766
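The CR and AVE values in Table 5 follow from the standard formulas CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = (Σλ²) / k applied to the standardized loadings. A short check against the Perceived usefulness row (loadings from the paper, computation added here):

```python
# Recompute CR and AVE from the standardized loadings reported in Table 5.
# The loadings come from the paper; the formulas are the standard ones an
# online calculator such as Weiss (2011) implements.
import numpy as np

def composite_reliability(loadings):
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

usefulness = [0.781, 0.877, 0.670]           # Use1-Use3 standardized weights
cr = composite_reliability(usefulness)        # ~0.822, as in Table 5
ave = average_variance_extracted(usefulness)  # ~0.609 (60.9%)
print(f"CR = {cr:.3f}, AVE = {ave:.1%}, sqrt(AVE) = {ave ** 0.5:.3f}")
```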

Discriminant validity: The discriminant validity test was used to determine whether the factors are significantly
different from each other; that is, the results for different constructs should differ (Xu & Lewis, 2011). The
findings show that the square root of the AVE of each factor was greater than the correlations between factors
(Zheng et al., 2014), which indicates acceptable discriminant validity of the scale (see Tables 5 and 6).

Table 6
Factor Correlations

       1        2        3        4        5        6
1      —        0.409    0.287    0.371    0.530    0.487
2               —        0.345    0.413    0.538    0.322
3                        —        0.357    0.236    0.430
4                                 —        0.460    0.480
5                                          —        0.471
6                                                   —

Reliability: For the final version of the scale, Cronbach's alpha values were 0.809 for Perceived usefulness, 0.881
for Perceived ease of use, 0.690 for Attitude towards using Chat GPT, 0.929 for Behavioral intention to use Chat GPT,
0.915 for Perceived credibility, and 0.809 for Perceived social influence. The overall scale reliability was 0.904,
above the accepted value of 0.7 (Hair et al., 2020; Boateng et al., 2018; Rattray & Jones, 2007).
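For reference, coefficient alpha for any of these subscales can be recomputed directly from the item scores with the usual formula α = k/(k − 1) · (1 − Σσᵢ² / σₜ²). A minimal sketch, assuming a pandas DataFrame holding the three items of one dimension (the column names are illustrative):

```python
# Minimal Cronbach's alpha computation for one subscale (e.g. the three
# Perceived usefulness items). Assumes `subscale` is a pandas DataFrame with
# one column per item and one row per respondent.
import pandas as pd

def cronbach_alpha(subscale: pd.DataFrame) -> float:
    k = subscale.shape[1]
    item_vars = subscale.var(axis=0, ddof=1)      # variance of each item
    total_var = subscale.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Example (hypothetical column names):
# subscale = items[["Use1", "Use2", "Use3"]]
# print(round(cronbach_alpha(subscale), 3))  # reported as 0.809 in the paper
```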

Findings from the questionnaire

Initially, we provided descriptive statistics, including the mean, standard deviation, and an assessment of the data's
normality, in Table 7. Note that the items within the second and fourth dimensions of the survey were rated on a scale
from 1 to 7, while items in the remaining dimensions were rated on a scale from 1 to 5.

Table 7
Descriptive Statistics for the Dimensions of the Survey

Dimension                               N     Mean   Standard deviation   Shapiro-Wilk W   Shapiro-Wilk p
Perceived usefulness                    219   3.38   0.950                0.946            < .001
Perceived ease of use                   219   5.20   1.31                 0.937            < .001
Attitude towards using Chat GPT         219   3.45   0.832                0.955            < .001
Behavioral intention to use Chat GPT    219   4.57   1.71                 0.949            < .001
Perceived credibility                   219   3.17   0.916                0.965            < .001
Perceived social influence              219   3.30   0.918                0.964            < .001

The table reveals that the average scores for each dimension surpass the "Neither agree nor disagree" or "Neither
difficult nor easy" midpoints of the 5-point and 7-point scales, which indicates a positive perception of Chat GPT.
The dimension with the lowest overall perception score was "Perceived credibility" (mean 3.17, representing 63%
perception), while the highest score was recorded for "Perceived ease of use" (mean 5.20, indicating 74% perception).
To facilitate comparison, we converted all scores into percentages and present them graphically in Figure 1.
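The percentages in Figure 1 appear to be the dimension means expressed as a share of the scale maximum (mean divided by 5 or 7, times 100); this inferred conversion reproduces the reported values:

```python
# Convert dimension means to the percentages shown in Figure 1.
# Means and scale maxima come from Table 7; the conversion rule
# (mean / scale maximum) is inferred from the reported percentages.
means = {
    "Perceived usefulness": (3.38, 5),
    "Perceived ease of use": (5.20, 7),
    "Attitude towards using Chat GPT": (3.45, 5),
    "Behavioral intention to use Chat GPT": (4.57, 7),
    "Perceived credibility": (3.17, 5),
    "Perceived social influence": (3.30, 5),
}
for dim, (mean, scale_max) in means.items():
    print(f"{dim}: {mean / scale_max * 100:.1f}%")
# Output matches Figure 1: 67.6, 74.3, 69.0, 65.3, 63.4, 66.0
```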

Figure 1

Average scores across dimensions (% overall perception): Perceived usefulness 67.6, Perceived ease of use 74.3,
Attitude towards using Chat GPT 69.0, Behavioral intention to use Chat GPT 65.3, Perceived credibility 63.4,
Perceived social influence 66.0.

To address the second research question, we utilized the Mann-Whitney U test due to the non-normal distribution of the
data (see Table 8). This test was employed to identify significant gender differences across the various dimensions of
the survey.

Table 8
Mann-Whitney U Test for Gender Groups

Dimension                               Statistic   p       Effect size
Perceived usefulness                    4923        0.281   0.0883
Perceived ease of use                   4475        0.037   0.1713
Attitude towards using Chat GPT         4828        0.194   0.1059
Behavioral intention to use Chat GPT    4860        0.224   0.1000
Perceived credibility                   5188        0.630   0.0394
Perceived social influence              5205        0.659   0.0362



The sole significant disparity in how male and female students perceive Chat GPT was found in the dimension of
"Perceived ease of use" (p = .037): male students (M = 5.44) reported finding Chat GPT easier to use than their female
counterparts (M = 5.04).
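A minimal sketch of this gender comparison for one dimension, assuming long-format data with hypothetical column names and using scipy; the effect size reported by jamovi for the Mann-Whitney test is the rank-biserial correlation, computed here from U. This is a re-creation, not the authors' script.

```python
# Mann-Whitney U test for one survey dimension across gender groups, with the
# rank-biserial correlation as effect size (the statistic jamovi reports).
# Assumes a long-format DataFrame with columns "gender" ("Male"/"Female") and
# "ease_of_use" (dimension mean score); column names are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("chatgpt_survey_scores.csv")  # hypothetical file name
male = df.loc[df["gender"] == "Male", "ease_of_use"]
female = df.loc[df["gender"] == "Female", "ease_of_use"]

u_stat, p_value = mannwhitneyu(male, female, alternative="two-sided")
rank_biserial = 1 - 2 * u_stat / (len(male) * len(female))
print(f"U = {u_stat:.0f}, p = {p_value:.3f}, rank-biserial = {rank_biserial:.3f}")
```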

To investigate the third research question, we examined potential significant differences among grade level groups.
With four grades, ranging from first to fourth, we conducted a Kruskal-Wallis test due to the non-normal distribution
of the data (see Table 9).

Table 9
Kruskal-Wallis Test for Grade Level Groups

Dimension                               χ²      df   p       ε²
Perceived usefulness                    5.378   3    0.146   0.02467
Perceived ease of use                   2.373   3    0.499   0.01089
Attitude towards using Chat GPT         0.768   3    0.857   0.00352
Behavioral intention to use Chat GPT    4.475   3    0.214   0.02053
Perceived credibility                   5.818   3    0.121   0.02669
Perceived social influence              5.232   3    0.156   0.02400

Based on the findings presented in Table 9, no significant differences were observed across any of the dimensions of
the survey among students from first grade to fourth grade (p > 0.05).
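The ε² values reported here and in Tables 10 and 12 are consistent with the rank epsilon-squared estimate ε² = H / (n − 1); for example, 5.378 / 218 ≈ 0.02467. A short sketch of the grade-level comparison for one dimension, assuming long-format data with hypothetical column names:

```python
# Kruskal-Wallis test across grade-level groups for one dimension, with the
# rank epsilon-squared effect size H / (n - 1), which reproduces the values
# reported in the paper. Column names are hypothetical.
import pandas as pd
from scipy.stats import kruskal

df = pd.read_csv("chatgpt_survey_scores.csv")  # hypothetical file name
groups = [g["usefulness"].values for _, g in df.groupby("grade")]

h_stat, p_value = kruskal(*groups)
epsilon_sq = h_stat / (len(df) - 1)
print(f"H = {h_stat:.3f}, p = {p_value:.3f}, epsilon^2 = {epsilon_sq:.5f}")
```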

To address the fourth research question and examine potential significant differences in perception among the major
groups (Mathematics, Chemistry-Biology, and Physics-Informatics), a Kruskal-Wallis test was conducted because the
requirements of the parametric tests were not met (see Table 10).

Table 10
Kruskal-Wallis Test for Major Groups

Dimension                               χ²     df   p       ε²
Perceived usefulness                    4.35   2    0.114   0.01995
Perceived ease of use                   6.07   2    0.048   0.02784
Attitude towards using Chat GPT         1.38   2    0.502   0.00633
Behavioral intention to use Chat GPT    2.31   2    0.315   0.01061
Perceived credibility                   3.32   2    0.191   0.01521
Perceived social influence              8.93   2    0.011   0.04098

As observed in Table 10, there are statistically significant differences in the "Perceived ease of use" (p = 0.048)
and "Perceived social influence" (p = 0.011) dimensions of the survey. Since there are three major groups, pairwise
comparisons were conducted using the Dwass-Steel-Critchlow-Fligner test (see Table 11).

Table 11
Pairwise Comparisons for Perceived Ease of Use and Perceived Social Influence

                                               Perceived ease of use    Perceived social influence
Group 1              Group 2                   W       p                W       p
Mathematics          Chemistry-Biology         3.24    0.057            3.72    0.023
Mathematics          Physics-Informatics       2.73    0.13             1.24    0.657
Chemistry-Biology    Physics-Informatics       -1.43   0.571            -3.4    0.042

According to Table 11, there are statistically significant differences in the perceived social influence dimension
between Mathematics majors (M = 3.08) and Chemistry-Biology majors (M = 3.54), as well as between Chemistry-Biology
majors (M = 3.54) and Physics-Informatics majors (M = 3.22). While the Kruskal-Wallis test indicated significant
differences in the perceived ease of use dimension, the pairwise comparisons did not yield statistically significant
differences.
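A hedged sketch of this post-hoc step, assuming the third-party scikit-posthocs package, which provides a Dwass-Steel-Critchlow-Fligner procedure; the paper's pairwise tests were run in jamovi, and the column names are hypothetical.

```python
# Pairwise Dwass-Steel-Critchlow-Fligner comparisons of perceived social
# influence across the three major groups. Assumes the scikit-posthocs
# package and a DataFrame with columns "major" and "social_influence";
# the original analysis was run in jamovi, so this is only an approximation.
import pandas as pd
import scikit_posthocs as sp

df = pd.read_csv("chatgpt_survey_scores.csv")  # hypothetical file name
pairwise_p = sp.posthoc_dscf(df, val_col="social_influence", group_col="major")
print(pairwise_p.round(3))  # matrix of pairwise p-values, cf. Table 11
```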

To address the fifth research question, we examined potential differences among groups based on their prior experience
using artificial intelligence (AI) or chatbots. Participants were classified into three groups: those with a lot of
experience, those with neither more nor less, and those with little experience with AI or chatbots. Due to the
non-normal distribution of the data, a Kruskal-Wallis test was performed, as indicated in Table 12.

Table 12
Kruskal-Wallis Test for Experience with AI Groups

Dimension                               χ²      df   p        ε²
Perceived usefulness                    6.69    2    0.035    0.03070
Perceived ease of use                   15.54   2    < .001   0.07130
Attitude towards using Chat GPT         15.45   2    < .001   0.07087
Behavioral intention to use Chat GPT    27.26   2    < .001   0.12506
Perceived credibility                   8.58    2    0.014    0.03935
Perceived social influence              1.83    2    0.400    0.00840

Table 12 shows that, with the exception of the "Perceived social influence" dimension, statistically significant
differences exist between the groups. To identify the specific differences between these groups, pairwise comparisons
were conducted using the Dwass-Steel-Critchlow-Fligner test, as shown in Table 13.

Table 13
Pairwise Comparisons for Usefulness, Ease of Use, Attitude, Intention to Use, and Credibility

                                            Usefulness        Ease of use      Attitude         Intention to use   Credibility
Group 1                  Group 2            W        p        W       p        W       p        W       p          W        p
A lot                    Neither more nor   0.248    0.983    -2.04   0.32     -2.05   0.314    -3.91   0.016      -3.544   0.033
                         less
A lot                    Little             -1.821   0.402    -4.01   0.013    -4.5    0.004    -6.54   < .001     -4.211   0.008
Neither more nor less    Little             -3.556   0.032    -4.7    0.003    -4.39   0.005    -5      0.001      -0.835   0.825

In the Perceived usefulness dimension, there is a statistically significant difference between individuals with
"Neither more nor less experience" (M = 3.54) and "Little experience" (M = 3.24) with AI. In the Perceived ease of use
and Attitude towards using Chat GPT dimensions, there are statistically significant differences between individuals
with "A lot" of experience (M = 5.84, M = 3.95) and "Little experience" (M = 4.84, M = 3.25), as well as between
individuals with "Neither more nor less experience" (M = 5.43, M = 3.57) and "Little experience" (M = 4.84, M = 3.25)
with AI. In the Behavioral intention to use Chat GPT dimension, there are statistically significant differences
between individuals with "A lot" of experience (M = 5.98) and "Neither more nor less experience" (M = 4.85), between
individuals with "A lot" of experience (M = 5.98) and "Little experience" (M = 4.02), and between individuals with
"Neither more nor less experience" (M = 4.85) and "Little experience" (M = 4.02) with AI. Lastly, in the Perceived
credibility dimension, there are statistically significant differences between individuals with "A lot" of experience
(M = 3.69) and "Neither more nor less experience" (M = 3.17), as well as between individuals with "A lot" of
experience (M = 3.69) and "Little experience" (M = 3.07) with AI.

Discussion and conclusion


The first research question in this study focused on assessing the validity of the developed survey. The survey
assesses various factors linked to the adoption of Chat GPT in the context of education, including its perceived
usefulness, perceived ease of use, attitudes toward utilizing Chat GPT, behavioral intention to use Chat GPT,
perceived credibility, and perceived social influence. To offer a summary of the survey results, descriptive
statistics were used. The study's findings indicate that participants had generally favorable perceptions of Chat GPT
in the educational environment, implying that participants acknowledged the benefits and worth of utilizing Chat GPT
in their educational experiences. This conclusion supports prior research on technology acceptance models, which
highlights the relevance of perceived usefulness in influencing users' attitudes and intentions toward using
technology (Davis, 1989; Venkatesh & Davis, 1996). Furthermore, the survey results show that the majority of
participants positively perceived the ease of use. This is consistent with the Technology Acceptance Model (TAM)
concept of perceived ease of use, which states that people are more likely to accept and use technology if they
believe it to be simple to use (Davis, 1989). Participants' attitudes toward using Chat GPT in their educational
activities were equally positive. Positive attitudes are frequently recognized as a major factor affecting the
acceptance and adoption of technology (Granić & Marangunić, 2019). Behavioral intention to use Chat GPT was assessed
at 65.3%, indicating that participants have shown a moderate level of interest in using Chat GPT in the future.
Because it reflects individuals' willingness and motivation to adopt and engage with technology, behavioral intention
is a significant predictor of actual technology usage (Venkatesh & Davis, 1996). The perceived credibility dimension
had a moderate score nearly equal to that of the preceding dimension. According to Liu et al. (2022), credibility is a
critical factor influencing individuals' confidence and trust in technology. Finally, the survey's results show that
perceived social influence was rated at 66%. These findings correspond with prior research, which found that the
influence of others played a key role in decisions to employ Chat GPT in the educational environment; individuals'
technology acceptance and adoption behaviors are shaped by social influence, according to Iqbal et al. (2022). In
summary, the descriptive statistics give preliminary insights into the survey's validity by examining participants'
perceptions of Chat GPT in the educational setting.

Regarding the differences in participants' perception of Chat GPT across gender groups, the survey results revealed a
significant disparity in the perceived ease of use dimension between male and female students. Specifically, male
students reported finding Chat GPT easier to use compared to their female counterparts. Gender disparities in
technology acceptance have been studied by scholars such as Mathieson (1991) and Parker (2007), who discovered
that males and females may have different perceptions and behaviors toward technology. These disparities might be
explained by sociocultural factors and gender norms, which impact individual opinions and decisions (Fast & Horvitz,
2016). As a result, it is possible that gender influences Chat GPT perception, particularly in variables such as
perceived ease of use (PEOU) (Liu et al., 2022). The finding of a considerable disparity in perceived ease of use
across gender groups emphasizes the need to take gender into account when researching technology acceptance and individuals'
perceptions. Understanding gender differences may help guide the design and implementation of Chat GPT and other
related technologies in educational settings, ensuring that they are accessible and user-friendly for all students,
regardless of gender.

The third research question sought to determine whether participants' perceptions of Chat GPT differed by grade level.
The survey findings, as shown in Table 9, show that no significant differences were identified across any of the survey
variables among participants from first to fourth grade (p>0.05). Granić and Marangunić (2019) investigated the
acceptance of educational technology among primary school students and discovered that perceived usefulness and
perceived ease of use both strongly affected their desire to utilize the technology. The study, however, found no
significant changes in acceptance and perception across grade levels. These data imply that students' perceptions of
Chat GPT do not differ much by grade level. In contrast, Parker (2007) stated that younger learners may be more
supportive of technology due to their experience with digital tools, but older students may be more resistant or
skeptical. These findings are consistent with the Technology Acceptance Model (TAM), which states that perceived
utility (PU) and perceived ease of use (PEOU) are important factors influencing technology acceptance across age
groups (Firat 2023). The lack of significant differences among grade level groups in the current study suggests that
students at different grade levels interpret Chat GPT similarly. This shows that Chat GPT has the potential to be a
powerful instructional tool that students of all grade levels may use effectively.

The fourth research question examines how participants' perceptions of Chat GPT differ between major groups.
Recognizing that students from various disciplines may have varied needs, interests, and technical capabilities, this
study tries to determine whether different disciplines affect students' perceptions of Chat GPT. According to research
by Demir and Guraksin (2022), students' academic backgrounds and disciplinary attitudes might influence their
acceptance of educational technology. For example, those studying in STEM disciplines may have a higher degree of
competence and perceived ease of use (PEOU) with technology than those majoring in non-STEM subjects. These
perceptional discrepancies might be explained by differences in past exposure to technology tools and their relevance
to particular fields. Subsequently, the study's findings shed important light on how different major groups perceive
the Chat GPT in an educational setting. It implies that depending on the major they have selected, students may have
distinct opinions of usefulness and social influence. This knowledge may help instructors and developers customize
Chat GPT's implementation and design to the unique requirements and preferences of different major groups.

The fifth research question investigates if users' perceptions of Chat GPT differ depending on their past experience
with AI or chatbots. This question acknowledges that earlier experiences may impact individual opinions, attitudes,
and acceptance of new technology. Previous research into the importance of past experience in technology acceptance
discovered that users with more experience with AI or chatbots may have higher perceived usefulness (PU) and ease
of use (PEOU) of Chat GPT (Iqbal et al., 2022). According to the findings of Iqbal et al.'s (2022) study, these
individuals may have established a stronger degree of familiarity, comfort, and confidence in engaging with AI-based
systems, resulting in more favorable attitudes and behavioral intents to utilize Chat GPT.

In conclusion, the research questions in this study sought to investigate the survey's validity as well as differences
in participants' perceptions of Chat GPT based on variables such as gender, grade level, major, and past experience
with AI or chatbots. Scholars in the field of technology acceptance, such as Granić and Marangunić (2019), Davis
(1989), and Venkatesh and Davis (1996), have provided valuable insights and frameworks, such as the Technology
Acceptance Model (TAM), perceived usefulness (PU), and perceived ease of use (PEOU), that informed the discussion and
analysis of the survey results.

References

Adams, D., Nelson, R. R., & Todd, P. M. (1992). Perceived Usefulness, Ease of Use, and Usage of Information
Technology: A Replication. Management Information Systems Quarterly, 16(2), 227.
https://doi.org/10.2307/249577

Agarwal, R., Sambamurthy, V., & Stair, R. (2000). Research Report: The Evolving Relationship Between General
and Specific Computer Self-Efficacy—An Empirical Assessment. Information Systems Research, 11(4), 418–
430. https://doi.org/10.1287/isre.11.4.418.11876

Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions about automated
decision-making by artificial intelligence. AI & Society, 35(3), 611–623. https://doi.org/10.1007/s00146-019-
00931-w

Atwell, E. (1999). The language machine: the impact of speech and language technologies on English language
teaching. British Council.

Awang, Z. (2015). Validating the measurement model: CFA. In A handbook on SEM (2nd ed., pp. 54-73). Kuala Lumpur:
Universiti Sultan Zainal Abidin.

Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best practices for
developing and validating scales for health, social, and behavioral research: A primer. Frontiers in Public Health,
6, 149.

Cohen, L., Manion, L., & Morrison, K. (2017). Research methods in education. Routledge.

Creswell, J. W. (2002). Educational research: Planning, conducting, and evaluating quantitative (pp. 146-166).
Upper Saddle River, NJ: Prentice-Hall.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User Acceptance of Computer Technology: A Comparison of
Two Theoretical Models. Management Science, 35(8), 982–1003. https://doi.org/10.1287/mnsc.35.8.982

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS
Quarterly, 13(3), 319-340. https://doi.org/10.2307/249008

Demir, K., & Guraksin, G. E. (2022). Determining middle school students’ perceptions of the concept of artificial
intelligence: A metaphor analysis. Participatory Educational Research, 9(2), 297–312.
https://doi.org/10.17275/per.22.41.9.2

Dong, Y., Xu, C., Chai, C. S., & Zhai, X. (2020). Exploring the structural relationship among teachers' technostress,
technological pedagogical content knowledge (TPACK), computer self-efficacy, and school support. The Asia-
Pacific Education Researcher, 29(2), 147-157.

Fast, E., & Horvitz, E. (2016). Long-Term Trends in the Public Perception of Artificial Intelligence. Proceedings of
the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10635

Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2011). How to design and evaluate research in education. New York:
McGraw-Hill Humanities/Social Sciences/Languages.

Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied
Learning and Teaching, 6(1), 57-63. DOI: https://doi.org/10.37074/jalt.2023.6.1.22

Gallacher, A., Thompson, A., & Howarth, M. (2018). "My robot is an idiot!" – Students' perceptions of AI in the L2
classroom. In P. Taalas, J. Jalkanen, L. Bradley, & S. Thouësny (Eds.), Future-proof CALL: Language learning as
exploration and encounters – short papers from EUROCALL 2018 (pp. 70-76).

Gefen, D., Straub, D., & Boudreau, M. C. (2000). Structural equation modeling and regression: Guidelines for research
practice. Communications of the association for information systems, 4(1), 7.
https://doi.org/10.17705/1cais.00407

Granić, A., & Marangunić, N. (2019). Technology acceptance model in educational context: A systematic literature
review. British Journal of Educational Technology, 50(5), 2572–2593. https://doi.org/10.1111/bjet.12864

Hair Jr, J. F., Howard, M. C., & Nitzl, C. (2020). Assessing measurement model quality in PLS-SEM using
confirmatory composite analysis. Journal of Business Research, 109, 101-110.

Harrington, D. (2009). Confirmatory factor analysis. Oxford University Press.

Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria
versus new alternatives. Structural equation modeling: a multidisciplinary journal, 6(1), 1-55. DOI:
https://doi.org/10.1080/10705519909540118

Iqbal, N., Ahmed, H., & Azhar, K. A. (2022). Examining the acceptance of chatbots in education: A study based on
the technology acceptance model. Education and Information Technologies, 27(5), 4855-4874.

Iqbal, N., Ahmed, H., & Azhar, K. A. (2022). Exploring teachers’ attitudes towards using chatgpt. Global Journal for
Management and Administrative Sciences, 3(4), 97–111. https://doi.org/10.46568/gjmas.v3i4.163

Jeffrey, T. (2020). Understanding college student perceptions of artificial intelligence. Systemics, Cybernetics and
Informatics, 18(2), 8-13.

Lederer, A. L., Maupin, D. J., Sena, M. P., & Zhuang, Y. (2000). The technology acceptance model and the World
Wide Web. Decision Support Systems, 29(3), 269–282. https://doi.org/10.1016/s0167-9236(00)00076-2

Lee, M. H., Johanson, R. E., & Tsai, C. C. (2008). Exploring Taiwanese high school students' conceptions of and
approaches to learning science through a structural equation modeling analysis. Science Education, 92(2), 191-220.
https://doi.org/10.1002/sce.20245

Liu, C., Liao, M., Chang, C., & Lin, H. M. (2022). An analysis of children’ interaction with an AI chatbot and its
impact on their interest in reading. Computers & Education, 189, 104576.
https://doi.org/10.1016/j.compedu.2022.104576

Lozano, I. A., Molina, J. M., & Gijón, C. (2021). Perception of Artificial Intelligence in Spain. Telematics and
Informatics, 63, 101672. https://doi.org/10.1016/j.tele.2021.101672

Mathieson, K. (1991). Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of
Planned Behavior. Information Systems Research, 2(3), 173–191. https://doi.org/10.1287/isre.2.3.173

Parker, L. (2007). Gender differences in computer attitudes, ability, and use in the preschool environment. Journal of
Research in Childhood Education, 22(1), 39-51.

Parker, L. (2007). Technology in support of young English learners in and out of school. In L. Parker (Ed.),
Technology-mediated learning environments for young English learners (pp. 213-250). Routledge.

Rattray, J., & Jones, M. C. (2007). Essential elements of questionnaire design and development. Journal of Clinical
Nursing, 16(2), 234-243. https://doi.org/10.1111/j.1365-2702.2006.01573.x

Swisher, L. L., Beckstead, J. W., & Bebeau, M. J. (2004). Factor analysis as a tool for survey analysis using a
professional role orientation inventory as an example. Physical Therapy, 84(9), 784-799.

Taherdoost, H. (2021). Data Collection Methods and Tools for Research; A Step-by-Step Guide to Choose Data
Collection Technique for Academic and Business Research Projects. International Journal of Academic
Research in Management (IJARM), 10(1), 10-38.

The jamovi project (2022). jamovi. (Version 2.3) [Computer Software]. Retrieved from https://www.jamovi.org.

Taylor, S., & Todd, P. M. (1995). Assessing IT Usage: The Role of Prior Experience. MIS Quarterly, 19(4), 561.
https://doi.org/10.2307/249633

Venkatesh, V., & Davis, F. D. (1996). A Model of the Antecedents of Perceived Ease of Use: Development and Test.
Decision Sciences, 27(3), 451–481. https://doi.org/10.1111/j.1540-5915.1996.tb00860.x

Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four
Longitudinal Field Studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926

Xu, X., & Lewis, J. E. (2011). Refinement of a chemistry attitude measure for college students. Journal of Chemical
Education, 88(5), 561-568. https://doi.org/10.1021/ed900071q

Weiss, B.A. (2011). Reliability and validity calculator for latent variables [Computer software]. Available from
https://blogs.gwu.edu/weissba/teaching/calculators/reliability-validity-for-latent-variables-calculator/.

Yeh, S. C., Wu, A., Yu, H., Wu, H., Kuo, Y., & Chen, P. (2021). Public Perception of Artificial Intelligence and Its
Connections to the Sustainable Development Goals. Sustainability, 13(16), 9165.
https://doi.org/10.3390/su13169165

Zheng, C., Fu, L., & He, P. (2014). Development of an instrument for assessing the effectiveness of chemistry
classroom teaching. Journal of Science Education and Technology, 23(2), 267-279.
https://doi.org/10.1007/s10956-013-9459-3

Appendix

Student’s Perception of Chat GPT: A Technology Acceptance Model Study

Dear Participant,

We are conducting a survey to better understand students' attitudes towards Chat GPT, an artificial intelligence-based
chatbot that provides information and assistance to users. Your participation in this survey will help us understand
how students perceive Chat GPT and how it can be improved to better serve their needs.

This survey is completely voluntary and anonymous. Your responses will be kept confidential and will only be used for
research purposes. The survey will take approximately 10 minutes to complete.

Please answer all questions to the best of your ability. There are no right or wrong answers, and we are interested in
your honest opinions and experiences.

Thank you for your time and participation. Your feedback is greatly appreciated.

Sincerely,

A. Items

Have you ever heard of Chat GPT?

Yes

No

Perceived usefulness:

To what extent do you agree with the following statements regarding Chat GPT?

1. Chat GPT can help me find the information I need quickly and easily.
2. Chat GPT is a valuable resource for answering my questions.
3. Chat GPT enhances my ability to learn.

Perceived ease of use:

To what extent do you agree with the following statements regarding Chat GPT?

1. Chat GPT is easy to use.
2. It is easy to get Chat GPT to do what I want it to do.
3. I find Chat GPT to be a user-friendly tool.

Attitude towards using Chat GPT:

To what extent do you agree with the following statements?

1. I enjoy using Chat GPT.
2. Using Chat GPT is fun.
3. I find it interesting to interact with Chat GPT.

Behavioral intention to use Chat GPT:

To what extent do you agree with the following statements?

1. I intend to use Chat GPT in the future.
2. I plan to use Chat GPT frequently in the future.
3. I expect to use Chat GPT more often in the future than I do now.

Perceived credibility:

To what extent do you agree with the following statements regarding Chat GPT?

1. Chat GPT is a trustworthy source of information.
2. I believe that Chat GPT provides accurate information.
3. I perceive Chat GPT to be a reliable resource.

Perceived social influence:

To what extent do you agree with the following statements?

1. My peers think I should use Chat GPT.
2. I believe that using Chat GPT is socially acceptable.
3. I am encouraged by others to use Chat GPT.

Perceived privacy and security:

To what extent do you agree with the following statements regarding Chat GPT?

1. I am concerned about the privacy of my information when using Chat GPT.
2. I am confident that Chat GPT will keep my information secure.
3. Chat GPT takes adequate measures to protect my privacy.

Rating

Dimension                                 Response options
Perceived usefulness                      1: Strongly disagree, 2: Disagree, 3: Neither agree nor disagree, 4: Agree, 5: Strongly agree
Perceived ease of use                     1: Very difficult, 2: Difficult, 3: Somewhat difficult, 4: Neither difficult nor easy, 5: Somewhat easy, 6: Easy, 7: Very easy
Attitude towards using Chat GPT           1: Strongly disagree, 2: Disagree, 3: Neither agree nor disagree, 4: Agree, 5: Strongly agree
Behavioral intention to use Chat GPT      1: Very unlikely, 2: Unlikely, 3: Somewhat unlikely, 4: Neutral, 5: Somewhat likely, 6: Likely, 7: Very likely
Perceived credibility                     1: Strongly disagree, 2: Disagree, 3: Neither agree nor disagree, 4: Agree, 5: Strongly agree
Perceived social influence                1: Strongly disagree, 2: Disagree, 3: Neither agree nor disagree, 4: Agree, 5: Strongly agree
Perceived privacy and security*           1: Strongly disagree, 2: Disagree, 3: Neither agree nor disagree, 4: Agree, 5: Strongly agree

*This dimension was removed during the validation process.



B. Demographic information

Please provide the following demographic information.

Age

Gender

Educational level

Major/field of study

Prior experience using artificial intelligence (AI) or chatbots (A lot, neither more nor less, little)

Corresponding Author Contact Information:

Author name: Azatzhan Baitekov


Department: Department of Pedagogy of Natural Sciences
University, Country: Suleyman Demirel University, Kazakhstan
Email: [email protected]

Please Cite: Yilmaz, H., Maxutov, S., Baitekov, A. & Balta, N. (2023). Student’s Perception of Chat GPT: A
Technology Acceptance Model Study. International Educational Review, 1(1), 57- 83. DOI:
https://doi.org/10.58693/ier.114

Copyright: This is an open-access article distributed under the terms of the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and
source are credited.

Conflict of Interest: No conflict of interest

Publisher’s Note: All claims expressed in this article are solely those of the authors and do not necessarily represent
those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may
be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the
publisher.

Data Availability Statement: Data is available upon request from corresponding author.

Ethics Statement: This material is the authors' own original work, which has not been previously published
elsewhere.

Author Contributions: The authors contributed equally to this paper.

Received: December 13, 2022 ▪ Accepted: April 10, 2023
