
RÜMEYSA PEKTAŞ
PhD in English Language Teaching, SDU
Active Learning in ELT, AI, PP, Interculturality, Women Studies, Teacher Development, Reflective Thinking, DA, PD, Communication Skills, Special Education
Papers by RÜMEYSA PEKTAŞ
2007; Shute 2008). The growing reliance on AI technologies such as ChatGPT for providing feedback has prompted discussion about the effectiveness of AI-generated feedback compared with feedback from human evaluators. These research findings also suggest that AI-generated feedback frequently lacks the nuanced understanding and contextual awareness that human evaluators offer. This disparity is particularly noticeable in subjective tasks such as essay writing (Violaine & Long, 2021).
However, AI tends to fall short when offering critical feedback on content quality and argument effectiveness. According to research conducted by Williams and Lee (2023), feedback from AI tools like ChatGPT is generally broader and less tailored to the requirements of individual students than feedback given by human teachers.
Despite these important findings, a gap remains in the research concerning how feedback quality differs between AI and human assessors for essays of varying quality levels. This study aims to bridge this gap by examining whether the quality of feedback given by ChatGPT differs from that given by human assessors for essays categorized by quality level. Exploring this issue is intended to add value to the discussion of how AI influences education and its implications for teaching and learning methods. The research was conducted to determine how the quality of formative feedback provided by ChatGPT differs from that provided by human evaluators.
Key Words: Women Studies, Critical Feminist Theory, ELT, Türkiye