Berrezueta 2023 Cseet 2
1st Jonnathan Berrezueta-Guzman, Technical University of Munich, [email protected]
2nd Markus Paulsen, Technical University of Munich, [email protected]
3rd Stephan Krusche, Technical University of Munich, [email protected]
Abstract—The consistent effort and autonomous problem-solving required in programming instruction present unique educational challenges, with plagiarism being a significant factor negatively correlated with independent programming performance. The prevalence of digital communication and information exchange further complicates this issue.

This research paper aims to investigate and discern patterns of plagiarism among first-year computer science undergraduates. To investigate the research questions, we conducted three controlled experiments. The first experiment demonstrated that students were more likely to commit plagiarism when provided with the opportunity to do so. The second experiment corroborated this, showing a tendency toward plagiarism among students who had not previously been exposed to any plagiarism mitigation strategies. The third experiment revealed that early awareness of plagiarism mitigation strategies leads to enhanced student performance, indicating that awareness of these measures acts as a stimulant for autonomous learning.

Index Terms—programming education, interactive learning, online training and education, software engineering education for novices, vision for education in the future

I. INTRODUCTION

Plagiarism poses a substantial challenge in introductory programming courses, presenting both immediate and long-term consequences for students [1]. Extensive research has established a strong correlation between plagiarism and academic failure, with students who engage in plagiarism consistently achieving the poorest performance in individual examinations [2]. Moreover, the implications of plagiarism extend beyond the academic realm, impacting students' professional lives and imposing limitations on their subsequent performance [3], [4].

Plagiarism detection in assignments within large-scale courses presents a formidable challenge. This complexity arises from the fact that the effort required for plagiarism detection grows in direct proportion to the number of students, while the number of reviewers available for the task usually remains constant [5]. Without the aid of specialized tools, determining instances of plagiarism, as well as identifying the originators (providers) and the recipients (takers) of plagiarized content, becomes a non-trivial problem within the domain of computer science education [6].

This paper contributes three rigorous experiments that validate key hypotheses. First, the findings confirm that the presence of an opportunity to plagiarize significantly increases the likelihood of students doing so. Second, the results demonstrate that students who lack prior exposure to plagiarism control measures are more prone to committing plagiarism. Finally, the study reveals that communicating plagiarism control measures at the beginning of the course serves as a motivational factor for students, leading to improved academic performance.

Notably, the outcomes of the third experiment hold substantial implications. The data indicate that students who are aware of and adhere to plagiarism control measures achieve their intended learning outcomes more effectively, earning higher grades. This in turn helps them avoid complications in subsequent subjects throughout their academic journey, safeguarding the quality of their education. Moreover, these positive outcomes extend beyond academia, benefiting students in their future professional opportunities.

The remainder of this paper is organized as follows: Section II reviews and summarizes relevant related work, along with its conclusions and limitations. Section III explains the methodology adopted in this research; it outlines the organization of the conducted courses and describes the tool employed for plagiarism control. Section IV presents the experimental design, the data collection procedures, and the results derived from the experiments. Section V discusses and critically interprets the obtained results, emphasizing the significance of the findings and considering their implications and potential ramifications. Finally, Section VI concludes the research by summarizing the key insights drawn from the study; it also presents avenues for future research, highlighting areas that warrant further investigation and suggesting directions for improving plagiarism control.

© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

II. RELATED WORK

According to Jenkins and his colleagues, plagiarism happens mostly when the student encounters the opportunity [7]. Additionally, several researchers have concluded experimentally that plagiarism is related to high levels of failure in academic courses, especially those involving programming exercises [8], [9]. Consequently, including a plagiarism control system for the source code of programming assignments has motivated
students to work autonomously and to improve their academic performance [4], [10].

Kaya and Özel extended the Moodle system with the GNU Compiler Collection (GCC)1 and the Measure of Software Similarity (MOSS)2 source code plagiarism detection tool to decrease the effort of assessing programming assignments and to deter students from plagiarism in the Data Structure course [13]. The experiment was conducted with two courses on the same subject, taught by the same professor but in different years. The results indicated that the first group (which was not aware of the plagiarism check) performed worse (29.8 % of students failed) than the second group (which was aware of the plagiarism check), where 21.6 % of students failed (an 8.2-percentage-point lower failure rate). As expected, the first group also produced the higher number of plagiarism cases, based on a similarity threshold of 50 %. However, the experiment comprised only 3 programming assignments, plus 1 midterm and 1 final exam.

Pawelczak used a tool that tokenizes the source code and averages certain of its characteristics [14]. He analyzed 5 years of data from an introductory C programming course with 7 programming exercise assignments. The typical threshold used in this study was 80 %, due to the average length of the exercises. The results established that, since the introduction of the tool, student performance in examinations has improved by about 8.6 %. He also found that about 84 % of the students who failed had plagiarized in the programming assignments. Additionally, the tool detected that about 92 % of the students who abandoned the course had been involved in plagiarism.

Pawelczak also conducted an experiment comparing two introductory programming courses with respect to their assignments and learning outcomes [15]. For the first course, the students' submissions (7 assignments in total) were checked for plagiarism directly after the submission deadline, while for the second course the plagiarism check was conducted after the semester ended. The results indicate that the average percentage of students plagiarizing in the first group is 11.1 %, while in the second group it is 35.1 % (24 percentage points higher than in the first group). Additionally, the second group performed worse in the final examination; however, the students in this group showed better skills in fundamental coding. Pawelczak attributes this to the students being forced to edit their implementations (adding loops, outsourcing code into functions, etc.) in order not to be caught by the plagiarism detection tool.

On the other hand, Halak and El-Hajjar presented two techniques to prevent plagiarism [16]. The first is assigning a unique exercise per student, while the second is conducting individual presentations. The first technique led to a considerable reduction in detected plagiarism across different classes, with inter-report similarity dropping by 13 %. The second technique appeared to motivate students to avoid plagiarism. However, Halak and El-Hajjar did not discuss the effort these techniques demand from the instructors.

Finally, Moss and his colleagues conducted a systematic review of 83 empirical papers to clarify the psychological causes of plagiarism [17]. The study established that possible causes include an emphasis on competition and success rather than development and cooperation, coupled with impaired resilience, limited confidence, impulsive tendencies, and biased cognitions.

These studies and their results collectively contribute to the understanding of plagiarism in programming courses and provide insights into effective approaches for plagiarism detection and prevention and for fostering academic integrity among students.

III. METHODOLOGY

This research study aims to address three primary research questions:

• RQ1: Do students exhibit a higher propensity to engage in plagiarism when presented with an opportunity?
• RQ2: Does prior experience with plagiarism checks enhance students' motivation to work autonomously, compared to those without such experience?
• RQ3: Does the announcement of plagiarism checks at the beginning of the course yield beneficial outcomes for students?

To address these questions, we devised and executed three experiments spanning three semesters within the Information Engineering bachelor's degree program: winter semester 2021-2022 (WS21/22), summer semester 2022 (SS22), and winter semester 2022-2023 (WS22/23).

During the winter semesters, we conducted an introductory programming course (InProg) targeted at first-semester students, facilitating their progression from beginner to intermediate-advanced programming skills. In the summer semester, we offered an Introduction to Software Engineering (ISE) course, intended for second-semester students to develop advanced to professional-level software engineering and programming skills.

While it is not mandatory to have completed InProg to enroll in ISE, students are expected to leverage the knowledge acquired in InProg to successfully tackle the proposed exercises and the final exam. These courses employ interactive learning, which aims to minimize the temporal gap between theoretical learning and practical application by students [18]–[21].

1 It is a free and open-source compiler system developed by the GNU Project. It is used to compile various programming languages such as C, C++, Objective-C, Fortran, Ada, and others [11].
2 It is an automatic system for determining the similarity of programs. To date, the main application of Moss has been focused on detecting plagiarism in programming classes [12].
A. Course Organization

InProg. The introductory programming course takes place over 12 weeks, during which students not only attend the lecture but also work on exercises that prepare them for the final exam.

Quiz exercises. At the beginning of each lecture, students are presented with a set of 3-5 questions on the topics covered in the previous lecture. They are given 5 minutes to solve these quiz exercises. When the time is up, the instructor explains the solution and then addresses any doubts or objections raised by the students.

Tutor exercises. These exercises consist of two assignments per week and primarily focus on programming tasks of easy to medium difficulty. Students work on them collaboratively during tutoring sessions, fostering the development of soft skills such as effective communication and teamwork. The collaborative nature of these exercises prepares students for independent problem-solving in the subsequent homework exercises.

Homework exercises. These assignments comprise two weekly programming exercises of medium to hard difficulty. The submissions of these exercises contribute to the practical part (Prog) of the InProg course's final grade and help prepare students for the final exam, which determines the theoretical part (In) of the grade.

Presentations. Homework and tutorial exercises are presented by students each week during the tutorial sessions. This provides a suitable environment to discuss the solutions and resolve any remaining questions, and it serves as an additional check of whether students understand and can present their own solutions.

ISE. The course Introduction to Software Engineering follows a similar methodology to InProg, but includes additional exercise types, such as text exercises and modeling exercises, alongside programming tasks. Students take it in their second semester, as it builds on InProg.

B. Plagiarism Detection Process and Tool

Artemis is an open-source learning and research platform3. It is designed to facilitate the creation and automated assessment of programming, modeling, text, and file upload exercises [22]. Students participate in programming exercises by committing and pushing code to a version control repository. They receive feedback from automatically executed tests and static code analysis in a continuous integration environment. Artemis incorporates a semi-automatic system for detecting similarity among submissions of the same exercise [23].

To achieve this, Artemis integrates JPlag, a plagiarism detection service based on a Greedy String Tiling algorithm [24]. JPlag identifies pairs of similar programs within a given set of programs and supports multiple programming languages, including C, C++, Java, and more. It compares the abstract syntax trees of two programs, so the comparison still hits even if students copy the code and make simple changes to variable names or the order of statements. While other tools may employ different detection methodologies, such as neural networks [25], JPlag has proven to be highly efficient in terms of analysis time and accuracy when identifying instances of plagiarism among submissions.

When students upload their solutions through Artemis, instructors have the option to manually initiate a similarity analysis run by setting a threshold based on the exercise's length. For instance, the threshold can be determined by the average number of lines of code required to complete a programming exercise, the number of words in a text exercise, or the components used in a modeling exercise.

Figure 1 illustrates the sequential steps involved in handling a plagiarism case on the Artemis platform: automatic check, manual analysis, student notification, student statement, and final verdict. These steps comprise the following actions:

1. Automatic check: Once the exercise instructors activate the automatic plagiarism checker on the plagiarism detection screen in Artemis by clicking the "Detect Plagiarism" button, all student submissions are compared against each other using JPlag's greedy string tiling algorithm to obtain a similarity measure for each submission pair (in percent) and a plagiarism report describing the similarities found. Artemis then displays to the exercise instructors every submission pair above the aforementioned threshold, together with an overall similarity distribution.

Overall, this step preprocesses and discards a significant number of submission pairs that are not suspected of plagiarism (indicated by a similarity measure below the threshold). The exercise instructors can therefore focus exclusively on the suspected plagiarism cases.

2. Manual analysis: The exercise instructors inspect the plagiarism report, which displays the two similar code files side by side; the overlapping code that was not part of the template is highlighted in blue. Subsequently, the exercise instructors decide whether the similarities found are indeed indicative of plagiarism. To do so, they look for typical plagiarism patterns and other recognizable indications, such as removed or added whitespace, renamed variables, identical variable naming strategies, switches between similar types of control statements, identical comments, identical text patterns, identical mistakes, and identical elements within a UML diagram. If there is sufficient reasonable evidence of plagiarism, the exercise instructors mark the submission pair as a "plagiarism case"; otherwise, they mark it as a "no plagiarism case".

This step is necessary because automatic checking cannot yet accurately detect all cases and would produce too many false positives. Therefore, the exercise instructors manually review all cases with high similarity and accept or reject each one.

3 https://github.com/ls1intum/Artemis
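The similarity measure produced by the automatic check is based on Greedy String Tiling over two token sequences. The sketch below is a deliberately simplified, cubic-time illustration of that idea, not JPlag's actual implementation: real JPlag extracts tokens from the program structure and applies optimizations (such as Karp-Rabin matching) that are omitted here.

```python
def greedy_string_tiling(a, b, min_match=3):
    """Simplified Greedy String Tiling over two token sequences.

    Repeatedly finds the longest common run of unmarked tokens of at
    least min_match length, marks it as a tile, and finally reports the
    fraction of tokens covered by tiles as a percentage."""
    marked_a, marked_b = set(), set()
    tiles = []
    while True:
        best_len, best = min_match - 1, []
        for i in range(len(a)):
            for j in range(len(b)):
                k = 0
                while (i + k < len(a) and j + k < len(b)
                       and a[i + k] == b[j + k]
                       and i + k not in marked_a and j + k not in marked_b):
                    k += 1
                if k > best_len:
                    best_len, best = k, [(i, j, k)]
                elif k == best_len and k >= min_match:
                    best.append((i, j, k))
        if best_len < min_match:
            break
        for i, j, k in best:
            # skip candidates that now overlap a tile marked in this round
            if any(i + t in marked_a or j + t in marked_b for t in range(k)):
                continue
            marked_a.update(range(i, i + k))
            marked_b.update(range(j, j + k))
            tiles.append((i, j, k))
    covered = sum(k for _, _, k in tiles)
    return 200.0 * covered / (len(a) + len(b))  # similarity in percent
```

Two identical token streams yield 100 % similarity, while streams sharing no run of at least min_match tokens yield 0 %; only pairs whose similarity exceeds the instructor-chosen, length-dependent threshold are then surfaced for manual analysis.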
3. Student notification: If the exercise instructors (after a second screening) decide a particular pair of submissions to be a case of plagiarism, they issue a notification email in Artemis to both involved students. The email informs them of the suspected plagiarism and the consequences they can expect (e.g., failing the course and getting only one more chance), and offers them the opportunity to submit comments within the following seven days. In either case, both students are suspected of plagiarism, as there is usually no way to clearly identify the provider and the recipient of a deception before contacting them. This step is required to inform the students of the suspected plagiarism, which may or may not subsequently result in a plagiarism conviction.

4. Student statement: The student responds to the notification using the link embedded in the email. Through this link, the student can view the comparison between their own work and the anonymized work of the other involved student. The student explains in detail why the work is original and why the suspicion is therefore unfounded. The student must demonstrate that the assumptions made by the exercise instructors regarding the detected plagiarism patterns were incorrect or that the accuracy of the detection was too low. Alternatively, they have the opportunity to admit their misconduct and facilitate further proceedings. This step allows the student to defend the submission against the suspicion of plagiarism, which may be warranted in the case of a false positive.

5. Final verdict: The exercise instructors meticulously assess and evaluate whether the justifications made by the student are sound. This involves analyzing whether the student's justification demonstrates that they fully mastered the concepts taught in the lecture and exercises, and whether there are any unresolved gaps or problems in the reasoning of the exercise instructors. Finally, the exercise instructors decide on a verdict of either "plagiarism" (if the justifications are not sound), resulting in a failing grade, or "no plagiarism" (if the justifications are sound), which has no immediate consequences, except that students are generally more careful with their submissions afterwards. A third alternative is to deduct points in the respective exercise and to issue a warning.

In this last step, the exercise instructors pronounce their final verdict for the two students involved, whereupon the appropriate consequences are announced. The final verdict is documented in Artemis together with the plagiarism report in case the data is needed later, e.g., if the student complains to the examination board or sues the university in court.

The identification of the taker (the student who engaged in copying) and the giver (the student who facilitated the copying) can sometimes be ascertained through the discussions conducted with the involved students in a plagiarism case. The determination of their roles relies partially on the admissions provided by the students who acknowledge their involvement. Verification is feasible due to the availability of precise data in Artemis, including the specific number and timing of commits made by the students. Typically, the giver is characterized as the student who initiates and completes the task first, while also demonstrating a higher frequency of commits. Conversely, the taker often exhibits contrasting behavior, typically commencing the task later and generating fewer commits. By considering these observed patterns and leveraging the data provided by Artemis, instructors can discern the roles of the taker and the giver in a plagiarism case.

Fig. 1. Procedure of the plagiarism check (1. Automatic check, 2. Manual analysis, 3. Student notification, 4. Student statement, 5. Final verdict): The instructors evaluate the results of the automatic check for plausibility and notify the identified students, who can submit a statement on their case. Depending on the severity of the plagiarism and the student's statement, the instructors decide on the final verdict.

IV. EXPERIMENT AND RESULTS

A total of three experiments were conducted over three semesters. The organization of these experiments, along with their respective pre-conditions, student groups, and the corresponding courses involved, is presented in Table I.

TABLE I
ORGANIZATION OF THE EXPERIMENTS. AWARENESS MEANS WHETHER STUDENTS HAVE BEEN INFORMED ABOUT PLAGIARISM CONTROL.

Experiment | Students' groups                        | Awareness                      | Course
WS21/22    | HN01 (70 students)                      | Since the middle of the course | InProg
SS22       | HN01 (44 students), GA01 (1621 students)| Since the beginning of the course | ISE
WS22/23    | HN02 (61 students)                      | Since the beginning of the course | InProg

A. First Experiment - WS21/22

The objective of this experiment was to investigate whether students (HN01 group) in the InProg course engaged in plagiarism when they were not initially informed about the implementation of plagiarism checks. Additionally, we assessed their behavior after the announcement of the plagiarism checks.

To achieve this, we chose not to inform the students about the plagiarism control at the beginning of the semester but rather in the middle of the course period, specifically in week 8. During that week, we provided detailed information on how the checks were conducted (as described in the previous section) and on the penalties.

Figure 2 illustrates that the plagiarism control was initiated in exercise H03E01 (where the x-axis starts), as the initial two weeks primarily involved simple exercises with minimal lines of code (conducting a plagiarism analysis on these exercises
would have resulted in an unnecessarily high number of plagiarism accusations).

We observed an average of 20 instances of similarity exceeding the predetermined threshold per exercise during the first 8 weeks (the number of possible similarity cases likely depends on the difficulty level of each exercise). However, in week 8, during the lecture, we explicitly announced to the students that plagiarism checks would be conducted for assignments starting from week 9.

As a result, we subsequently observed a significant decrease of 84 % in the number of submissions with similarity above the threshold for homework H09E01. Despite the prior notification during the lecture, we had to issue plagiarism notifications to 4 students, indicating their involvement in plagiarism (First notification).

[...] was adequately complex, presented a specific context, and demanded that students propose their own solutions. Furthermore, the notification of the plagiarism control served as an alert to refrain from copying, resulting in increased time investment and more commits compared to the other exercises.

Similar patterns were observed in exercises H09E02 and H13E01, where a substantial number of hours were invested and only a few instances of similarity were found. However, in the case of H13E01, these instances were confirmed as complete plagiarism. Hence, it can be inferred that the occurrence of plagiarism in these exercises can be attributed to their complexity: the students were confronted with challenging exercises that pressured them into plagiarism in order to resolve them.

[Charts (residue removed): similarity cases per exercise (H03E01 to H13E01), and average commits alongside average invested time in hours per exercise (H01E01 to H13E01).]
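The commit-based indicators described in Section III (the giver typically starts and finishes the task earlier and commits more often; the taker starts later and commits less) can be sketched as a simple voting heuristic. The `SubmissionHistory` fields and function names below are illustrative assumptions, not Artemis's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SubmissionHistory:
    student: str
    first_commit: datetime
    last_commit: datetime
    commit_count: int

def giver_indicators(x: SubmissionHistory, y: SubmissionHistory) -> int:
    """Count how many giver-like indicators x shows relative to y:
    earlier start, earlier finish, and more commits."""
    return (int(x.first_commit < y.first_commit)
            + int(x.last_commit < y.last_commit)
            + int(x.commit_count > y.commit_count))

def classify_roles(a: SubmissionHistory, b: SubmissionHistory):
    """Order a confirmed plagiarism pair as (giver, taker), or return
    None when the indicators are tied and therefore inconclusive."""
    score_a, score_b = giver_indicators(a, b), giver_indicators(b, a)
    if score_a > score_b:
        return a, b
    if score_b > score_a:
        return b, a
    return None  # conflicting signals: leave the decision to the instructors
```

For instance, a student with 12 commits between 10:00 and 14:00 would be ranked as the giver against one with 3 commits between 16:00 and 17:00. In practice such signals are only supporting evidence: the paper stresses that role determination also relies on the students' own admissions during the discussions.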
[Charts (residue removed): participation percentage and average score per exercise (H01E01 to H13E01); legend: Participation, Average score, Log. (Participation), Linear (Average score).]

[...] who exhibited characteristics consistent with being takers also failed the exam. Hence, we can infer that these 8 students were indeed takers. This outcome validates the efficacy of our plagiarism analysis in the exercises, as it aligns with our initial expectations.

Fig. 5. Quantitative representation of student participation, alongside corresponding average grade distributions, of the HN01 group throughout the duration of the ISE course.
Finding 2: Following a plagiarism control notification, students either strive to work independently or simply give up.

Fig. 6. Quantitative representation of student participation, alongside corresponding average grade distributions, of the GA01 group throughout the duration of the ISE course.

B. Second Experiment - SS22

This experiment involved a comparative analysis of plagiarism behavior between two distinct groups of students taking the same subject in different locations. The first group consisted of the students from Experiment 1, referred to as the HN01 group, while the second group comprised students who had not previously undergone a plagiarism control process, denoted as the GA01 group.

In this experiment, both groups were informed about the implementation of a plagiarism check for all assignments immediately at the beginning of the SS22 semester. The subject taught during this period was ISE, encompassing programming tasks, modeling exercises, and text-based assignments.

The average grades of the HN01 and GA01 groups were nearly balanced: the HN01 group achieved an average grade of 77.4 %, whereas the GA01 group obtained an average of 78.3 %, a difference of less than 1 percentage point. However, a notable divergence was observed in the occurrence of detected plagiarism cases, which exhibited a substantial disparity between the two groups. In the HN01 group, only 3 exercises indicated a similarity level surpassing the established threshold, as shown in Figure 7. In contrast, the GA01 group indicated 15 exercises with a similarity level above the threshold, a quadruple increase compared to the HN01 group, as shown in Figure 8. The distribution of plagiarism cases was also more evenly spread across the exercises in the GA01 group. In the HN01 group, a total of 12 students were notified, of whom 2 received a second notification and subsequently failed the course. In the GA01 group, a larger number of students, 86 in total, were notified of plagiarism cases, and among them, 22 students failed the course.
[...] assignments throughout the course duration (due to the semester planning, week 12 had no assignments).

Figure 9 shows a notable decline in the number of similarity cases, reaching zero from week three onward. In contrast to Experiment 1 with the HN01 group (Figure 2), there is no provision for a first notification in this experiment; students fail the course directly upon confirmation of a plagiarism case. Consequently, the assignments from week 4 onwards exhibit no instances of similarity surpassing the established threshold. This new condition has led to a significant reduction in the instructor's effort required for reviewing similarity cases.

Fig. 7. Quantitative representation of the plagiarism cases, alongside corresponding notifications, of the HN01 group throughout the duration of the course. [Chart residue removed; exercises H07E03, H08E01, H08E02; legend: Similarity cases over the threshold, First notification, Second notification (Fail the course).]

Fig. 8. Quantitative representation of the plagiarism cases, alongside corresponding notifications, of the GA01 group throughout the duration of the course. [Chart residue removed; exercises H03E01 to H11E02; legend: Similarity cases over the threshold, First notification, Second notification (Fail the course).]

Fig. 9. Quantitative representation of the plagiarism cases, alongside corresponding notifications, of the HN02 group throughout the duration of the course. [Chart residue removed; legend: Similarity cases over the threshold, Definitive notification (Fail the course), Similarity trendline.]
Fig. 10. Graphical representation of the grade distribution for the practical programming course (Prog) during the winter semester WS22/23. The average grade of the course is highlighted in blue. [Histogram residue removed; grade axis from 5.0 to 1.0.]

Fig. 13. Graphical representation of the final examination grades (In) for the winter semester WS21/22. The average grade of the course is highlighted in blue. [Histogram residue removed; grade axis from 5.0 to 1.0.]