Educ 6 Module 1
Module in Education 6
Course Learning Objectives: At the end of the course, students are expected to acquire the necessary
knowledge and skills critical to the efficient performance of their roles and
responsibilities as prospective teachers. Specifically, the objectives are:
Knowledge:
To identify the role of assessment in making instructional decisions
To apply the basic concepts and principles of high quality
assessment
To match the different targets and their appropriate assessment
techniques
Skills/Abilities:
To construct classroom assessment tools in measuring knowledge
and thinking skills
To construct periodic/standardized tests
To demonstrate skills in utilizing and reporting test results
To apply statistical procedures in standardizing tests
To grade students’ achievement
Values/Attitudes:
To demonstrate an appreciation of assessment in improving
teaching and learning
To manifest accuracy in computation, intellectual honesty, and
responsibility
References:
Chatterji, M. (2003). Designing and using tools for educational assessment. Boston: Teachers College,
Columbia University.
De Guzman, Estefania S., and Joel L. Adamos. (2015). Assessment of learning 1. Quezon City: Adriana
Publishing Co., Inc.
De Guzman-Santos, Rosita. (2007). Assessment of learning 1. Quezon City: Lorimar Publishing, Inc.
PRETEST
Multiple Choice: Select the correct answer. Write the letter of your choice in your activity notebook.
1. Assessment is important to teachers because of the decisions they will make about their students
when teaching them. For teachers, which of the following is NOT among the functions of assessment?
A. Identify learner’s needs and abilities
B. Monitor and improve the teaching-learning process
C. Make decisions about how to implement learning activities
D. Make policy decisions regarding what is and what is not appropriate for learners
2. Mr. Reyes uses evidence of student learning to make judgments about students' achievement against
goals and standards. He does this at the end of a unit or period. Which purpose does assessment serve?
A. Assessment as learning B. Assessment for learning
C. Assessment of learning D. Assessment strategy
3. During his first meeting Mr. Montecino gave a readiness test to determine the prerequisite skills and
degree of mastery his students possess in relation to the course objectives or learning outcomes. He
intends to use the results to determine where he will begin in his lesson plan and decide on the best
mode of learning. Which form of assessment did he employ?
A. Diagnostic B. Formative
C. Placement D. Summative
4. Test results were utilized to make decisions about the school’s curriculum. What function does the
assessment serve?
A. Administrative B. Guidance
C. Instructional D. Research and evaluation
5. Which of the following guides the teacher in ensuring that achievement domains are achieved and a
fair and representative sample of questions appears on the test? It provides evidence of content validity.
A. Assessment matrix B. Lesson plan
C. Table of contents D. Table of specification
6. A Science teacher administered the same test twice. Which statistical tool is used in a test-retest
method?
A. Cronbach alpha B. Kuder-Richardson 20/21 C. Spearman Brown D. Pearson r
7. Which of the following is true about validity and reliability?
A. For an instrument to be valid, it must be reliable B. For an instrument to be reliable, it must be valid
C. Both (A) and (B) D. None of these
8. A Grade 7 Science teacher grouped the students for an oral presentation about mitigation and
disaster risk reduction. She forgot to discuss with the class how they would be graded. What ethical concern
did the teacher fail to consider?
A. Confidentiality B. Fairness C. Relevance D. Transparency
9. What is a basic feature of summative assessment?
A. It is used for evaluating student learning and effectiveness of instruction
B. It is basically utilized for monitoring learning of students
C. It marks student performance before and after instruction
D. It is used for making instructional decisions
10. Multiple-choice type is considered a versatile format since it can be used for assessing
A. learners of different ability levels B. lower and higher-level learning outcomes
C. knowledge, processes and performance D. all levels of assessment
11. What is the only requirement of a test to be considered an objective test?
A. Appropriateness of difficulty level of items to test takers
B. Observance of fairness in writing the items
C. Presence of only one correct or nearly best answer in every item
D. Comprehensibility of test instructions
12. Performance assessment tasks are utilized for assessing complex understanding. Select the learning
outcome appropriate to this kind of assessment.
A. Making weather predictions based on available data
B. Choosing the best paraphrase of a proverb
C. Associating significant inventions with their inventors
D. Composing an original design based on a theme
13. The median score in a test is 34. What does this indicate?
A. The average is 34 B. Half of the test scores are above/below 34
C. Majority of the students received a score of 34 D. The cut-off score is 34
14. Mean is to central tendency while standard deviation is to
A. difficulty B. discrimination C. reliability D. variability
15. When interpreting test results, which of the following types of scores is least meaningful?
A. Raw score B. Percentage correct score C. Percentile rank D. Standard score
16. Joseph’s grade in Mathematics did not progress despite a high-quality curriculum and group support.
What can the teacher do?
A. Give makeup work to cope with lessons
B. Implement a personalized action plan that targets skill deficits
C. Provide more assignments for practice
D. Transfer the student to another class
17. What must be satisfied for large-scale assessment to perform its use to compare countries?
A. consistency of findings for schools over time B. positive attitude of participating schools
C. score comparability among schools D. fairness of items used in schools
18. The purpose of field testing as a step in the test development process is to find out if the test items
A. can be answered only by students who have the knowledge to answer them
B. can be answered by all the students in the participating schools
C. are understandable to all types of students
D. are worded appropriately
19. To ensure fairness, it is important that large-scale tests be pilot-tested on
A. samples of students from high-performing and low-performing schools
B. random samples representing the age and gender grouping of the target population
C. selected samples belonging to the same grade level as the target population
D. random samples of the same grade level as the target group coming from different socio-economic
and cultural conditions
20. Which of the statements below is true?
A. Classroom-based and large-scale testing can be both norm-referenced and criterion-referenced
B. Classroom-based testing is more norm-referenced than criterion-referenced
C. Large-scale assessment testing can be better used for criterion-referenced interpretation
D. Neither classroom testing nor large-scale testing is appropriate for norm-referenced purposes.
Discussion/Presentation
Measurement. Comes from the old French word mesure, which means "limit or quantity". Basically, it is a
quantitative description of an object's characteristic or attribute. When we measure, we use appropriate
measuring tools to gather numerical data on variables such as height, mass, time, and temperature, among others.
In the field of education, what do teachers measure and what instruments do they use?
Teachers are particularly interested in determining how much learning a student has acquired. They measure
particular elements of learning such as students' readiness to learn, recall of facts, or ability to analyze and solve
problems. Teachers use tools like tests, oral presentations, written reports, portfolios and rubrics to obtain
pertinent information. Among these, tests are the most pervasive.
Testing. A formal, systematic procedure for measuring a learner's knowledge, skills, or abilities, administered
under certain conditions (manner and duration). A test is a tool or instrument used in testing. Educational tests
may be used to measure the learning progress of a student. Tests are the most dominant form of assessment.
Types of Test
According to mode of response:
Oral test – answers are spoken
Written test – activities wherein students either select or provide a written response to a
question
Performance tests - activities that require students to demonstrate their skills or ability to
perform specific actions. More aptly called performance assessments, they include
problem-based learning, inquiry tasks, demonstration tasks, exhibits, presentation tasks
and capstone performances.
According to Ease of quantification:
Objective test – can be corrected and quantified quite easily. The test items have a
single or specific convergent response.
Subjective test – elicits varied responses
According to test constructors:
Standardized tests – prepared by experts who are versed in the principles of
assessment. They are administered to a large group of students or examinees under
similar conditions. Scoring procedures and interpretations are consistent. There are
available guides and manuals to aid in the administration and interpretation of results.
Results of standardized tests serve as an indicator of instructional effectiveness and a
reflection of the school’s performance.
Non-standardized – prepared by teachers who may not be adept at the principles of
test construction
According to Nature of Answer or Construct they are measuring:
Personality tests – developed in the 1920s, initially to aid in the selection of
personnel in the armed forces. A personality test has no right or wrong answers; it
measures one's personality and behavioural style. Personality tests are used in the
recruitment of personnel, in career guidance, in individual and relationship
counselling, and in diagnosing personality disorders. In schools, personality tests
determine personality strengths and weaknesses. Personality development activities
can then be arranged for students.
Achievement and Aptitude Tests
Achievement tests measure students’ learning as a result of instruction and training
experiences. When used summatively, they serve as a basis for promotion to the next
Grade.
Aptitude tests determine a student's potential to learn and do new tasks. A career aptitude
test aids in choosing the best line of work for an individual based on his/her skills and interests.
Intelligence tests – measure a learner's innate intelligence or mental ability.
Sociometric test – measures interpersonal relationships in a social group. The test
allows learners to express their preferences in terms of likes and dislikes for other
members of the group.
Trade or vocational tests – assess an individual's knowledge, skills, and competence
in a particular occupation. Upon successful completion of the test, the individual is given
a certification of qualification. Trade tests can likewise be used to determine the
effectiveness of a training program.
Functions of Testing
A. Instructional Functions
Tests facilitate the clarification of meaningful learning objectives.
Tests provide a means of feedback to the instructor and the student.
Tests can motivate learning.
Tests can facilitate learning.
Tests are a useful means of overlearning. Overlearning means continued study, review,
interaction or practice of the same material even after concepts and skills have been mastered.
B. Administrative Functions
Tests provide a mechanism of quality control. Through tests, a school can determine the
strengths and weaknesses of its curricula.
Tests facilitate better classification and placement decisions.
Tests can increase the quality of selection decisions.
Tests can be a useful means of accreditation, mastery or certification.
C. Research and Evaluation
Tests are utilized in studies that determine the effectiveness of new pedagogical techniques. Evaluators also
utilize assessment data to determine the impact and success of their programs.
D. Guidance Functions
Tests can be of value in diagnosing an individual’s special aptitudes and abilities. The aim of guidance is to
enable each individual to understand his/her abilities and interests and develop them so that he/she can take
advantage of educational, vocational and personal opportunities.
ASSESSMENT
The word assessment comes from the Latin word assidere, which means "to sit beside a judge". This implies
that assessment is tied up with evaluation. Assessment is a process of collecting information about a learner's
performance using a variety of tools and methods. Tests are a form of assessment. However, there are also
non-test assessment techniques like portfolios, observation, oral questioning and case studies for authentic assessment.
Categories:
Maximum Performance – determines what learners “can do”. It is achieved when learners are
motivated to perform well. In this category, students are encouraged to aim for a high score.
Typical Performance – shows what learners “will do” or “choose to do”. It is more focused on
the learner’s level of motivation rather than his optimal ability.
Purposes:
Assessment for Learning (AfL). Pertains to diagnostic and formative assessment tasks which are
used to determine learning needs, monitor academic progress of students and guide instruction.
Examples: Pre-tests, written assignments, quizzes, concept maps, focused questions
Assessment as Learning (AaL). Employs tasks or activities that provide students with an
opportunity to monitor and further their own learning – to think about their personal learning
habits and how they can adjust their learning strategies to achieve their goals. Examples: self
and peer assessment rubrics
Assessment of Learning (AoL). Summative and done at the end of a unit, task, process or period.
Its purpose is to provide evidence of a student’s level of achievement in relation to curricular
outcomes. Examples: unit tests, final projects
Relevance of Assessment
Assessment is needed for continued improvement and accountability in all aspects of the education
system. In order to make assessment work for everyone – students, teachers, and other players in the education
system – one should have an understanding of what assessment provides and how it is used to explain the dynamics
of student learning.
EVALUATION
This comes in after data have been collected from assessment tasks. Evaluation is a process of judging
the quality of a performance or course of action. As its etymology (the French word evaluer) indicates, evaluation
entails finding the value of an educational task. This means that assessment data gathered by the teacher have to
be interpreted in order to make sound decisions about students and the teaching-learning process. Evaluation is
carried out by both the teacher and students to uncover how the learning process is developing.
Task/ Activity 1 Directions: Briefly answer the following questions in your notebooks.
1. Briefly discuss the differences and similarities of measurement, testing,
assessment and evaluation.
2. Rectify the following misconceptions. Explain in two to three sentences
why they are incorrect:
a. Assessment and evaluation are one and the same
b. Assessment is completed once every grading period.
c. Assessment is one-way. Only teachers are involved in assessment.
d. Assessment is ultimately for grading purposes.
e. Student’s work should always be given a mark or grade.
f. Assessment is the responsibility of program coordinators/ supervisors.
g. Assessment is imposed on teachers by the school and accrediting
agencies.
h. Formative assessment is a kind of test teachers use to find out what their
students know.
i. Instruction informs assessment but not the other way around.
j. Assessment is an average of performances across a teaching period.
3.As a college student, you underwent several assessments in basic
education . Recall from your own personal experience an assessment that
you think was truly meaningful to you. Explain why it is so. Explain also the
nature and purpose of that particular assessment.
4.Discuss the concept of measurement, assessment and evaluation in the
following scenarios.
a. Mrs. Vidal, a Grade 5 Science teacher, was done with her lessons on rocks
and soil erosion. She gave a test that required students to discuss the types
of rocks, how rocks turn into soil and the effect of soil erosion on living
things and the environment. She noted that Ellah scored 43 out of 50 in the
test. She informed Ellah that she satisfactorily attained the intended learning
outcome and deserved a grade of 86, with a remark that Ellah showed
proficiency in the said topic.
b. Ms. Mendoza measured the size of her classroom and found that the
floor area is only 20 square meters. She reported to the principal that the
classroom is too small for a class of 40 students.
5. Explain briefly the relevance of assessment to learners, teachers, parents
and other stakeholders.
A. COGNITIVE (Knowledge-based)
B. PSYCHOMOTOR (Skills-Based)
The psychomotor domain focuses on physical and mechanical skills involving coordination of the brain
and muscular activity. It answers the question , “What actions do I want learners to be able to perform?”
Internalizing values (characterization by a value or value complex) – acting consistently
with the new value. Sample verbs: act, discriminate, display, influence, listen, modify,
perform, practice, propose, qualify, question, revise, serve, solve, use, verify.
Sample objective: Join intramurals to play volleyball twice a week.
The affective domain emphasizes emotional knowledge. It tackles the question, "What actions do I want
learners to think or care about?"
1. Selected-Response Format. Students select from a given set of options to answer a question or a
problem. Categories: true-false, matching type, multiple-choice formats
Only one correct or best answer
Objective
Efficient
Items are easy to grade
2. Constructed-response format. This format is more useful in targeting higher levels of cognition. It
is subjective and demands that students create or produce their own answers in response to a
question, problem or task. Categories: brief-constructed response, performance tasks, essay items,
oral questioning
3. Teacher observations. A form of on-going assessment, usually done in combination with oral
questioning. Teachers regularly observe students to check on their understanding. This assessment
method can also be used to assess the effectiveness of teaching strategies and academic interventions.
4. Student self-assessment
Self-assessment is one of the standards of quality assessment. It is a process where students
are given a chance to reflect on and rate their own work and judge how well they have
performed in relation to a set of assessment criteria.
A learning target is a description of performance that includes what learners should know and be able
to do. Matching the assessment method with the learning outcomes requires an examination of the
evidence of learning needed and the targeted levels of knowledge, understanding, reasoning, skills,
product/performance and affect as manifested in the learning targets.
Knowledge and simple understanding. Covers the lower-order thinking skills of remembering,
understanding and applying. These can be assessed through pencil-and-paper tests, essays and oral
questions.
Deep understanding and reasoning. Involves the higher-order thinking skills of analyzing, evaluating and
synthesizing. Assessed through essays, performance tasks and oral questioning.
To assess skills, performance assessment is the superior assessment method. When used in real-life and
meaningful context, it becomes an “authentic assessment”.
Student affect can be assessed best by self-assessment. Oral questioning may also work in assessing
affective traits. Affect pertains to attitudes, interests, and values students manifest.
Tasks/Activities: All answers should be written in your activity notebooks.
1. Determine which domain and level of learning are targeted by the following
learning competencies.
2. SEQUENCING. Arrange the learning competencies using the hierarchy from lowest
to highest
Domain: Cognitive
Topic A. Quadratic Equations
______ (a) Solve quadratic equations by factoring
______ (b) Describe a quadratic equation
______ (c) Compare the four methods of solving quadratic equations
______ (d) Differentiate a quadratic equation from other types of equations
______ (e) Formulate real-life problems involving quadratic equations
______ ( f) Examine the nature of roots of a quadratic equation.
Domain: Psychomotor
Topic B. Basic Sketching
______ (a) Watch how tools are selected and used in sketching
______ (b) Create a design using combinations of lines, curves and shapes.
______ (c) Draw various lines, curves and shapes.
______ (d) Set the initial drawing position.
Domain: Affective
Topic C. Short Story
______ (a) Write down important details of the short story pertaining to
character, setting and events.
______ (b) Share inferences, thoughts and feelings based on the short story.
______ (c) Relate story events to personal experience
______ (d) Read carefully the short story.
______ (e)Examine thoughts on the issues raised in the short story.
3. Assessment scenarios
For each of the following scenarios, indicate which method provides the best match.
In determining the appropriate method, apply Table i. Justify your choice in one or
two statements.
1. Mrs. Duque wants to know if her students can identify the different parts of a
flower.
2. Ms. De la Cruz wants to determine if her Grade 1 pupils can write smoothly and legibly.
3. Ms. Uy wants to check if her students can subtract two-digit numbers.
4. Mr. Bogo wants to find out if his students can examine the quality of education in
the country.
5. Mrs. Dayao wants to see if her students have grasped the important elements of
the story before continuing on to the next instructional activity.
2. Criterion-Related Evidence. Refers to the degree to which test scores agree with an external
criterion. It examines the relation between an assessment and another measure of the same
trait.
Types:
Concurrent Validity – provides an estimate of a student's current performance in relation to a
previously validated or established measure. It is important to mention that data from the two
measures are obtained at about the same time. If the statistical analysis reveals a strong
correlation between the two sets of scores, then there is high criterion validity.
Predictive Validity – pertains to the power or usefulness of test scores to predict future
performance. For instance, can scores in the entrance/admission test (predictor) be used to
predict college success (criterion)? If there is a significantly high correlation between entrance
examination scores and first-year grade point averages assessed a year later, then the test has
predictive validity.
In testing correlations between two data sets for both concurrent and predictive validity, the
Pearson coefficient of correlation (r) or Spearman's rank-order correlation may be used.
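As an illustration of the predictive-validity computation described above, the short Python sketch below correlates hypothetical entrance-test scores (the predictor) with first-year GPAs (the criterion). All names and numbers are invented for illustration only.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: entrance-test scores (predictor) and
# first-year GPAs (criterion) for ten students.
entrance = [85, 78, 92, 70, 88, 75, 95, 60, 82, 90]
gpa      = [2.8, 2.5, 3.4, 2.1, 3.0, 2.4, 3.6, 1.9, 2.7, 3.2]

r = pearson_r(entrance, gpa)
print(round(r, 2))
```

A coefficient close to +1 would support the claim that the entrance test has predictive validity for college success; a coefficient near 0 would not.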
3. Construct-Related Evidence
A construct is an individual characteristic that explains some aspect of behaviour. Construct-
related evidence of validity is an assessment of the quality of the instrument used. It measures
the extent to which the assessment is a meaningful measure of an unobservable trait or
characteristic.
Threats to Validity
These factors are defects in the construction of assessment tasks that would render
assessment inferences inaccurate. The first four apply to traditional tests and performance
assessments. The remaining factors concern brief-constructed response and selected-response
items:
Unclear test directions
Complicated vocabulary
Ambiguous statements
Inadequate time limits
Inappropriate difficulty level of test items
Poorly constructed test items
Inappropriate test items for outcomes being measured
Short test
Improper arrangement of items
Identifiable pattern of answer
For a test to be valid, it has to be reliable.
RELIABILITY
Refers to reproducibility and consistency in methods and criteria. An assessment is said to be reliable if
it produces the same results when given to an examinee on two occasions. It is expressed as a correlation
coefficient. A high reliability coefficient denotes that if a similar test is readministered to the same
group of students, test results from the first and second testing are comparable.
Types of reliability:
Internal Reliability – assesses the consistency of results across items within a test
External Reliability – gauges the extent to which a measure varies from one use to another.
Sources of Reliability Evidence:
Stability. The test-retest reliability correlates scores obtained from two administrations of the
same test over a period of time. It assumes that there is no considerable change in the
construct between the first and second testing.
Equivalence. In this method, two different versions of an assessment tool are administered to
the same group of individuals. However, the items are parallel, i.e., they probe the same
construct, base knowledge or skill. The two sets of scores are then correlated in order to
evaluate the consistency of results across alternate versions.
Internal Consistency. Implies that a student who has mastered the content will get all or most of
the items correct, while a student who knows little or nothing about the subject matter will get
all or most of the items wrong.
Ways to establish internal consistency:
Spearman-Brown formula
Cronbach alpha
Kuder-Richardson Formula (KR)20/21
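One common way to apply the Spearman-Brown formula is the split-half method: the test is split into two halves (e.g., odd- and even-numbered items), the half scores are correlated, and the correlation is stepped up to estimate full-test reliability with r_full = 2·r_half / (1 + r_half). The Python sketch below uses invented half-test scores for eight examinees.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores of eight examinees on the two halves of a ten-item test.
odd_half  = [5, 4, 3, 5, 2, 4, 1, 3]   # total on items 1, 3, 5, 7, 9
even_half = [4, 4, 2, 5, 1, 3, 2, 3]   # total on items 2, 4, 6, 8, 10

r_half = pearson_r(odd_half, even_half)
# Spearman-Brown correction: estimated reliability of the full-length test
r_full = (2 * r_half) / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```

Note that the corrected coefficient is always higher than the half-test correlation, reflecting the fact that longer tests tend to be more reliable.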
For internal consistency, reliability coefficients are commonly interpreted as follows:
Less than 0.50 - the reliability is low
0.50 - 0.80 - the reliability is moderate
Greater than 0.80 – the reliability is high
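As a sketch of how a coefficient in these ranges is obtained, the Python fragment below computes Cronbach's alpha, α = (k/(k−1))·(1 − Σ item variances / variance of total scores), for an invented five-item, six-examinee score matrix. With dichotomous (0/1) items, as here, alpha coincides with KR-20.

```python
def variance(values):
    """Population variance of a list of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(scores):
    """scores: one row per examinee, one column per item."""
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # transpose into per-item columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 0/1 item scores for six examinees on a five-item quiz.
data = [
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 1, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
]
a = cronbach_alpha(data)
print(round(a, 2))
```

For this invented data set the coefficient falls in the moderate (0.50-0.80) band, suggesting the quiz would need more or better items before high-stakes use.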
Scorer or Rater Consistency
Certain characteristics of the raters contribute to errors like bias, halo effect, mood, fatigue,
among others.
Decision Consistency
Describes how consistent the classification decisions are rather than how consistent the scores are.
Discussion/Presentation
Classroom testing should be "teacher-friendly". Test development and grading should be practical and
efficient. Practical means 'useful', that is, it can be used to improve classroom instruction and for
outcomes assessment purposes. It likewise pertains to judicious use of classroom time. Efficient, in this
context, pertains to development, administration and grading of assessments with the least waste of
resources and effort.
Teachers’ assessments have important long-term and short-term consequences for students; thus
teachers have an ethical responsibility to make decisions using the most valid and reliable information
possible. Validity and reliability are aspects of fairness. Other aspects of fairness include the following:
Students’ Knowledge of Learning Targets and Assessments. This aspect of fairness speaks of
transparency. Transparency is defined here as disclosure of information to students about
assessments. This includes what learning outcomes are to be assessed and evaluated,
assessment methods and formats, weighting of items, allocated time in completing the
assessment and grading criteria or rubric. When students are informed of these assessment
details, they can adequately prepare and recognize the importance of assessment. They become
part of the assessment process, and assessment becomes learner-centered.
In regard to written tests, it is important that students know what is included and excluded in
the test. The scoring criteria should also be known. As for performance assessments, the criteria
should be divulged prior to assessment so that students will know what the teacher is looking
for in the actual performance or product.
Opportunity to Learn. Fair assessments are aligned with instruction that provides adequate time
and opportunities for all students to learn. Discussing an extensive unit in an hour is obviously
insufficient. Inadequate instructional approaches would not be just to the learners because they
are not given enough experiences to process information and develop their skills.
Prerequisite Knowledge and Skills. Students may perform poorly in an assessment if they do
not possess background knowledge and skills. The teacher must identify early on the
prerequisite skills necessary for completing an assessment. The teacher may also provide clinics
or reinforced tutorials to address gaps in students’ knowledge and skills. He/she may also
recommend reading materials or advise students to attend supplemental instruction sessions
when possible. These are forms of remediation.
Avoiding Stereotyping.
A stereotype is a generalization about a group of people based on limited observations of a small
sample of that group. Common stereotypes are racial, sexual and gender remarks. Stereotyping
is caused by preconceived judgments of the people one comes in contact with, which are sometimes
unintended. Teachers should avoid terms and examples that may be offensive to students of
different genders, races, religions, cultures or nationalities. Stereotypes can negatively affect
students' performance.
Avoiding Bias in Assessment tasks and Procedures
Assessment must be free from bias. There are two forms of assessment bias: offensiveness and
unfair penalization.
Offensiveness happens if test-takers get distressed, upset, or distracted about how an individual
or a particular group is portrayed in the test.
Unfair Penalization harms student performance due to test content: not because items are
offensive, but because the content caters to particular groups of the same economic
class, race, gender, etc., leaving other groups at a loss or a disadvantage.
Accommodating Special Needs
Teachers need to be sensitive to the needs of students. Certain accommodations must be given
especially for those who are physically or mentally challenged. The legal basis for
accommodation is contained in Sec 12 of Republic Act 7277 entitled “An Act Providing for the
Rehabilitation, Self-Development and Self-Reliance of Disabled Persons and Their Integration into
the Mainstream of Society and for Other Purposes". The provision talks about access to quality
education – that learning institutions should consider the special needs of learners with
disabilities in terms of facilities, class schedules, physical education requirements and other
related matters. Another is Sec. 32 of CHED Memorandum 09, s. 2013 on "Enhanced Policies and
Guidelines on Student Affairs and Services", which states that higher education institutions
should ensure that academic accommodation is made available to persons with disabilities and
learners with special needs.
Relevance
This can be thought of as an aspect of fairness. Irrelevant assessment would mean
short-changing students of worthwhile assessment experiences.
Additional criteria for achieving quality assessment:
Assessment should reflect the knowledge and skills that are most important for students
to learn.
Assessment should support every student’s opportunity to learn things that are
important
Assessment should tell teachers and individual students something that they do not know already.
Ethical Issues
Asking pupils to answer sensitive questions, such as those about their sexuality or problems in the
family, is unwarranted, especially without the consent of their parents. The following are other
ethical issues in testing:
Possible harm to the participants
Confidentiality of results
Deception regarding the purpose and use of the assessment
Temptation to assist students in answering the test
Tasks/Activities Answer the following in your activity notebooks.
A. Suppose you are the principal of a public high school. You received
complaints from students concerning their tests. Based on their
complaints, you decided to talk to the teachers and offered advice based
on ethical standards. For each scenario, write down your
recommendations citing specific aspects of ethics or fairness discussed in
the module.
Scenario 1. Eighth-grade students complained that their music teacher uses
only written tests as the sole method of assessment. They were not
assessed on their skills in singing and creating musical melodies.
Scenario 2. Grade 7 students complained that they were not informed that
there is a summative test in Algebra.
Scenario 3. Grade 9 students complained that there were questions in their
Science test on the last unit which was not discussed in class.
Scenario 4. Grade 7 students complained that they were not told what to
study for the mastery test. They were simply told to study and prepare for
the test.
Scenario 5. Students were tested on an article they read about Filipino
ethnic groups. The article depicted Ilocanos as stingy. Students who hail
from Ilocos were not comfortable answering the test.