BRM Unit3
Unidimensional Constructs:
A unidimensional construct refers to a concept or variable that can be measured
along a single dimension or scale. In other words, it represents one attribute,
factor, or trait that can be measured consistently across a set of items.
Characteristics:
a. All items in the measurement tool relate to a single underlying trait
or concept.
b. The measurement scale reflects one aspect of a concept, and there is
only one way to interpret the scores.
c. Responses to items on the scale are typically combined or summed
to provide a single score representing the construct.
Examples:
Height: This can be measured with a single dimension (inches,
centimetres).
Depression: If measured with a scale that focuses solely on
emotional symptoms, it can be considered unidimensional.
Job Satisfaction (Unidimensional): A scale that measures only one
aspect of job satisfaction, such as satisfaction with pay, could be
considered unidimensional.
Application:
A unidimensional scale is used when a concept is straightforward, and all
the items are related to one underlying factor or trait.
Example Question (Unidimensional):
"How satisfied are you with your salary?" (Rating scale from 1 to 5)
Multidimensional Constructs:
A multidimensional construct refers to a concept that is composed of multiple
underlying factors, attributes, or dimensions, each of which needs to be
measured separately. Each dimension captures a different aspect of the construct.
Characteristics:
1. The construct is divided into multiple dimensions, and each
dimension represents a distinct part of the overall concept.
2. Each dimension is measured using its own set of items, and the
scores from each dimension can be analysed independently or
combined to create a composite score.
3. This is common for complex constructs that cannot be fully
captured by a single scale or trait.
Examples:
Health: Health can be measured using different dimensions such as
physical health, mental health, and social well-being.
Intelligence: Intelligence is often measured as a multidimensional
construct with verbal, mathematical, and spatial reasoning abilities.
Job Satisfaction (Multidimensional): A multidimensional measure of
job satisfaction could include satisfaction with pay, relationships with
coworkers, work-life balance, and opportunities for promotion.
Application:
Multidimensional scales are useful when a concept is complex, and measuring
only one aspect would not provide a complete understanding of the construct.
Example Questions (Multidimensional):
"How satisfied are you with your salary?" (Dimension 1: Pay)
"How satisfied are you with your relationships with coworkers?"
(Dimension 2: Social Relationships)
"How satisfied are you with your work-life balance?" (Dimension 3: Work-
Life Balance)
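The scoring difference between the two construct types can be sketched in a short Python snippet. The dimensions, items, and response values below are hypothetical, chosen only to mirror the job-satisfaction example above:

```python
# Hypothetical responses to a multidimensional job-satisfaction scale,
# grouped by dimension (each item rated 1-5).
responses = {
    "pay": [4, 3],                # items in the Pay dimension
    "coworkers": [5, 4],          # items in the Social Relationships dimension
    "work_life_balance": [2, 3],  # items in the Work-Life Balance dimension
}

# Each dimension is scored separately (here, by averaging its items)...
dimension_scores = {dim: sum(items) / len(items)
                    for dim, items in responses.items()}

# ...and an overall composite can then be formed from the dimension scores.
composite = sum(dimension_scores.values()) / len(dimension_scores)

print(dimension_scores)
print(composite)
```

A unidimensional scale would be the degenerate case: a single key whose items are summed or averaged into one score.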
When to Use Each Type:
Unidimensional scales are appropriate when the concept you are
measuring is simple and can be fully captured by a single set of related
items (e.g., height, income, basic satisfaction).
Multidimensional scales should be used when the concept is complex and
involves multiple factors or attributes (e.g., well-being, intelligence, job
satisfaction across various areas).
Measurement Scales
In research and data collection, measurement scales refer to the methods used to
assign numbers or labels to variables in order to represent their characteristics.
These scales are crucial for quantifying and analysing data. There are four primary
types of measurement scales: nominal, ordinal, interval, and ratio. Each type
differs in terms of the information it provides and the mathematical operations
that can be performed on the data.
1. Nominal Scale
Definition: The nominal scale is the simplest measurement scale. It is used
for categorizing data without implying any order or quantitative value. The
categories are mutually exclusive, and each item is assigned to a category
without any ranking or numeric significance.
Characteristics:
Categories are qualitative (non-numeric).
There is no inherent order between categories.
Numbers assigned to categories (if any) are labels and have no
mathematical meaning.
Examples:
Gender: Male = 1, Female = 2.
Blood Type: A, B, AB, O.
Marital Status: Single, Married, Divorced, Widowed.
Mathematical Operations: Only counting and mode can be used (e.g., how
many people are in each category).
2. Ordinal Scale
Definition: The ordinal scale represents data that is ranked in a specific
order, but the differences between the ranks are not necessarily equal. It
indicates relative position but does not quantify the exact difference
between them.
Characteristics:
Categories have a logical order or ranking.
The distance between categories is unknown or unequal.
Data can be compared in terms of greater than, less than, or equal
to, but not in terms of how much greater or lesser.
Examples:
Customer Satisfaction: Very Dissatisfied = 1, Dissatisfied = 2, Neutral
= 3, Satisfied = 4, Very Satisfied = 5.
Race Position: 1st place, 2nd place, 3rd place, etc.
Education Level: High school, Bachelor's, Master's, PhD.
Mathematical Operations: Median, percentiles, and rank-order can be
used, but you cannot calculate the mean or standard deviation.
3. Interval Scale
Definition: The interval scale is a quantitative scale where the difference
between values is meaningful and consistent, but there is no true zero
point. This means that while you can add and subtract values,
multiplication and division are not meaningful.
Characteristics:
There is a clear and equal interval between the values.
The scale does not have an absolute zero (a true "nothing" point), so
you cannot compare values as ratios.
Differences between values are meaningful, but the lack of a true
zero makes it impossible to say "twice as much" or "half as much."
Examples:
Temperature (Celsius or Fahrenheit): The difference between 10°C
and 20°C is the same as between 20°C and 30°C, but 0°C is not the
absence of temperature.
IQ Scores: A score of 100 is not "twice as smart" as a score of 50, but
the difference between 90 and 110 is meaningful.
Calendar Years: The difference between the years 2000 and 2010 is
10 years, but year 0 does not indicate "no time."
Mathematical Operations: You can perform addition, subtraction, mean,
and standard deviation, but not ratios (multiplication and division).
4. Ratio Scale
Definition: The ratio scale is the most informative scale of measurement. It
has all the properties of an interval scale, but with the addition of a true
zero point, which allows for the comparison of ratios (e.g., "twice as much"
or "half as much").
Characteristics:
There is a true zero point, meaning that zero represents the
complete absence of the property being measured.
Both differences and ratios between values are meaningful.
You can perform all mathematical operations (addition, subtraction,
multiplication, division).
Examples:
Weight: A person weighing 0 kg has no weight, and 100 kg is twice as
heavy as 50 kg.
Height: A person who is 180 cm tall is twice as tall as someone who is
90 cm.
Income: Earning $0 means no income, and earning $100,000 is twice
as much as earning $50,000.
Mathematical Operations: You can perform all mathematical operations,
including addition, subtraction, multiplication, and division. Calculating the
mean, median, standard deviation, and ratios are all valid.
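The permissible statistics at each level of measurement can be illustrated with Python's standard statistics module; the data values below are made up purely for illustration:

```python
import statistics

# Nominal: categories only -- counting and the mode are meaningful.
blood_types = ["A", "O", "B", "O", "AB", "O"]
most_common = statistics.mode(blood_types)   # most frequent category

# Ordinal: order is meaningful -- median and percentiles are valid,
# but the mean assumes equal intervals and should be avoided.
satisfaction = [1, 2, 2, 3, 4, 5, 5]         # 1 = Very Dissatisfied ... 5 = Very Satisfied
middle = statistics.median(satisfaction)

# Interval: equal intervals, no true zero -- mean and standard
# deviation are valid, but ratios are not.
temps_c = [10.0, 20.0, 30.0]
avg_temp = statistics.mean(temps_c)

# Ratio: true zero -- all operations, including ratios, are valid.
weights_kg = [50.0, 100.0]
ratio = weights_kg[1] / weights_kg[0]        # "twice as heavy"

print(most_common, middle, avg_temp, ratio)
```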
Rating and Ranking Scales
Rating and ranking scales are two common types of measurement tools used in
research to evaluate preferences, opinions, or perceptions. Both are used to
gather data, but they differ in how they capture responses and the kind of
information they provide.
Rating Scales
Rating scales involve asking respondents to assign a score to a particular item,
concept, or experience based on a set scale. These scales allow participants to
express degrees of agreement, satisfaction, importance, or frequency regarding
an item. Rating scales provide absolute judgments rather than relative
comparisons, meaning each item is assessed independently of the others.
Characteristics:
o Respondents assign a numerical value to each item, based on their
perception or opinion.
o The scale typically ranges from low to high (e.g., 1 to 5, 1 to 10), and
the scores reflect the intensity of the respondent’s feelings or
opinions.
o There is no need to compare different items to each other directly.
o Examples of rating scales include the Likert scale, semantic
differential scale, and numerical rating scales.
Types of Rating Scales:
1. Likert Scale:
Respondents are asked to indicate their level of agreement
with a statement on a scale, typically ranging from strongly
disagree to strongly agree.
Example: "I am satisfied with my job" (1 = Strongly Disagree, 5
= Strongly Agree).
2. Numerical Rating Scale:
Respondents assign a numerical value to express their opinion.
Example: "On a scale of 1 to 10, how would you rate your
overall satisfaction with our service?" (1 = Very Dissatisfied, 10
= Very Satisfied).
3. Semantic Differential Scale:
This scale presents a pair of opposite adjectives (e.g.,
"satisfied" and "dissatisfied"), and respondents rate their
opinion on a scale between these two extremes.
Example: "How do you feel about the product's quality?"
(Satisfied 1 2 3 4 5 Dissatisfied)
Pros:
Easy to implement and analyse.
Provides granular data, offering a range of response options to
capture varying degrees of opinions.
Allows for the calculation of averages, standard deviations, and
correlations.
Cons:
Respondents may tend to select the middle or extreme points (e.g.,
neutral or very high/low).
May lack comparative insight between different items, as each item
is assessed independently.
Ranking Scales
Ranking scales require respondents to rank items in order of preference or
priority, indicating a relative relationship between the items. In this type of scale,
respondents are forced to make direct comparisons between the items to
establish an order from highest to lowest or best to worst.
Characteristics:
Respondents are asked to compare multiple items to each other and
rank them in a particular order based on their preferences or
importance.
The emphasis is on the relative position of each item, not on the
absolute level of preference or opinion.
Ranking scales do not provide information about the distance
between items, only the order.
Types of Ranking Scales:
1. Simple Ranking:
Respondents rank a list of items in order of preference.
Example: "Rank the following features in terms of importance
when buying a smartphone:
(Battery life, Camera quality, Price, Screen size)."
(1 = most important, 4 = least important)
2. Paired Comparison:
Respondents are presented with pairs of items and asked to
choose the preferred one from each pair. This is often done
with multiple pairs, and the item that is chosen most
frequently is considered the top-ranking item.
Example: "Which feature is more important to you when
buying a smartphone? (Battery life vs. Camera quality)."
3. Forced Ranking:
Respondents are asked to rank items from most to least based
on a specific criterion, but they must assign each item a unique
rank.
Example: "Rank the following cities in terms of preference for
travel: (Paris, London, New York, Tokyo)."
Pros:
Provides relative insights, showing which items are preferred over
others.
Effective when you want to force respondents to make trade-offs
between choices.
Useful in situations where prioritization is required, such as in
marketing or decision-making.
Cons:
Does not capture the degree of preference, only the order.
More difficult for respondents if there are many items to rank.
May lead to ties or random rankings if respondents find it hard to
differentiate between certain items.
Example Scenarios:
Rating Scale Example (Likert):
"On a scale from 1 to 5, how satisfied are you with the customer service?"
(1 = Very Dissatisfied, 5 = Very Satisfied)
Ranking Scale Example:
"Rank the following features of the new smartphone from most important
(1) to least important (5):
Battery life
Camera quality
Screen size
Price
Design"
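Rankings from several respondents are commonly aggregated by average rank. A minimal Python sketch, using hypothetical responses to the smartphone question above:

```python
# Hypothetical rankings of four smartphone features from three respondents
# (1 = most important, 4 = least important).
rankings = [
    {"Battery life": 1, "Camera quality": 2, "Price": 3, "Screen size": 4},
    {"Battery life": 2, "Camera quality": 1, "Price": 3, "Screen size": 4},
    {"Battery life": 1, "Camera quality": 3, "Price": 2, "Screen size": 4},
]

features = rankings[0].keys()
# Average rank per feature: lower means more important overall.
avg_rank = {f: sum(r[f] for r in rankings) / len(rankings) for f in features}

# Order features from most to least important.
overall_order = sorted(avg_rank, key=avg_rank.get)
print(overall_order)
```

Note that the result preserves only order, as the Cons above point out: an average rank of 1.33 versus 2.0 says which feature wins, not by how much.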
Differential Scaling
Differential Scaling, or semantic differential scaling, was developed by
psychologist Charles Osgood and colleagues in the 1950s. It is a type of bipolar
rating scale that measures the connotative meaning of concepts. This method
asks respondents to rate a concept (person, object, or idea) on a series of bipolar
adjective pairs that represent opposite ends of a scale (e.g., good-bad, happy-
sad).
Key Characteristics of Differential Scaling:
1. Bipolar Adjectives:
Respondents are presented with a set of adjective pairs that
represent opposing traits or descriptors of the concept being
measured.
The adjective pairs are placed at opposite ends of a scale, often a 7-
point or 5-point scale, with a neutral midpoint.
2. Concept Evaluation:
Respondents are asked to rate the concept or object by placing it on
the scale between the two opposing adjectives.
The idea is to capture the emotional or subjective connotation that
the concept holds for the respondent.
3. Dimensions of Measurement:
Differential scaling is used to measure multiple dimensions of a
concept, including:
Evaluation: How favorable or unfavorable the concept is (e.g.,
good-bad).
Potency: How strong or weak the concept is (e.g., strong-
weak).
Activity: How active or passive the concept is (e.g., active-
passive).
4. Score Calculation:
The scores are calculated by averaging the respondent’s ratings
across the various adjective pairs, resulting in a multidimensional
profile of the concept.
Example of Differential Scale:
If you wanted to measure a person’s attitude toward a new product, you might
ask them to rate the product on bipolar scales like:
Useful 1 2 3 4 5 Useless
Attractive 1 2 3 4 5 Unattractive
Affordable 1 2 3 4 5 Expensive
The ratings across all these scales would provide a profile of how the respondent
perceives the product in terms of usefulness, attractiveness, and affordability.
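Scoring such a profile is straightforward: average the ratings across the adjective pairs. The single respondent below is hypothetical; on these scales 1 is the favorable pole, so a lower mean indicates a more favorable overall impression. If some pairs were printed with the poles reversed, those items would need to be reverse-coded before averaging.

```python
# Hypothetical semantic-differential ratings for one respondent
# (1 = favorable pole, 5 = unfavorable pole on each adjective pair).
ratings = {
    ("Useful", "Useless"): 2,
    ("Attractive", "Unattractive"): 1,
    ("Affordable", "Expensive"): 4,
}

# Overall evaluation score: the mean across adjective pairs
# (lower = more favorable with this orientation).
overall = sum(ratings.values()) / len(ratings)
print(overall)
```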
Advantages:
Simple and easy to administer.
Provides rich, multidimensional insights into how people perceive
concepts.
Captures emotional reactions and underlying attitudes toward the concept.
Disadvantages:
Respondents may struggle with interpreting certain adjective pairs.
May be subject to social desirability bias, where respondents rate concepts
based on what they think is acceptable or favourable.
Pilot Testing
Before launching the final questionnaire, it is important to pilot test it with a
small group from your target audience. This helps to:
Identify any confusing or unclear questions.
Ensure that the questionnaire takes the estimated amount of time to
complete.
Make necessary revisions based on feedback from the pilot test.
A well-designed questionnaire is essential for gathering accurate, reliable, and
useful data. The design process involves understanding your research objectives,
crafting clear and unbiased questions, organizing the questionnaire logically, and
ensuring the right mix of question types to achieve your goals. Pretesting and
thoughtful administration will help maximize response rates and data quality.
Development and Testing in research and product design refers to the processes
of creating a product, system, or tool and then evaluating its functionality,
usability, reliability, and performance. These steps are critical to ensure the final
product meets its intended purpose and performs as expected.
In the context of both software development and research, development is the
phase where the product or tool is designed and built, while testing is the phase
where it is systematically evaluated to identify any issues and ensure that it works
as intended.
Development
Development involves the planning, designing, and construction of a system,
product, or tool. It is an iterative process that typically follows these stages:
1. Requirements Gathering:
Understanding the needs of stakeholders or users.
In research, this could mean defining the goals of a study,
determining what needs to be measured, and outlining the tools
(e.g., a questionnaire or survey).
In product design, it involves gathering functional and non-functional
requirements (features, user interface needs, performance
expectations).
2. Planning and Design:
Outlining how the product or tool will be built.
In software, this involves designing the system architecture, database
structures, and user interfaces.
In research, this could involve creating study protocols,
questionnaires, or measurement tools that meet the study's
objectives.
3. Prototyping or Initial Development:
Developing an initial version of the product (a prototype or first
draft).
In research, this might include drafting a pilot version of a survey,
questionnaire, or tool to be used for data collection.
4. Iteration and Refinement:
After the initial development, the product is refined based on
feedback.
Iterative cycles of improvement are typical, especially in agile
methodologies, where development is ongoing and incremental.
5. Documentation:
Throughout the development phase, proper documentation is
critical. This includes user manuals, technical specifications, and
process records.
In research, documentation might involve writing protocols, detailing
study methods, and maintaining data collection tools.
Testing
Testing refers to the process of evaluating the product or system to ensure it
meets the defined requirements and performs correctly. It involves identifying
bugs or issues and correcting them before full-scale implementation or
deployment.
Types of Testing:
1. Unit Testing:
Focuses on testing individual components or units of the system for
correctness.
In software, this would involve testing individual functions, methods,
or classes.
In research, it could involve validating the accuracy of individual
survey questions or measurement items to ensure they function as
intended.
2. Integration Testing:
Evaluates how different components of the system work together.
In software, it ensures that modules or features integrate properly.
In research, this could involve testing whether different sections of a
questionnaire flow logically and whether they collectively measure
the intended construct.
3. System Testing:
Testing the entire system or product to verify that it meets all
requirements.
In software, this would be a comprehensive test of the system as a
whole.
In research, this could involve pilot testing the full survey or tool to
identify any areas where respondents may experience difficulties.
4. Usability Testing:
Evaluates how easy and user-friendly the system is for end users.
This can be crucial in both product development and research. A tool
or product must be intuitive and easy to use to ensure high levels of
user engagement.
For research surveys, this involves ensuring that the wording of
questions is clear, the layout is easy to navigate, and respondents do
not experience fatigue or confusion.
5. Validation Testing:
Confirms that the product performs as expected under real-world
conditions.
In research, this means testing the validity of a tool or instrument
(e.g., ensuring a questionnaire measures what it is intended to
measure).
6. Stress Testing (Performance Testing):
Tests the limits of a product under extreme conditions.
In software, this involves testing how the system handles heavy
loads, traffic, or data.
In research, stress testing could mean evaluating how a data
collection tool handles large datasets or whether the tool is effective
with various respondent groups.
7. Pilot Testing:
Conducting a smaller version of the full-scale deployment.
In research, pilot tests help identify any issues with data collection
tools (e.g., surveys) and allow for adjustments before a larger study.
Development and testing are crucial steps in both research and software projects.
Proper development ensures that the product is built according to the necessary
requirements, while testing validates its functionality, performance, and usability.
Both processes work in tandem to improve the final outcome, whether it’s a data
collection tool in research or a fully functional software product.
Reliability and Validity are two key concepts in research that are essential for
ensuring the accuracy and credibility of measurement tools, such as
questionnaires, tests, or surveys. These concepts are used to assess the quality of
the data and the research methods used to gather it.
Reliability
Reliability refers to the consistency or stability of a measurement tool. A reliable
tool will produce the same results under consistent conditions. If a research
instrument is reliable, it means that it consistently measures what it is supposed
to measure over time.
Types of Reliability:
1. Test-Retest Reliability:
Assesses the consistency of a test over time. The same test is given to
the same group of people at two different points in time, and the
results are compared.
Example: A survey on job satisfaction given to employees twice, a
few weeks apart. If the results are similar, the survey has high test-
retest reliability.
2. Inter-Rater Reliability:
Measures the level of agreement between different people (raters)
who are observing or assessing the same phenomenon.
Example: If two interviewers score a candidate's performance in an
interview similarly, their assessments have high inter-rater reliability.
3. Parallel-Forms Reliability:
Involves creating two equivalent forms of a test, where the questions
are different but measure the same concept. The results from both
forms are then compared.
Example: A psychology test where two different versions are
administered to the same group, and the scores are correlated.
4. Internal Consistency Reliability:
Measures how well the items within a test measure the same
concept. It assesses the consistency of results across items on the
same test.
Example: A questionnaire with multiple questions assessing anxiety.
Internal consistency ensures that all items related to anxiety produce
similar results.
The most common measure of internal consistency is Cronbach’s Alpha, which
ranges from 0 to 1. A higher value indicates greater internal consistency.
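Cronbach's Alpha can be computed directly from its formula, α = k/(k − 1) · (1 − Σσᵢ² / σ_total²), where k is the number of items, σᵢ² is the variance of item i, and σ_total² is the variance of respondents' total scores. A sketch with made-up questionnaire data:

```python
import statistics

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item scores.

    scores: one row per respondent; each row lists that respondent's
    answers to the k items of the scale.
    """
    k = len(scores[0])
    items = list(zip(*scores))  # one tuple per item (column)
    item_vars = [statistics.pvariance(col) for col in items]
    total_var = statistics.pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 3-item anxiety scale answered by 4 respondents (1-5 each).
scores = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # high internal consistency for these made-up data
```

Population or sample variance can be used, as long as the choice is consistent: the scaling factor cancels in the ratio.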
Why Reliability Matters:
Reliability ensures that the measurement tool produces stable and
consistent results.
Without reliability, any measurement tool or instrument is essentially
useless because it could give different results under the same conditions.
Validity
Validity refers to how well a test or measurement tool measures what it is
intended to measure. A valid instrument accurately reflects the concept it is
supposed to measure. While reliability focuses on consistency, validity focuses on
accuracy.
Types of Validity:
1. Content Validity:
Ensures that the test covers all aspects of the concept being
measured. The items on the test should represent the entire domain
of the concept.
Example: A mathematics test designed to assess algebra skills should
include a comprehensive range of algebra topics, not just a subset.
2. Construct Validity:
Refers to whether a test or instrument truly measures the theoretical
concept or construct it claims to measure.
Example: A personality test claiming to measure introversion must
truly assess traits that define introversion, such as preference for
solitude and reflection.
Construct validity can be divided into:
Convergent Validity: The degree to which two measures that should
be related are actually related.
Discriminant Validity: The degree to which measures that should not
be related are indeed unrelated.
3. Criterion-Related Validity:
Assesses whether a measure is related to an outcome or criterion. It
is divided into two subtypes:
Predictive Validity: Refers to how well a measure predicts
future outcomes. For instance, SAT scores predicting college
success.
Concurrent Validity: Assesses the relationship between the
measure and a criterion measured at the same time. For
instance, a job performance test correlating with actual job
performance.
4. Face Validity:
Refers to the degree to which a test appears to measure what it is
supposed to measure, based on a superficial inspection.
Example: A depression questionnaire may have face validity if it
includes questions about sadness, sleep patterns, and loss of interest
in activities.
Although face validity is the simplest form of validity, it is not necessarily scientific
or sufficient by itself.
Why Validity Matters:
Validity ensures that the research findings are meaningful and accurate.
Without validity, even a highly reliable test is useless because it doesn’t
measure what it’s supposed to.
SAMPLING
Sampling is a crucial process in research that involves selecting a subset of
individuals or items from a larger population to make inferences about that
population. This approach is often necessary due to practical constraints like time,
cost, and accessibility. Here's an overview of the steps in sampling, the different
types of sampling methods, and considerations for determining sample size.
Hypothesis Formulation
1. Definition of Hypothesis: A hypothesis is a statement that can be tested
statistically. It expresses a prediction or an assumption about a population
parameter or a relationship between variables.
2. Types of Hypotheses:
Null Hypothesis (H₀): This hypothesis states that there is no effect or no
difference. It is the hypothesis that researchers aim to test against.
Example: H₀: μ = μ₀ (the population mean is equal to a specific value).
Alternative Hypothesis (H₁): This hypothesis states that there is an effect
or a difference. It represents what the researcher aims to support.
Steps in Hypothesis Testing
1. Formulate the Hypotheses: Define the null and alternative hypotheses
based on the research question.
2. Choose the Significance Level (α): Typically set at 0.05 or 0.01, this
level represents the probability of rejecting the null hypothesis when it is
true.
3. Select the Appropriate Test: Choose a statistical test based on the data
type, distribution, and research question.
4. Calculate the Test Statistic: Use sample data to calculate the test statistic
(e.g., T-statistic, Z-statistic).
5. Determine the Critical Value or P-value: Compare the test statistic to a
critical value from the statistical distribution or calculate the p-value.
6. Make a Decision: Reject the null hypothesis if the test statistic exceeds the
critical value or if the p-value is less than α. Fail to reject the null
hypothesis if the test statistic does not exceed the critical value or if the p-
value is greater than α.
7. Draw Conclusions: Interpret the results in the context of the research
question.
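The steps above can be traced with a one-sample t-test in plain Python. The sample data are hypothetical, and the critical value is hardcoded from a t-table for df = 5 at α = 0.05 (two-tailed):

```python
import math

# Step 1: hypotheses. H0: mu = 12 vs. H1: mu != 12 (hypothetical study).
sample = [12, 14, 15, 13, 16, 14]
mu0 = 12.0

# Steps 2-3: alpha = 0.05, one-sample t-test chosen for a single small sample.
n = len(sample)
mean = sum(sample) / n
s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))  # sample std dev

# Step 4: calculate the test statistic.
t_stat = (mean - mu0) / (s / math.sqrt(n))

# Step 5: two-tailed critical value for df = 5, alpha = 0.05 (from a t-table).
t_crit = 2.571

# Step 6: decision rule.
decision = "reject H0" if abs(t_stat) > t_crit else "fail to reject H0"
print(round(t_stat, 3), decision)
```

Step 7 is the interpretation: here the sample mean of 14 differs significantly from the hypothesized value of 12.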
Types of ANOVA
1. One-Way ANOVA:
o Used when there is one independent variable (factor) with three or
more levels (groups), and you want to compare the means of these
groups.
o Example: Testing if different diets (low-carb, low-fat, and high-
protein) result in different weight loss after 6 months.
2. Two-Way ANOVA:
o Used when there are two independent variables (factors), and it
allows you to test not only the main effects of each factor but also
their interaction effect.
o Example: Studying the effects of different diets (low-carb vs. high-
protein) and exercise regimes (no exercise vs. moderate exercise) on
weight loss.
3. Repeated Measures ANOVA:
o Used when the same subjects are used for multiple treatments or
measurements, i.e., when data is collected from the same subjects at
different times or under different conditions.
o Example: Measuring the effect of a drug over three different time
points (before treatment, 1 month after treatment, and 3 months
after treatment).
4. Multivariate Analysis of Variance (MANOVA):
o An extension of ANOVA that allows for the analysis of multiple
dependent variables simultaneously.
o Example: Testing the effect of different teaching methods on both
students' test scores and class participation rates.
Scenario:
You are testing three types of fertilizers (A, B, and C) to see if they
have different effects on plant growth (measured by the height of the
plants after one month). You have three groups of plants, each
treated with a different fertilizer, and you want to see if there is any
significant difference in plant heights among the groups.
Data:
Fertilizer Type   Plant 1 Height (cm)   Plant 2 Height (cm)   Plant 3 Height (cm)
A                          10                    12                    14
B                          15                    17                    16
C                           8                     9                    10
Null Hypothesis (H₀): The means of the plant heights are the same for all
fertilizers.
H₀: μ_A = μ_B = μ_C
where μ_A, μ_B, and μ_C are the mean heights for Fertilizer A, B, and C,
respectively.
Alternative Hypothesis (H₁): At least one fertilizer results in a significantly
different mean plant height.
H₁: At least one of μ_A, μ_B, μ_C is different.
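This one-way ANOVA can be worked out directly from the table. A pure-Python sketch of the F-statistic calculation:

```python
# Plant heights (cm) from the fertilizer example above.
groups = {
    "A": [10, 12, 14],
    "B": [15, 17, 16],
    "C": [8, 9, 10],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = sum(all_values) / len(all_values)

# Between-group sum of squares (SSB): variation of group means
# around the grand mean, weighted by group size.
ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
          for g in groups.values())

# Within-group sum of squares (SSW): variation inside each group.
ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
          for g in groups.values())

df_between = len(groups) - 1               # 2
df_within = len(all_values) - len(groups)  # 6
f_stat = (ssb / df_between) / (ssw / df_within)
print(round(f_stat, 2))
```

Since F ≈ 18.5 well exceeds the α = 0.05 critical value F(2, 6) ≈ 5.14 from an F-table, the null hypothesis would be rejected: at least one fertilizer produces a different mean plant height.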
Regression Analysis
There are different types of regression analysis, but the most common are simple linear
regression and multiple linear regression.
Conclusion
Hypothesis formulation and testing are fundamental to statistical analysis in
research. By utilizing various tests like T-tests, Z-tests, ANOVA, Chi-square tests,
and regression analysis, researchers can draw meaningful conclusions about
population parameters and relationships between variables. Each test has its
specific application, assumptions, and interpretation, making it crucial for
researchers to select the appropriate method based on their research design and
data characteristics.