Task 3 (Bell, Bryman and Harley, 2018)
Referring to the research methods literature, write an academic essay critiquing the validity and
reliability of three data analysis techniques (approximately 1,500 words; 25% of the mark).
1. Introduction
Data collection forms a major part of the research process, but the collected data must be
analysed before sense can be made of it. There are multiple methods for analysing the
quantitative data gathered in surveys. The purpose of this essay is to critique the validity and
reliability of three such data analysis techniques: MaxDiff analysis, TURF analysis, and
cross-tabulation. MaxDiff analysis is a statistical approach used to measure which attributes
matter most to customers, for example in purchase decisions, by having respondents repeatedly
choose the best and worst options from small sets of items. TURF (Total Unduplicated Reach and
Frequency) analysis is a quantitative tool that estimates the overall market reach of a product
line, a service line, or a mixture of both. Cross-tabulation is the most frequently used method
of quantitative data analysis; it is a favoured strategy because it employs a simple tabular
structure to compare different data sets, comprising categories that are mutually exclusive or
related.
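To make the tabular structure concrete, the following is a minimal sketch of a cross-tabulation built with pandas; the survey data (a `gender` column and a `prefers` column) are invented for illustration:

```python
# Minimal cross-tabulation sketch using pandas; the survey data are invented.
import pandas as pd

responses = pd.DataFrame({
    "gender":  ["F", "M", "F", "M", "F", "M", "F", "M"],
    "prefers": ["A", "A", "B", "B", "A", "B", "A", "A"],
})

# Each cell counts respondents falling in both categories;
# margins=True adds the row and column totals.
table = pd.crosstab(responses["gender"], responses["prefers"], margins=True)
print(table)
```

Each cell holds a count over two mutually exclusive category memberships, which is what makes the table easy to read at a glance.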
As with most kinds of analysis, there are limitations. MaxDiff yields relative, not absolute,
importance levels, so it is crucial that the attributes tested are chosen by people with insight
into which characteristics are likely to matter. The full set of options should be presented in
the questionnaire, and the respondent must answer consistently. In contrast to conjoint and
other discrete-choice methods, MaxDiff does not model interaction terms: the customer appeal
estimated for an item is treated as independent of the other features in the test set. MaxDiff
output also tends to separate the strongest items clearly, while the ratings of weaker items are
less distinct (Bell, Bryman and Harley, 2018). The MaxDiff question format can also be immensely
frustrating for respondents to complete, producing early break-offs and unfinished surveys; the
questionnaire should therefore be designed in such a way as to let respondents understand what
to expect (Hennink, Hutter and Bailey, 2020).
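The "relative, not absolute" point can be seen in the simplest MaxDiff scoring method, best-minus-worst counting. The sketch below uses invented tasks and item names; each task records the items shown, the one picked as "most" and the one picked as "least":

```python
# Hedged sketch: count-based MaxDiff scoring (best picks minus worst picks,
# divided by the number of times each item was shown). Data are invented.
from collections import defaultdict

# Each task: (items shown, item picked as "most", item picked as "least")
tasks = [
    (["price", "battery", "camera", "weight"], "battery", "weight"),
    (["price", "battery", "screen", "camera"], "price", "camera"),
    (["screen", "weight", "battery", "price"], "battery", "weight"),
]

shown = defaultdict(int)
score = defaultdict(int)
for items, best, worst in tasks:
    for item in items:
        shown[item] += 1
    score[best] += 1
    score[worst] -= 1

# Best-worst scores rank items against each other only; they say nothing
# about how appealing any item is on an absolute scale.
bw = {item: score[item] / shown[item] for item in shown}
```

A score of zero here means "picked as best and worst equally often", not "unimportant", which is exactly why interpretation requires the relative framing described above.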
Validity
MaxDiff has noticeable drawbacks for respondents with particular data-quality difficulties. To
date, however, studies have found no substantial difference in research quality with and without
MaxDiff. Given moderate-quality problems, the results are robust; extreme speeding and
straight-lining compress the scores and encourage rank-order changes. When MaxDiff quality
problems overlap with other quality measures, the results and the profiles of those interviewed
are further skewed, and a distinct subset of respondents fails at MaxDiff independently of other
quality criteria. Respondents who take online surveys carefully, use "none" options
appropriately, and give rich multiple responses succeed; those who hurry through questionnaires
straight-line on rating scales and make careless MaxDiff picks. The capacity to keep answers
consistent between "most" and "least" is the biggest factor in MaxDiff success, validating that
speeding and straight-lining are crucial measures of quality; MaxDiff violations and validation
checks therefore capture fresh and different things. As examined in this research, several
quality indicators had minimal effect on quality, and unvalidated trimming of respondents may
cause unwanted alterations to the results. In large part, the unfavourable perception of visibly
"professional" respondents is false or at worst overstated (Halperin and Heath, 2020).
Reliability
MaxDiff's standard output is normally a ranking of the items evaluated, on the basis of rescaled
utility scores. Because MaxDiff produces quantitative data, further multivariate analyses such
as TURF and segmentation can be carried out on it. MaxDiff is superior to other existing
solutions, but not without defects: it extends the survey by several minutes, especially if many
items are tested; the results apply only to the items examined; and the relative measure it
delivers may or may not be what is desired, depending on the aims of the study.
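One common convention for the rescaling mentioned above is to map raw utilities onto a 0-100 probability-like scale so that the scores sum to 100; the raw utility values below are invented for illustration:

```python
# Hedged sketch: rescaling raw MaxDiff utilities onto a 0-100 scale
# (exponentiate, then normalise so the scores sum to 100). Values invented.
import math

raw_utils = {"battery": 1.2, "price": 0.4, "screen": 0.0,
             "camera": -0.5, "weight": -1.1}

total = sum(math.exp(u) for u in raw_utils.values())
rescaled = {k: 100 * math.exp(u) / total for k, u in raw_utils.items()}
```

The rescaling preserves the ranking of the items while making the scores easier to present, but it does not change their relative nature: a rescaled score still only compares the items actually tested.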
Validity
Andrew, Pedersen and McEvoy (2019) state that there are many validity threats that raise
problems regarding the accuracy of data and outcomes, or the use of statistical tests to
determine outcome effects. A precise definition of internal validity is the essential criterion
for interpreting an experiment. Internal threats to validity include experimental techniques,
procedures, or participant experiences that affect the researcher's capacity to draw the right
conclusions from experimental data. These arise from the use of insufficient techniques, such
as changing the equipment or tools during an experiment, changes in the research control group,
and so on. Because of such insufficient methods, the experimenter must determine whether or not
the outcome of the experiment will change as a result (Daniel, 2019).
Reliability
The other shortcoming of TURF is that it does not properly measure customers' preference for
variety. Most of us, for instance, buy many flavours of ice cream, and our choices may vary from
month to month and year to year. In the TURF model, popular flavours tend to be over-weighted
and less popular flavours under-weighted: when TURF results are compared with real market
shares, researchers tend to find that TURF exaggerates the market share of the most popular
flavours. Choice modelling offers a more precise estimate of market share. TURF models also
include no competing brands and no price fluctuations, and TURF cannot measure cannibalisation
or the source of volume, whereas choice modelling can [ CITATION Car18 \l 2057 ].
For each variety or product-line variation, TURF assumes 100% distribution and 100% awareness.
Such assumptions are seldom accurate. Choice modelling allows distribution levels and awareness
levels to be incorporated, by flavour or product variant, into the simulator, so that more
realistic product-line scenarios can be explored and evaluated.
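The core TURF calculation itself is simple to sketch: for every candidate line-up, reach is the share of respondents who would buy at least one variant in it. The respondents and flavour names below are invented, and the 100% distribution/awareness assumption criticised above is baked in (anyone who likes a flavour is assumed to find and recognise it):

```python
# Hedged sketch: exhaustive TURF reach calculation over invented data.
# Reach = share of respondents who would buy at least one offered variant.
from itertools import combinations

# Each set lists the flavours a hypothetical respondent would buy.
respondents = [
    {"vanilla", "chocolate"},
    {"vanilla"},
    {"mint"},
    {"chocolate", "mint"},
    {"strawberry"},
]
flavours = ["vanilla", "chocolate", "mint", "strawberry"]

def reach(line_up):
    hit = sum(1 for r in respondents if r & set(line_up))
    return hit / len(respondents)

# Best two-flavour line-up by reach. Note the implicit TURF assumption:
# 100% distribution and awareness for every variant in the line-up.
best = max(combinations(flavours, 2), key=reach)
```

Because reach only asks "would they buy at least one?", the calculation is blind to how often each flavour is bought, which is precisely the over-weighting of popular flavours described above.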
Validity
Note, though, that the absence of a cross-tabulated association would be a solid signal of
non-causation, provided the modelling is correct (Daniel, 2018). Over-reporting, such as linking
n-grams to the subject being examined, inflates the counts and can again cause the qualitative
data to be misconstrued; and it is, of course, far easier for the social analyst to examine
their own work for over-reporting than for under-reporting. Raw data can be tough to comprehend:
even when the data are minimal, it is all too simple to arrive at misconceptions merely by
looking at the statistics. Cross-tabulation is an essential analysis strategy that provides
unequivocal conclusions and limits the risk of misunderstanding or imprecision, and it can aid
in extracting vital information from raw data. When raw data are displayed as a single flat
table, the insights are hard to perceive; because cross-tabulation shows the association between
categorical variables unambiguously, researchers can gain a deeper and better understanding,
including insights that are often missed, or time-consuming to extract, with more complicated
statistical analysis methods (Snyder, 2019).
Reliability
Text values cannot be used in cross-tabulation, because each cell is computed by a summary
function, and summary functions only generate numbers; when a text field is inserted as a cell
field, the report outputs a count of that field instead. When there are multiple response
variables, a significant number of tables can arise from the many ways in which the variables
can be cross-tabulated, and it may be difficult to identify which cross-tabulations are
significant until they have all been produced. Where the sample is small, there may be a
restricted number of cases that can be tabulated together.
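When many cross-tabulations must be screened for significance, a standard approach is a chi-square test of independence on each table. The sketch below computes the statistic by hand for one invented 2x2 table; the 5% critical value of 3.841 for one degree of freedom is a standard tabulated figure:

```python
# Hedged sketch: a chi-square statistic computed by hand for one 2x2
# cross-tabulation, to screen which tables show a real association.
# The observed counts are invented.

observed = [[30, 10],   # e.g. group A: yes / no
            [20, 40]]   # e.g. group B: yes / no

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence of the row and column variables.
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

# For a 2x2 table (1 degree of freedom) the 5% critical value is 3.841;
# a statistic above it suggests the association is unlikely to be chance.
significant = chi2 > 3.841
```

Running such a test per table gives a principled filter, although with many tables the small-sample caveat above still applies: expected cell counts below about 5 make the chi-square approximation unreliable.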
References
Bell, E., Bryman, A. and Harley, B., 2018. Business research methods. Oxford University
Press.
Maher, C., Hadfield, M., Hutchings, M. and de Eyto, A., 2018. Ensuring rigor in qualitative
data analysis: A design research approach to coding combining NVivo with
traditional material methods. International Journal of Qualitative Methods, 17(1),
p.1609406918786362.
Richards, K.A.R. and Hemphill, M.A., 2018. A practical guide to collaborative qualitative
data analysis. Journal of Teaching in Physical Education, 37(2), pp.225-231.
Daniel, B.K., 2018. Empirical verification of the “TACT” framework for teaching rigour in
qualitative research methodology. Qualitative Research Journal.
Snyder, H., 2019. Literature review as a research methodology: An overview and guidelines.
Journal of Business Research, 104, pp.333-339.
Andrew, D.P., Pedersen, P.M. and McEvoy, C.D., 2019. Research methods and design in
sport management. Human Kinetics.
Daniel, B.K., 2019. Using the TACT Framework to Learn the Principles of Rigour in
Qualitative Research. Electronic Journal of Business Research Methods, 17(3).
Hennink, M., Hutter, I. and Bailey, A., 2020. Qualitative research methods. Sage.
Halperin, S. and Heath, O., 2020. Political research: methods and practical skills. Oxford
University Press, USA.