in the total population. Faced with the impossibility of this task, the researcher
compromises; she studies not the entire population, but rather a sample of the
population. The researcher now has the resources to carry out this circumscribed
research effort. In studying only a sample, however, the researcher gives up a critical
element of her research conclusions: certainty. She acknowledges that other
equally diligent and motivated researchers can collect their own independent samples.
Because these samples contain different patients with different experiences and
different data, how can we decide which one is most representative of the total
population?
This difference in the data across samples is sampling variability, and its presence
raises the very real possibility that the sample, merely through the play of chance, will
mislead the researcher about the characteristics of the total population. This can
happen even when the investigator uses modern sampling techniques to obtain a
representative sample. In the end, the investigator has one and only one sample, and
therefore, she can never be sure whether conclusions drawn from the sample data truly
characterize the entire population.
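To make sampling variability concrete, consider the following minimal sketch. It is an illustration we have constructed, not data from any study: the simulated population of blood pressures, its mean of 130 mm Hg, its standard deviation of 15 mm Hg, and the sample size of 50 are all arbitrary assumptions chosen only for demonstration.

```python
import random
import statistics

# Hypothetical population: systolic blood pressures (mm Hg) for 100,000
# patients. The mean of 130 and SD of 15 are arbitrary assumptions.
random.seed(1)
population = [random.gauss(130, 15) for _ in range(100_000)]
true_mean = statistics.mean(population)

# Five equally diligent researchers each draw their own random sample
# of 50 patients and estimate the population mean from it.
for researcher in range(1, 6):
    sample = random.sample(population, 50)
    print(f"Researcher {researcher}: sample mean = {statistics.mean(sample):.1f} "
          f"(population mean = {true_mean:.1f})")
```

Every sample here is drawn correctly, yet the five estimates differ from one another and from the population mean. No researcher has done anything wrong; the scatter is the play of chance itself.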
If executed correctly, a research program will obtain its sample of data randomly and
will lead to accurate statistical computations. When more than the data is random,
however, when the research plan itself shifts in response to the data, the underlying
assumptions no longer hold, and the statistical computations are corrupted: they are
incorrect and cannot be corrected. Consider the
following example.
Many researchers would have no problem with Dr C’s EDV-to-ESV end point switch,
arguing that each end point is a measurement of the same underlying physiology and
pathophysiology. Why should Dr C be criticized for making an initial wrong guess about
the best end point to choose? Because she had the insight to measure several different
indicators of left ventricular function, perhaps she should be commended for (1) her
foresight in measuring ESV and (2) her courage in raising the significant ESV result to a
prominent place in her report. Others among us would be uncomfortable with the end
point change but may be uncertain about exactly what the problem is. We might say that the
decision to change the end point was “data driven.” Well, what’s so wrong with that?
Aren’t the results of any study data driven?
What is wrong with Dr C’s analysis is not that the data are random—this we expect.
What is wrong is that the research effort itself is random. The initial idea was to execute
a fixed, anchored research plan that would accept random data. If this were the case,
statisticians and epidemiologists would know how to compute relative risks with
standard errors, confidence intervals, and P values accurately. In this familiar setting,
the random data component would be appropriately channeled to type I and type II error
for the medical community’s interpretation.
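For a fixed, prespecified end point, these computations are indeed routine. The sketch below is our illustration of one such standard calculation, a relative risk with its 95% confidence interval and two-sided P value from a 2x2 table, using the usual large-sample formula for the standard error of the log relative risk; the event counts are invented for demonstration.

```python
import math

def relative_risk(events_trt, n_trt, events_ctl, n_ctl):
    """Relative risk with 95% CI and two-sided P value for a fixed,
    prespecified end point (log-RR normal approximation)."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # Large-sample standard error of log(RR).
    se = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    z = math.log(rr) / se
    # Two-sided P value from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rr, (lo, hi), p

# Invented counts: 30/500 events on treatment vs 50/500 on control.
rr, ci, p = relative_risk(30, 500, 50, 500)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), P = {p:.4f}")
```

The nominal P value produced this way bounds the type I error only when the end point was anchored in advance. If the end point is chosen after the data are seen, as in Dr C's switch, the same arithmetic still produces a number, but that number no longer carries its advertised interpretation.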
Clinical research programs designed to draw a conclusion about one end point may
yield unexpected findings for another. The US Carvedilol Heart Failure Program1,
designed to study the effect of carvedilol on morbidity in patients with congestive
heart failure, reported in 1996 a 65% reduction in total mortality. The Evaluation of
Losartan in the Elderly (ELITE) study2, which compared the relative effects of losartan
and captopril on the renal function of elderly patients with congestive heart failure,
demonstrated a 46% reduction in total mortality associated with losartan. Each of these
programs was a controlled, double-blind clinical trial. The mortality findings of each
were highly statistically significant. However, experts have advised using caution when