Biostatistics, 4e: The Bare Essentials
By Geoffrey Norman and David Streiner
Book preview
Biostatistics, 4e - Geoffrey Norman
THE NATURE OF DATA AND STATISTICS
CHAPTER THE FIRST
The Basics
In this chapter, we will introduce you to the concepts of variables and to the different types of data: nominal, ordinal, interval, and ratio.
STATISTICS: SO WHO NEEDS IT?
The first question most beginning students of statistics ask is, Why do we need it?
Leaving aside the unworthy answer that it is required for you to get your degree, we have to address the issue of how learning the arcane methods and jargon of this field will make you a better person and leave you feeling fulfilled in ways that were previously unimaginable. The reason is that the world is full of variation, and sometimes it’s hard to tell real differences from natural variation. Statistics wouldn’t be needed if everybody in the world were exactly like everyone else;¹ if you were male, 172 cm tall, had brown eyes and hair, and were incredibly good looking,² this description would fit every other person.³ Similarly, if there were no differences and we knew your life expectancy, or whether or not a new drug was effective in eliminating your dandruff, or which political party you’d vote for in the next election (assuming that the parties finally gave you a meaningful choice, which is doubtful), then we would know this for all people.
Fortunately, this is not the case; people are different in all of these areas, as well as in thousands of other ways. The downside of all this variability is that it makes it more difficult to determine how a person will respond to some newfangled treatment regimen or react in some situation. We can’t look in the mirror, ask ourselves, Self, how do you feel about the newest brand of toothpaste? and assume everyone will feel the same way.
DESCRIPTIVE AND INFERENTIAL STATISTICS
It is because of this variability among people, and even within any one person from one time to another, that statistics were born. As we hope to show as you wade through this tome, statistics allow us to describe the average person, to see how well that description fits or doesn’t fit other people, and to see how much we can generalize our findings from studying a few people⁴ to the population as a whole. So statistics can be used in two ways: to describe data, and to make inferences from them.
Descriptive statistics are concerned with the presentation, organization, and summarization of data.
The realm of descriptive statistics, which we cover in this section, includes various methods of organizing and graphing the data to get an idea of what they show. Descriptive statistics also include various indices that summarize the data with just a few key numbers.
The bulk of the book is devoted to inferential stats.
Inferential statistics allow us to generalize from our sample of data to a larger group of subjects.
For instance, when a dermatologist gives a new cream, attar of eggplant, to 20 adolescents whose chances for true love have been jeopardized by acne, and compares them with 20 adolescents who remain untreated (and presumably unloved), he is not interested in just those 40 kids. He wants to know whether all kids with acne will respond to this treatment. Thus he is trying to make an inference about a larger group of subjects from the small group he is studying. We’ll get into the basics of inferential statistics in Chapter 6; for now, let’s continue with some more definitions.
VARIABLES
In the first few paragraphs, we mentioned a number of ways that people differ: gender,⁵ age, height, hair and eye color, political preference, responsiveness to treatment, and life expectancy. In the statistical parlance you’ll be learning, these factors are referred to as variables.
A variable is simply what is being observed or measured.
Variables come in two flavors: independent and dependent. The easiest way to start to think of them is in an experiment, so let’s return to those acned adolescents. We want to see if the degree of acne depends on whether or not the kids got attar of eggplant. The outcome (acne) is the dependent variable, which we hope will change in response to treatment. What we’ve manipulated is the treatment (attar of eggplant), and this is our independent variable.
The dependent variable is the outcome of interest, which should change in response to some intervention.
The independent variable is the intervention, or what is being manipulated.⁶
Sounds straightforward, doesn’t it? That’s a dead giveaway that it’s too simple. Once we get out of the realm of experiments, the distinction between dependent and independent variables gets a bit hairier. For instance, if we wanted to look at the growth of vocabulary as a kid grows up, the number of different words would be the dependent variable and age the independent one. That is, we’re saying that vocabulary is dependent on age, even though it isn’t an intervention and we’re not manipulating it. So, more generally, if one variable changes in response to another, we say that the dependent variable is the one that changes in response to the independent variable.
Both dependent and independent variables can take one of a number of specific values: for gender, this is usually limited to either male or female; hair color can be brown, black, blonde, red, gray, artificial, or missing; and a variable such as height can range from about 25 to 40 cm for premature infants up to about 200 cm for basketball players and coauthors of statistics books.
TYPES OF DATA
Discrete versus Continuous Data
Although we referred to both gender and height as variables, it’s obvious that they are different from one another with respect to the type and number of values they can assume. One way to differentiate between types of variables is to decide whether the values are discrete or continuous.
Discrete variables can have only one of a limited set of values. Using our previous examples, this would include variables such as gender, hair and eye color, political preference, and which treatment a person received. Another example of a discrete variable is a count, such as how many times a person has been admitted to hospital; the number of decayed, missing, or filled teeth; or the number of children. Despite what the demographers tell us, it’s impossible to have 2.13 children—kids come in discrete quantities.
Discrete data have values that can assume only whole numbers.
The situation is different for continuous variables. It may seem at first that something such as height, for example, is measured in discrete units: someone is 172 cm tall; a person slightly taller would be 173 cm, and a somewhat shorter person would measure in at 171 cm. In fact, though, the limitation is imposed by our measuring stick. If we used one with finer gradations, we may be able to measure in 1/2 cm increments. Indeed, we could get really silly about the whole affair and use a laser to measure the person’s height to the nearest thousandth of a millimeter. The point is that height, like weight, blood pressure, serum rhubarb, time, and many other variables, is really continuous, and the divisions we make are arbitrary to meet our measurement needs. The measurement, though, is artificial; if two people appear to have the same blood pressure when measured to the nearest millimeter of mercury, they will likely be different if we could measure to the nearest tenth of a millimeter. If they’re still the same, we can measure with even finer gradations until a difference finally appears.
Continuous data may take any value, within a defined range.
We can illustrate this difference between discrete and continuous variables with two other examples. A piano is a discrete instrument. It has only 88 keys, and those of us who struggled long and hard to murder Paganini learnt that A-sharp was the same note as B-flat. Violinists (fiddlers to y’all south of the Mason-Dixon line), though, play a continuous instrument and are able to make a fine distinction between these two notes. Similarly, really cheap digital watches display only 4 digits and cut time into 1-minute chunks. Razzle-dazzle watches, in addition to storing telephone numbers and your bank balance, cut time into 1/100-second intervals. A physicist can do even better, dividing each second into 9,192,631,770 oscillations of a cesium atom. Even this, though, is only an arbitrary division. Only the hospital administrator, able to buy a Patek Philippe analogue chronometer, sees time as it actually is: as a smooth, unbroken progression.⁷
Many of the statistical techniques you’ll be learning about don’t really care if the data are discrete or continuous; after all, a number to them is just a number. There are instances, though, when the distinction is important. Rest assured that we will point these out to you at the appropriate times.
Nominal, Ordinal, Interval, and Ratio Data
We can think about different types of variables in another way. A variable such as gender can take only two values: male and female. One value isn’t higher or better than the other;⁸ we can list them by putting male first or female first without losing any information. This is called a nominal variable.
A nominal variable consists of named categories, with no implied order among the categories.
The simplest nominal categories are what Feinstein (1977) calls existential variables—a property either exists or it doesn’t exist. A person has cancer of the liver or doesn’t have it; someone has received the new treatment or didn’t receive it; and, most existential of all, the subject is either alive or dead. Nominal variables don’t have to be dichotomous; they can have any number of categories. We can classify a person’s marital status as Single/Married/Separated/Widowed/Divorced/Common-Law (six categories); her eye color into Black/Brown/Blue/Green/Mixed (five categories⁹); and her medical problem into one of a few hundred diagnostic categories. The important point is that you can’t say brown eyes are better or worse than blue. The ordering is arbitrary, and no information is gained or lost by changing the order.
Because computers handle numbers far more easily than they do letters, researchers commonly code nominal data by assigning a number to each value: Female could be coded as 1 and Male as 2; or Single = 1, Married = 2, and so on. In these cases, the numerals are really no more than alternative names, and they should not be thought of as having any quantitative value. Again, we can change the coding by letting Male = 1 and Female = 2, and the conclusions we draw will be identical (assuming, of course, that we remember which way we coded the data).¹⁰
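To make this concrete, here is a minimal Python sketch; the marital-status data and both coding schemes are invented for illustration. It shows that swapping the arbitrary numeric codes leaves the counts, and hence any conclusions, unchanged.

```python
import pandas as pd

# Hypothetical survey responses: a nominal variable recorded as text labels.
status = pd.Series(["Single", "Married", "Married", "Divorced", "Single"])

# Two arbitrary coding schemes; the numbers are names, not quantities.
codes_a = status.map({"Single": 1, "Married": 2, "Divorced": 3})
codes_b = status.map({"Married": 1, "Single": 2, "Divorced": 3})

# The frequency counts (and any conclusions drawn from them) are identical.
print(status.value_counts())
print(codes_a.value_counts().sort_index())
print(codes_b.value_counts().sort_index())
```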
A student evaluation rating consisting of Excellent/Satisfactory/Unsatisfactory has three categories. It differs from a variable such as hair color in that there is an ordering of these values: Excellent is better than Satisfactory, which in turn is better than Unsatisfactory. However, the difference in performance between Excellent and Satisfactory cannot be assumed to be the same difference as exists between Satisfactory and Unsatisfactory. This is seen more clearly with letter grades; there is only a small division between a B+ and a B, but a large one, amounting to a ruined summer, between a D- and an F+. This is like the results of a horse race; we know that the horse who won ran faster than the horse who placed, and the one who showed came in third. But there could have been only a 1-second difference between the first two horses, with the third trailing by 10 seconds. So letter grades and the order of finishing a race are called ordinal variables.
An ordinal variable consists of ordered categories, where the differences between categories cannot be considered to be equal.
Many of the variables encountered in the health care field are ordinal in nature. Patients are often rated as Much improved/Somewhat improved/ Same/Worse/Dead; or Emergent/Urgent/Elective.¹¹ Sometimes numbers are used, as in Stage I through Stage IV cancer. Don’t be deceived by this use of numbers; it’s still an ordinal scale, with the numbers (Roman, this time, to add a bit of class) really representing nothing more than ordered categories. Use the difference test: Is the difference in disease severity between Stage I and Stage II cancer the same as exists between Stages II and III or between III and IV? If the answer is No, the scale is ordinal.
If the distance between values is constant, we’ve graduated to what is called an interval variable.
An interval variable has equal distances between values, but the zero point is arbitrary.
Why did we add that tag on the end, the zero point is arbitrary,
and what does it mean? We added it because, as we’ll see, it puts a limitation on the types of statements we can make about interval variables. What the phrase means is that the zero point isn’t meaningful and therefore can be changed. To illustrate this, let’s contrast intelligence, measured by some IQ test, with something such as weight, where the zero is meaningful. We all know what zero weight is.¹² We can’t suddenly decide that from now on, we’ll subtract 10 kilos from everything we weigh and say that something that previously weighed 11 kilos now weighs 1 kilo. It’s more than a matter of semantics; if something weighed 5 kilos before, we would have to say it weighed -5 kilos after the conversion—an obvious impossibility.
An intelligence score is a different matter. We say that the average IQ is 100, but that’s only by convention. The next world conference of IQ experts can just as arbitrarily decide that from now on, we’ll make the average 500, simply by adding 400 to all scores. We haven’t gained anything, but by the same token, we haven’t lost anything; the only necessary change is that we now have to readjust our previously learned standards of what is average.
Now let’s see what the implications of this are. Because the intervals are equal, the difference between an IQ of 70 and an IQ of 80 is the same as the difference between 120 and 130. However, an IQ of 100 is not twice as high as an IQ of 50. The point is that if the zero point is artificial and moveable, then the differences between numbers are meaningful, but the ratios between them are not.
If the zero point is meaningful, then the ratios between numbers are also meaningful, and we are dealing with (not surprisingly) a ratio variable.
A ratio variable has equal intervals between values and a meaningful zero point.
Most laboratory test values are ratio variables, as are physical characteristics such as height and weight. A person who weighs 100 kilos is twice as heavy as a person weighing 50 kilos; even when we convert kilos to pounds, the ratio stays the same: 220 pounds to 110 pounds.
That’s about enough for the difference between interval and ratio data. The fact of the matter is that, from the viewpoint of a statistician, they can be treated and analyzed the same way.
Notice that each step up the hierarchy from ordinal data to ratio data takes the assumptions of the step below it and then adds another restriction:¹³
Although the distinctions among nominal, ordinal, interval, and ratio data appear straightforward on paper, the lines between them occasionally get a bit fuzzy. For example, as we’ve said, intelligence is measured in IQ units, with the average person having an IQ of 100. Strictly speaking, we have no assurance that the difference between an IQ of 80 and one of 100 means the same as the difference between 120 and 140; that is, IQ most likely is an ordinal variable. In the real world outside of textbooks, though, most people treat IQ and many other such variables as if they were interval variables. As far as we know, they have not been arrested for doing so, nor has the sky fallen on their heads.
Despite this, the distinctions among nominal, ordinal, interval, and ratio are important to keep in mind because they dictate to some degree the types of statistical tests we can use with them. As we’ll see in the later chapters, certain types of graphs and what are called parametric tests can be used with interval and ratio data but not with nominal or ordinal data. By contrast, if you have nominal or ordinal data, you are, strictly speaking, restricted to nonparametric statistics. We’ll get into what these obscure terms mean later in the book.
PROPORTIONS AND RATES
So far, our discussion of types of numbers has dealt with single numbers—blood pressure, course grade, or counts. Sometimes, though, we deal with fractions. Even though this is stuff we learned in grade school, there’s still some confusion, owing, at least in part, to the sloppy English used by some statisticians. But, being purists, we’ll try to clear the air.
A proportion is a type of fraction in which the numerator is a subset of the denominator. That is, when we write 1/3, we mean that there are three objects, and we’re talking about one of them. Percentages are a form of proportions, where the denominator is jigged to equal 100. This may seem so elementary that you may wonder why we bother to mention it. There are two reasons. First, we’ll later encounter other fractions (e.g., odds) where the numerator is not part of the denominator; and second (here’s where statisticians often screw up), people sometimes call a proportion a rate.
But, strictly speaking, a rate is a fraction that also has a time component. If we say that 23% of children have blue eyes (a figure we just made up on the spot), that’s a proportion. But, if we say that 1 out of every 1,000 people will develop photonumerophobia this year, that’s a rate, because we’re specifying a time interval.
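As a small worked example (all numbers invented), the distinction comes down to whether a time period is attached:

```python
# A proportion: the numerator is a subset of the denominator, no time attached.
blue_eyed = 230
all_children = 1000
proportion = blue_eyed / all_children              # 0.23

# A rate: new events per unit of population per unit of time.
new_cases_this_year = 12
population = 10_000
rate = new_cases_this_year / population * 1_000    # 1.2 cases per 1,000 people per year

print(f"proportion = {proportion:.2f}")
print(f"rate = {rate:.1f} per 1,000 per year")
```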
So, with that as background, on to statistics!
EXERCISES
1. For the following studies, indicate which of the variables are dependent (DVs), independent (IVs), or neither.
a. ASA is compared against placebo to see if it leads to a reduction in coronary events. The IV is ____ The DV is ____
b. The relationship between hypocholesterolemia and cancer. The IV is ____ The DV is ____
c. We know that members of religious groups that ban drugs, alcohol, smoking, meat, and sex (because it may lead to dancing) live longer than the rest of us poor mortals, but is it worth it? How do they compare with us on a test of quality of life? The IV is ____ The DV is ____
d. One study (a real one, this time) found that bus drivers had higher morbidity rates of coronary heart disease than did conductors.
The IV is ____ The DV is ____
2. State which of the following variables are discrete and which are continuous.
a. The number of hair-transplant sessions undergone in the past year.
b. The time since the last patient was grateful for what you did.
c. Your anticipated before-taxes income the year after you graduate.
d. Your anticipated after-taxes income in the same year.
e. The amount of weight you’ve put on in the last year.
f. The number of hairs you’ve lost in the same time.
3. Indicate whether the following variables are nominal, ordinal, interval, or ratio.
a. Your income (assuming it’s more than $0).
b. A list of the different specialties in your profession.
c. The ranking of specialties with regard to income.
d. Bo Derek was described as a 10. What type of variable was the scale?
e. A range of motion in degrees.
f. A score of 13 out of 17 on the Schmedlap Anxiety Scale.
g. Staging of breast cancer as Type I, II, III, or IV.
h. ST depression on the ECG, measured in millimeters.
i. ST depression, measured as 1 = less than 1 mm, 2 = 1 to 5 mm, and 3 = more than 5 mm.
j. ICD-9 classifications: 0295 = Organic psychosis, 0296 = Depression, and so on.
k. Diastolic blood pressure, in mm Hg.
l. Pain measurement on a seven-point scale.
4. Indicate whether the following are proportions or rates:
a. The increase in the price of household goods last year.
b. The ratio of males to females.
c. The ratio of new cases of breast cancer last month to the total number of women in the population.
d. The ratio of the number of women who have breast cancer to the total number of women in the population.
¹ We also wouldn’t need dating services because it would be futile to look for the perfect mate; he or she would be just like the person sitting next to you. By the same token, it would mean the end of extramarital affairs, because what’s the use? But that’s another story.
² Coincidently, this perfectly describes the person writing this section.
³ Mind you, if everybody in the world were male (or female), we wouldn’t need statistics (or anything else) in about 70 years.
⁴ As we’ll see later, a few to a statistician can mean over 400,000 people, as in the Salk polio vaccine trial. So much for the scientific use of language.
⁵ Formerly referred to as sex.
⁶ These are different from the definitions offered by one of our students, who said that, An undependable variable keeps changing its value, while a dependable variable is always the same.
⁷ Actually, the escapement mechanism makes the second hand jump, but if you can afford a Patek, you’ll ignore this.
⁸ Although male chauvinist pigs and radical feminists would disagree, albeit for opposite reasons.
⁹ Bloodshot is usually only a temporary condition and so is not coded.
¹⁰ Other examples of numbers really being nominal variables and not reflecting measured quantities would be telephone numbers, social insurance or social security numbers, credit card numbers, and politicians’ IQs.
¹¹ This is similar to the scheme used to evaluate employees: Walks on water/Keeps head above water under stress/Washes with water/Drinks water/Passes water in emergencies.
¹² It’s a state aspired to by high fashion models.
¹³ A good mnemonic for remembering the order of the categories is the French word NOIR. Of course, this assumes you know French. Anglophones will just have to memorize the order.
CHAPTER THE SECOND
Looking at the Data
A First Look at Graphing Data
Here we look at different ways of graphing data, how to make the graphs look both accurate and esthetic, and how not to plot data.
WHY BOTHER TO LOOK AT DATA?
Now that you’ve suffered through all these pages of jargon, let’s actually do something useful: Learn how to look at data. With the ready availability of computers on every desk, there is a great temptation to jump right in and start analyzing the bejezus out of any set of data we get. After all, we did the study in the first place to get some results that we could publish and prove to the Dean that we’re doing something. However, as in most areas of our lives (especially those that are enjoyable), we must learn to control our temptations in order to become better people.
It is difficult to overemphasize the importance and usefulness of getting a feel for the data before starting to play with them. If there isn’t a Murphy’s Law to the effect that There will be errors in your data, then there should be one. You do not look at the data just in case there are errors; they are there, and your job is to try to find as many as you can. Sometimes the problem isn’t an error as such; very often, a researcher may use a code number such as 99 or 999 to indicate a missing value for some variable, and then forget to tell you this little detail when he asks you to analyze his data. As a result, you may find that some people in his study are a few years older than Methuselah. Graphing the data beforehand may well save you from one of life’s embarrassing little moments.
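A minimal sketch of that first screening step, assuming Python with pandas and a made-up variable in which 999 was used as the missing-value code:

```python
import numpy as np
import pandas as pd

# Hypothetical data set in which 999 was used as the "missing" code for age.
df = pd.DataFrame({"age": [34, 51, 999, 28, 999, 47, 130]})

# Recode the sentinel value as a true missing value before any analysis.
df["age"] = df["age"].replace(999, np.nan)

# Then flag anything still implausible (rather older than Methuselah's neighbours).
suspicious = df[df["age"] > 110]
print(df["age"].describe())
print(suspicious)
```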
A second purpose for looking at the data is to see if they can be analyzed by the statistical tests you’re planning to use. For example, some tests require the data to fit a given shape, or that a plot of two variables follow a straight line. Although there are specific tests of these assumptions, the power of the calibrated eyeball test should not be underestimated. A quick look often gives you a better sense of the data than a bunch of numbers.
HISTOGRAMS, BAR CHARTS, AND VARIATIONS ON A THEME
The Basic Theme: The Bar Chart
Perhaps the most familiar types of graphs to most people are bar charts and histograms (we’ll tell you what the difference is in a little bit). In essence, they consist of a bar whose length is proportional to the number of cases. To illustrate it, let’s conduct a gedanken experiment.¹ Imagine we do a study in which we survey 100 students and ask them what their most boring course was in college. We can then tabulate the data as is shown in Table 2-1.
The first step is to choose an appropriate length for the Y-axis, where we’ll plot (at least for now) the number of people who chose each alternative. The largest number in the table is 42, so we will choose some number somewhat larger than this for the top of the axis. Because we’ll label the tick points every 10 units, 50 would be a good choice. If we had used the number 42, we would have had to label the axis either every 7 units (which are somewhat bizarre numbers²), or every even number, which would make the axis look too cluttered. So, our graph would look like Figure 2-1.
TABLE 2–1 Responses of 100 students to the question, What was your most boring introductory course?
FIGURE 2-1 Bar chart of the five least popular courses
FIGURE 2-2 Figure 2-1 redrawn so that the categories are in order of preference and the tick marks are outside the axes.
FIGURE 2-3 Figure 2-2 redrawn so that the bars are horizontal.
FIGURE 2-4 Figure 2-3 redrawn as a point graph.
At first glance, this doesn’t look too bad! However, we can make it look even better. It’s obvious that the data are nominal; the order is arbitrary, so we can change the categories around without losing anything. In fact, we gain something if we rank the courses so that the highest count is first and the lowest one is last. Now the relative standing of the courses is more readily apparent. (As a minor point, it’s often better to put the tick marks outside the axes rather than in. When the data fall near the Y-axis, a tick mark inside the axis may obscure the data point, or vice versa.) Making these two changes gives us Figure 2-2.
This is the way most bar charts of nominal data looked until recently. Within recent years, though, things have been turned on their ear—literally. If the names of the categories are long, things can look pretty cluttered down there on the bottom. Also, some research (Cleveland, 1984) has shown that people get a more accurate grasp of the relative sizes of the bars if they are placed horizontally. Adding this twist (pun intended), we’ll end up with Figure 2-3.
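If you want to reproduce that kind of chart yourself, here is a rough matplotlib sketch; apart from Economics at 42, the counts are invented stand-ins for Table 2-1.

```python
import matplotlib.pyplot as plt

# Assumed counts (only Economics = 42 comes from the text).
courses = {"Economics": 42, "Sociology": 25, "Statistics": 15,
           "Psychology": 10, "Philosophy": 8}

# Sort so the largest count ends up at the top, and use horizontal bars.
items = sorted(courses.items(), key=lambda kv: kv[1])
labels = [name for name, _ in items]
counts = [count for _, count in items]

fig, ax = plt.subplots()
ax.barh(labels, counts)
ax.set_xlabel("Number of students")
ax.set_xlim(0, 50)                # axis runs somewhat past the largest count
ax.tick_params(direction="out")   # tick marks outside the axes
plt.tight_layout()
plt.show()
```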
Variation 1: Dot Plots
Another variant of the bar chart that is particularly useful when there are many categories is the dot plot, as shown in Figure 2-4. Instead of a bar, just a heavy dot is placed where the end of the bar would be. When there are many labels, smaller dots that extend back to the labeled axis are often used to make the chart easier to read.
Graphing Ordinal Data
The use of bar charts isn’t limited to nominal data; it can be used with all four types. However, a few other considerations should be kept in mind when using them with ordinal, interval, and ratio data. The first, which would seem obvious, is that because the values are ordered, you can’t blithely move the categories around simply to make the graph look prettier. If you were graphing the number of students who received Excellent/Satisfactory/Unsatisfactory ratings, it would confuse more than help if you put them in the order: Satisfactory/Excellent/Unsatisfactory just because most students were in the first category.
Graphing Interval and Ratio Data
A few other factors have to be considered in graphing interval and ratio data. Let’s say we have some data on the number of tissues dispensed each day by a group of 75 social workers. We look at our data, and we find that the lowest number is 10 and the highest is 117. The difference between the highest and lowest value is 107. (This difference is called the range. We’ll define it a bit more formally in the next chapter.) If we have one bar for each value, we’ll run into a few problems. First, we have more possible values than data points, so some bars will have a height of zero units, and many others will be only one or two units high. This leads to the second problem, in that it will be hard to discern any pattern by eyeballing the data. Third, the X-axis is going to get awfully cluttered. For these reasons, we try to end up with between 10 and 20 bars on the axis.³
To do this, we make each bar represent a range of numbers; what we refer to as the interval width. If possible, use a width that most people are comfortable with: 2, 5, 10, or 20 points. Even though a width of 6 or 7 may give you an esthetically beautiful picture, these don’t yield multiples that are easily comprehended. Let’s use an example.
If we took 100 fourth-year nursing students and asked them how many bedpans they emptied in the last month, we’d get 100 answers, as in Table 2-2. The main thing a table like this tells us is that it’s next to impossible to make sense of a table like this. We’re overwhelmed by the sheer mass of numbers, and no pattern emerges. In fact, it’s very hard even to figure out what the highest and lowest numbers are; who’s been working like a Trojan and who’s been goofing off. To make our lives (and all of the next steps) easier, the first thing we should do is to put the data in rank order,⁴ starting with the smallest number and ending with the highest. Two notes are in order. First, you can go from highest to lowest if you wish, it makes no difference. Second, most computers have a simple routine, usually called SORT, to do the job for you. Once we do this, we’ll end up with Table 2-3.
With this table we can immediately see the highest and lowest values and get at least a rough feel for how the numbers are distributed; not too many between 1 and 10 or between 60 and 70, and many in the 20s and 30s. We also see that the range (66 - 1) = 65 is far too large to graph when letting each bar stand for a unique number. An interval width of 10 would give us 7 boxes (not quite enough for our esthetic sense), whereas a width of 2 would result in 33 boxes (which is still too many). An interval width of 5 yields 14 boxes (which is just right). To help us in drawing the graph, we could make up a summary table, such as Table 2-4, which gives the interval and the number of subjects in that interval.
There are a few things to notice about this table. First, there are two extra columns, one labeled Midpoint and the other labeled Cumulative Total. The first is just what the name implies: It is the middle of the interval. Because the first interval consists of the numbers 0, 1, 2, 3, and 4, the midpoint is 2. If there were an even number of numbers, say 0, 1, 2, and 3, then the midpoint would again be in the middle. This time, though, it would fall halfway between the 1 and 2, and we would label it 1.5. The other added column, the Cumulative Total, is simply a running sum of the number of cases; the first interval had 1 case, and the second 4, so the cumulative total at the second interval is (1 + 4) = 5. The 9 cases in the third interval then produce a cumulative total of (5 + 9) = 14. This is very handy because, if we didn’t end up with 100 at the bottom, we would know that we messed up the addition somewhere along the line. The other point to notice is the interval. The first one goes from 0 to 4, the second from 5 to 9, and so on. Don’t fall into the trap of saying an interval width of 5 covers the numbers 0 to 5; that’s actually 6 digits.
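Here is a short Python sketch of that bookkeeping (interval width, midpoints, counts, and a cumulative total); it uses randomly generated stand-in data rather than the actual values in Table 2-2.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for Table 2-2: 100 counts between 1 and 66.
data = rng.integers(1, 67, size=100)

width = 5
low = (data.min() // width) * width            # start the first interval at a round number
edges = np.arange(low, data.max() + width + 1, width)

counts, _ = np.histogram(data, bins=edges)
midpoints = edges[:-1] + (width - 1) / 2       # midpoint of 0-4 is 2, of 5-9 is 7, ...
cumulative = np.cumsum(counts)                 # running total; should end at 100

for lo, c, m, cum in zip(edges[:-1], counts, midpoints, cumulative):
    print(f"{lo:3d}-{lo + width - 1:3d}  midpoint {m:5.1f}  count {c:3d}  cumulative {cum:3d}")
```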
TABLE 2–2 Number of bedpans emptied by 100 fourth-year nursing students in the past month
TABLE 2–3 Data from Table 2–2 put in rank Order
Another point to notice is that we’ve paid a price for grouping the data to make them more readable, and that price is the loss of some information. We can tell from Table 2-4 that 1 person emptied between 0 and 4 bedpans, but we don’t know exactly how many. In the next interval, we see that 4 people emptied between 5 and 9 pans, but again we’re not sure precisely how many future nurses dumped what number of bedpans. The wider the interval, the more information is lost.
TABLE 2–4 A summary of Table 2–3 , showing the intervals, midpoints, counts, and cumulative total
FIGURE 2-5 Histogram showing the number of bedpans emptied during the past month by each of 100 nursing students.
So, with these points in mind, we’re almost ready to start drawing the graph. There’s one last consideration, though: how to label the two axes. Looking at the count column in Table 2-4, we can see that the maximum number of cases in any one interval is 15. We would therefore want the Y-axis to extend from 0 to some number over 15. A good choice would be 20, because this would allow us to label every fifth tick mark. Notice that on the X-axis, we’ve labeled the middle of the interval. If we labeled every possible number, the axis would look too cluttered; the midpoint cuts down on the clutter and (for reasons we’ll explore further in the next chapter) is the best single summary of the interval. Our end product would look like Figure 2-5.
This figure differs from Figure 2-2 in a subtle way. In the earlier figure, because each category was different from every other one, we left a bit of a gap between bars. In Figure 2-5, the data are interval, so it makes both statistical and esthetic sense⁵ to have each bar abutting its neighbors. Now we can finally tell you the difference between bar charts and histograms:
Bar charts: There are spaces between the bars.
Histograms: The bars touch each other.
STEM-LEAF PLOTS AND RELATED FLORA
All these variants of histograms and bar charts are the traditional ways of taking a mess of data such as we found in Table 2-2 and transforming them into a graph such as Figure 2-5. The steps were as follows:
Rank order the data.
Find the range (the highest value minus the lowest).
Choose an appropriate width to yield about 10 to 20 intervals.
Make a new table consisting of the intervals, their midpoints, the count, and a cumulative total.
Turn this into a histogram.
Lose some information along the way, consisting of the exact values.
Tukey (1977) devised a way to eliminate steps 1 and 6 and to combine 4 and 5 into one step. The resulting diagram, called a Stem-and-Leaf Plot, thus consists of only three steps:
Find the range.
Choose an appropriate width to yield about 10 to 20 intervals.
Make a new table that looks like a histogram and preserves the original data.
Let’s take a look and see how this is done, at the same time explaining these somewhat odd-sounding terms. The leaf consists of the least significant digit of the number, and the stem is the most significant. So, for the number 94, the leaf is 4 and the stem is 9. If our data included numbers such as 167, we would make the 16 the stem. Using the data from Table 2-3 and the same reasoning we did for the histogram, we would again opt for an interval width of 5. We then write the stems we need, vertically, as in Table 2-5 (it’s best to do this on graph paper, for reasons that will be readily apparent if you’ll just be patient).
No, you are not seeing double. Table 2-5 really does have two 0s, two 1s, and so on. The reason is that, because we’ve chosen an interval width of 5, the first 0 will contain the numbers 0 to 4. Strictly speaking, the 0 is the stem of the numbers 00 (zero) to 04 (four). The second interval covers the numbers 5 (05) to 9 (09); the first 1 is the stem for the numbers 10 to 14, the second for the numbers 15 to 19, and so on. Now, we go back to our original data and write the leaf of each number next to the appropriate stem. For example, the first number in Table 2-2 is 43, so we put a 3 (the leaf) next to the first 4. The second number, reading across, is 45, so we put a 5 next to the second 4, because this stem covers the numbers 45 to 49. If you did what we told you to earlier, and used graph paper, each leaf would be put in a separate and adjacent horizontal box. Table 2-6 shows a plot of the first 10 numbers, and Table 2-7 is the stem-and-leaf plot of all 100 numbers.
If you turn Table 2-7 sideways, you’ll see it has exactly the same shape as Figure 2-5. Moreover, the original data are preserved. Let’s take the third line down, the first stem with a 1. Reading across, we can see that the actual numbers were 11, 14, 14, 14, 12, 11, 11, 13, and 12. If we want to be a bit fancier, we can actually rank order the numbers within each stem. Computer programs that produce stem-leaf plots (see the end of this chapter) do this for you automatically. Most journals still prefer histograms or bar charts rather than stem-leaf plots, but this is slowly changing. In any case, it’s simple to go from the plot to the more traditional forms.
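A bare-bones Python version of the same idea, with half-width stems (interval width 5); the data here are just a handful of invented numbers, not the full contents of Table 2-2.

```python
data = [43, 45, 11, 14, 14, 12, 27, 34, 8, 52, 31, 66, 29, 35, 13, 48, 23, 11, 12, 11]

# Two rows per stem: one for leaves 0-4, one for leaves 5-9 (interval width 5).
lo, hi = min(data), max(data)
rows = {(stem, half): [] for stem in range(lo // 10, hi // 10 + 1) for half in (0, 1)}

for x in sorted(data):                      # sorting also orders the leaves within each stem
    stem, leaf = divmod(x, 10)
    rows[(stem, 0 if leaf < 5 else 1)].append(str(leaf))

for (stem, half), leaves in sorted(rows.items()):
    print(f"{stem:2d} | {''.join(leaves)}")
```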
FREQUENCY POLYGONS
Another way of representing interval or ratio types of data is called a frequency polygon. Let’s start off by looking at one, and then we’ll describe it. Now, look at Figure 2-6. This shows the same data as Figure 2-5. However, instead of a bar that spans each interval, we’ve put a dot at the midpoint of the interval and then connected the dots with straight lines. There are a few other differences between histograms and frequency polygons.
First, as we’ve said, polygons should not be used with nominal or ordinal data because joining the dots makes the assumption that there is a smooth transition from one datum point to another. For example, imagine that we have a polygon with just two points, as in Figure 2-7. The first point, at a midpoint of 20, shows 100 units on the Y-axis, and the second point, which falls at a midpoint of 30, shows 110 units. Even though we may not have gathered any data that correspond to an X-axis value of 25, we assume they fall on the line, halfway between 20 and 30. In this case, they would correspond to 105 units (where the dot is). We can make this assumption only because we’re using an interval or ratio level of data; if the distances between intervals are variable or unknown, as they are with ordinal data, we couldn’t make this assumption.
TABLE 2–5 First step in constructing a Stem-and-Leaf Plot: Writing the stems
TABLE 2–6 Stem-and-Leaf Plot of the first 10 items of Table 2–2
TABLE 2–7 Stem-and-Leaf Plot of all the data in Table 2–2
FIGURE 2-6 Frequency polygon showing the same data as in Figure 2-5.
FIGURE 2-7 The assumption of a smooth transition from point to point in frequency polygons.
FIGURE 2-8 Data for three groups displayed as bar graphs.
A second difference is that bar charts seem to imply that the data are spread equally over the interval. For instance, if we had an interval width of 5 units spanning the numbers 20 through 24, and 10 cases were in that interval, it would appear (and we would assume) that 2 cases fell at 20, 2 at 21, 2 at 22, and so on. With a frequency polygon, we assume all the cases had the value of the midpoint. This is a closer representation of what we actually do in statistics; if we don’t know the exact value of some variable, we usually use some midpoint as an approximation.
A third difference is that, by convention, frequency polygons begin and end with the line touching the X-axis. To accomplish this, we’ve added an extra interval at the upper end, which had a frequency count of zero. At the low end, it doesn’t make sense in this case to add another interval because it would cover the numbers -1 to -5, so we just continue the line to the origin. If we were plotting data that did not include a value of zero, such as blood pressure, IQ, or height, we would have added an extra empty interval at the lower end.
So, when do we use a histogram and when a polygon? For nominal and ordinal data, you don’t have a choice; you’re limited to a bar chart. If you’re dealing with interval or ratio data and are showing the data for only one or two groups, it really doesn’t matter; it’s more a matter of personal preference, esthetics, and whatever your plotting package can manage. However, if you have more than two groups, then it’s often better to use frequency polygons, with each group represented by a different line. The advantage is that all the data for any one group are joined; with a histogram, the values for one group are often broken up by the bars for the other groups. We’ve shown an example of this in Figure 2-8. Figure 2-9 then shows the same data with a polygon, which we feel is easier to follow.
When you’re plotting two or more lines, they should be noticeably distinct from one another— different symbols representing the data points and different types of lines joining the points. If you’re showing the graph at a meeting, you can also use different colors; however, most publications are in black and white, so this isn’t an option.⁶
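A matplotlib sketch of that advice, using invented counts for two groups and distinguishing the lines by marker and line style rather than by colour:

```python
import matplotlib.pyplot as plt

midpoints = [2, 7, 12, 17, 22, 27, 32]      # interval midpoints on the X-axis
group_a   = [0, 4, 9, 14, 10, 5, 0]         # polygons start and end on the X-axis
group_b   = [0, 2, 6, 12, 15, 8, 0]

fig, ax = plt.subplots()
ax.plot(midpoints, group_a, marker="o", linestyle="-",  label="Group A")
ax.plot(midpoints, group_b, marker="s", linestyle="--", label="Group B")
ax.set_xlabel("Interval midpoint")
ax.set_ylabel("Count")
ax.legend()
plt.show()
```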
CUMULATIVE FREQUENCY POLYGONS
Before leaving the topic of graphing for a while, we’ll mention one more variant, a cumulative frequency polygon. Cast your mind back, if you will, to our discussion of the emptying of bedpans. When we drew up Table 2-4, we added another column, labeled the Cumulative Total, and mentioned that one reason for using it was as a check on our addition. Now we’ll mention another purpose; it helps us draw cumulative frequency polygons. With them, we plot not the raw count within each interval, but the cumulative count. You can also convert the cumulative total at each interval into a percentage of the total count and plot the cumulative percents, as we’ve done in Figure 2-10. In our example, because the total number of data points was 100, each cumulative total is also the percent, but you’ll rarely be in the fortunate position of having exactly 100 subjects. Figure 2-10 again shows the data in Table 2-4, but this time as a cumulative polygon. The only difference in drawing a regular frequency polygon and a cumulative one is where we put the point: in the former case, it was at the midpoint; with cumulative polygons, we put the mark at the upper end of the interval, for reasons that will soon be apparent.
In Figure 2-10 we’ve drawn a horizontal line at 50%, starting at the Y-axis and extending to the curve, then dropped a vertical line to the X-axis. This shows us that 50% corresponds to 31 bedpans; that is, half of the people emptied fewer than 31 and half emptied more. We can also draw lines at other percentages, or even work backward (e.g., draw a vertical line up from, say, 40 bedpans, and see what percent of people dumped more or fewer).
This is the reason the data are plotted at the end of the interval, rather than at the midpoint. As we’ve mentioned, we have lost some information by grouping the data, so we don’t know exactly where within the interval the raw data actually occurred. We do know, though, how many cases there were, up to and including everyone within the interval. The difference may be small, but statisticians pride themselves on being accurate.⁷
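If you would rather compute than draw, here is a small sketch that does the same read-off by straight-line interpolation along the cumulative curve; the bin counts are invented, so the answer will not match the 31 bedpans shown in the figure.

```python
import numpy as np

# Upper edge of each interval and the (made-up) count in that interval.
upper_edges = np.array([4, 9, 14, 19, 24, 29, 34, 39, 44, 49, 54, 59, 64])
counts      = np.array([1, 4,  9, 10, 13, 15, 14, 12,  9,  6,  4,  2,  1])   # sums to 100

cum_pct = np.cumsum(counts) / counts.sum() * 100
value_at_50 = np.interp(50, cum_pct, upper_edges)    # x = cumulative percent, y = bedpans
print(f"Half the students emptied fewer than about {value_at_50:.0f} bedpans")
```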
Graphs of this sort are very common in plotting all sorts of anthropometric features, especially for kids— height, weight, head circumference, and other vital statistics. Then, after the doc takes the kid off the scale, she can look at a graph appropriate for age and sex and determine in what percentile this particular kid is.
HOW NOT TO GRAPH
As the old joke goes, We have some good news and some bad news. The good news is that every spreadsheet program, slide presentation program, and statistics program now can make graphs for you at the press of a button; you simply have to enter the data. The bad news is that, almost without exception, they do it extremely badly. Many of the choices are worse than useless, and most default options are just plain wrong. In this section, we’ll discuss some very useless and misleading (albeit very pretty) ways of presenting data.
Do You Really Need a Graph?
Before we begin to discuss bad graphs, let’s decide whether a graph is even needed. Take a look at Figure 2-11. It shows the number of males and females in some study. In other words, it conveys one bit of information—the proportion of males is 54%. (Even though you haven’t gotten too far into this book yet, we bet you can figure out that the proportion of females is 46%.) Do you need a graph, something that takes up about a third of a page, to tell you that? We can convey the same information in one sentence, which takes about 15 seconds to write and 2 seconds to read; we don’t have to waste 30 minutes drawing a figure. Use graphs to show relationships, not to report numbers.
FIGURE 2-9 The same data as in Figure 2-8, but displayed as frequency polygons. The lines are differentiated by color and symbol type.
FIGURE 2-10 Cumulative frequency polygon of data in Figure 2-6.
The Case of the Missing Zero
Dr. X⁸ wants to be considered for early promotion. To support his petition, he submits a graph, shown in Figure 2-12, to show that the amount of grant money he has received has risen dramatically in the past year. So, should he be promoted?
Not if this graph is any indication of the quality of his work. From the picture, it looks as though there has been almost a threefold increase in his funding (the actual value is about 275%). The reality is that it went from a measly $11,000 to a paltry $15,000, an increase of only 37%. The problem is with the Y-axis. Instead of starting at zero, it begins at $10,000, so that small differences are magnified. We see examples of this every day on TV or in the newspapers; it looks as though the temperature or the stock market is fluctuating wildly, because the axis doesn’t start at zero.
FIGURE 2-11 The proportion of males and females in a study.
FIGURE 2-12 Grant money per year for Dr. X.
One way to check on this distortion is to use the Graph Discrepancy Index (GDI), which is simply the percentage change depicted in the graph divided by the percentage change in the data, minus 1.
In this case, it’s (275/37) - 1 = 6.43. That’s a tad higher than the recommended value of the GDI, which is 0.05 (Beattie and Jones, 1992). Gotcha, Dr. X!
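Expressed as a couple of lines of Python, using the percentages given in the text:

```python
pct_change_shown  = 275   # apparent increase implied by the truncated axis (%)
pct_change_actual = 37    # real increase: $11,000 to $15,000 (%)

gdi = pct_change_shown / pct_change_actual - 1
print(f"GDI = {gdi:.2f}")   # about 6.43, well above the suggested maximum of 0.05
```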
3-D or Not 3-D, That is the Question
The bar charts and histograms that we’ve shown you so far look pretty drab and ordinary. Wouldn’t it be nice if we jazzed them up a bit by making them look three dimensional, or used fancier objects instead of just rectangles, or if we added shading, or converted them to pie charts? No, it would not be nice; it will just be confusing.
Let’s take Figure 2-2 and make it look sexy by adding some of the features we’ve just mentioned. Golly gee, Figure 2-13 looks hot! But quickly now, how many students said Economics? You’re excused if you said 39. You’d be wrong—the real answer is 42—but we’ll excuse you, because we’re nice guys. The problem is that the leading edge of the bar, which is where your eye is drawn, is just below the 40. The true value is actually indicated by the back edge of the bar, which confuses both the eye and its owner. For bars farther from the left side of the graph, we have to follow an imaginary line to the Y-axis, make a turn, and then follow another imaginary line to where the legend is—a process that’s prone to error at every step. Compounding the problem, the back of the bar is not flush with the back wall, so the top of the bar is not at 40—you have to continue an invisible line until it hits the wall, two units above the top. As if that isn’t enough, the major purveyor of software (which will remain unnamed, but they make PowerPoint, Word, and other products) is inconsistent in this regard. Graph exactly the same data with PowerPoint and with Excel and you’ll get different results—one puts the bars against the back wall, one doesn’t. The greater the 3-D effect, the greater the confusion. So, the bottom line is, lose the 3-D.
FIGURE 2-13 A 3-D version of Figure 2-2.
Pie in the Sky, Not in a Graph
Now let’s take the same data to make a pie chart and use it to compare two groups, as in Figure 2-14. Are the numbers of people saying Sociology the same in both groups? Yet again we’ll excuse you if you answer, That’s hard to say.
You can relatively easily compare the first segment of the two pies, because they both start at 12 o’clock. But if the sizes of those segments are different, you now have to look at a segment of pie two, keep the angle constant as you rotate it until it’s at the same starting place as the corresponding segment of pie one, and judge the relative angles. Sounds like an impossible task, and it is. A pie chart may be good for showing data for one group, but is useless for comparing groups. Remember, the only place for a pie chart is at a baker’s convention.
FIGURE 2-14 Comparing two groups using pie charts.
But, we hear you say, you can simply put numbers inside or next to the wedges, and that will remove any ambiguity. Let’s keep in mind the difference between a table and a graph. A graph is ideal for giving the reader a very quick grasp of relationships that exist in the data; is there a trend over time, or does one group differ from another? If the precise numbers are important, use a table. Don’t mix up these two functions: communicating a picture, or reporting data.
The Worst of Both Worlds
Take a look at Figure 2-15. Quickly now, answer two questions: (a) put the segments in rank order; and (b) tell us how much bigger is segment D than segment C. If you struggled to put A through C in order, and couldn’t easily say how much bigger D is, then we would say, Gotcha!
The answers are: (a) segments A, B, and C are all equal, and (b) D is twice as big as each of them. Had the data been presented as a bar chart, the answers would have been obvious. The reason you had difficulty is that not only is this a pie chart, but it’s a 3-D pie chart, thus incorporating the worst features of each. Tilting the graph distorts the angles of the wedges, and the greater the 3-D effect, the worse the distortion.
STACKED GRAPHS
For a change, we’re not making some sort of sexist joke.⁹ Rather, we’re talking about graphs, much beloved by newspapers and magazines, where different values of a variable are placed on top of one another. Figure 2-16 is a stacked bar graph showing the marital status in three groups. As with a pie chart, we have no trouble comparing the groups with respect to the proportion married or single, because they have a common axis (the top or bottom of the graph). But, what about those who are widowed? To compare the groups, we have to try to keep the height of the segment in our mind while shifting the bases until they all line up, and then see if the heights are comparable; not an easy task by any means. These data would be better presented either in a table or with separate bars for each category of marital status.
FIGURE 2-15 A 3-D pie chart.
FIGURE 2-16 A stacked bar chart.
FIGURE 2-17 A stacked line graph.
In Figure 2-17, we show the annual cost of three programs over time in a stacked line chart. This type of graph is fine if we want to see what’s happening to the total cost of the three, but it’s terrible for looking at the contributions of each. Which program is growing the fastest? The reality is that Programs A and B are increasing geometrically each decade (e.g., 2, 4, 8, 16), whereas C is only increasing arithmetically (2, 4, 6, 8). Hard to tell, isn’t it? The bottom line—don’t use it.
Conclusion
We’ll close with a beautiful quote from Howard Wainer (1990): "Although I shudder to consider it, perhaps there is something to be learned from the success enjoyed by the multi-colored, three-dimensional pie charts that clutter the pages of USA Today, Time, and Newsweek. I sure hope not much."
MAKING BETTER TABLES
So far, we’ve been showing you different ways of presenting data in graphs, as if this were the only way that data can be portrayed. Indeed, graphs are excellent for displaying one or two variables at a time. There are times, though, when only a table of numbers will do—when we have many variables to show at the same time, or when we want the reader to see the actual numbers. It may seem at first glance as if tables were the simplest thing in the world to construct: just write the names of the variables as the headings of the columns, the subjects along the left to indicate the rows, and fill in the blanks. Table 2-8 is such a table, and it is typical of many you’ll see. The countries are listed alphabetically, and the numbers are given with as much accuracy as possible.
Now, quickly—which is the largest country? The smallest? The one with the highest GNP? The lowest infant mortality rate (IMR)? If you think that was hard, imagine how hard it would be if we had listed all of the countries in Africa.¹⁰
Why was such a seemingly easy task so hard? The main reason is that there are too many numbers; not that there are too many columns but that we have unnecessary accuracy. Don’t get us wrong; accuracy is good but, like a child, only in its place. If the exact numbers are important for archival purposes then, fine, maintain as many significant digits as you can come up with, but stick the table in an appendix. For most purposes, however, so many digits give an illusion of accuracy that is often misleading. For example, the population of Brazil is given as 110,098,992.¹¹ By the time you finish reading that number, it’s already wrong. Even assuming that the census was correct when it was taken (a dubious assumption at best in developed countries, and most likely a myth in developing ones), it was out of date almost as soon as it was recorded. If the population increases by 3% a year, then there are nearly seven additional people every minute, or almost 10,000 a day. Between the time the census was taken (and don’t forget it was probably taken over a period of weeks or months), recorded by the central government, reported in an official document, reproduced in the atlas, and read by you, years may have elapsed. That number is no longer correct—if it ever was