Antiscientific Attitudes: What Happens When Scientists Are Unscientific?

Larry E. Beutler and T. Mark Harwood


Graduate School of Education, University of California,
Santa Barbara

Scientists sometimes engage in quite unscientific behavior in order to influence their peers or to obtain secondary gain. We explore some of the causes and consequences of these acts, using examples of different ways in which antiscientific attitudes are manifest among scientists. © 2001 John Wiley & Sons, Inc. J Clin Psychol 57: 43–51, 2001.

Keywords: science; fraud; bias

In early 1999, Dr. Laura Schlessinger launched an attack against the American Psychological Association (APA) for being “unscientific” and “immoral.” At the basis of her attack was an article published in the APA’s Psychological Bulletin (Rind, Tromovitch, & Bauserman, 1998). This article reported on a meta-analysis of 59 studies in which college students who reported having been sexually abused as children were compared to those who did not report such a history. The article concluded that the effects of childhood sexual abuse were not as universally harmful as commonly supposed, and even that the abuse was seen as helpful by some victims under certain circumstances. The authors suggested that the concept of “childhood sexual abuse” (CSA) should be redefined to give it a less negative meaning.
Schlessinger called for the APA to rescind the results of the study and held the association responsible for what its independent scientist-members published in the association’s peer-reviewed journals. She called into question the entire peer-review process by advocating that when scientific findings are at variance with common knowledge, they should not be published, regardless of their peer-review status.

Correspondence concerning this article should be addressed to: Larry E. Beutler, Graduate School of Educa-
tion, University of California, Santa Barbara, CA 93106; e-mail: [email protected]


Under most circumstances, such a claim would likely have gone unnoticed or been written off as a dispute among scientists. In this instance, however, the effects of her attack on the APA were widespread. The Alaska legislature issued a statement of censure against the APA; Tom DeLay, the House Majority Whip, along with Republican representatives Salmon of Arizona, Pitts of Pennsylvania, and Weldon of Florida, sponsored a U.S. House resolution calling on the president of the United States to join Congress in condemning the APA; and the White House spokesperson, Joe Lockhart, was questioned by reporters about the president’s intentions regarding this resolution.
But Dr. Schlessinger is neither a practicing scientist nor a psychologist, though she does hold a doctorate in a biomedical science from a prestigious eastern university. What made her attack more than a matter of passing interest is that she is better known as “Dr. Laura,” the moderator of a nationally syndicated radio talk show in which she offers advice on “moral dilemmas.” She offers, as her mental health credentials, a master’s degree in Marriage and Family Counseling. In this case, however, she reclaimed her scientific training, though it was in a discipline far removed from mental health, and used her access to a large radio audience to attack psychological science, asserting that no true science would or should publish results of investigations that are at odds with commonly accepted “knowledge.” Her campaign to invoke political pressure to bend the peer-review process governing the distribution of scientific knowledge, in the name of preserving public morality, is reminiscent of Galileo being ordered by the church to recant his findings in order to preserve the belief that the Earth was the center of the universe. Truth by public persuasion and common sense is decidedly antiscientific.
The introduction to a volume devoted to the topic of antiscientific attitudes (Gross,
Levitt, & Lewis, 1996) leads off with the following from Goethe’s Faust, delivered by
Mephistopheles:
Scoff at all knowledge and despise
reason and science, those flowers of mankind.
Let the father of all lies
with dazzling necromancy make you blind,
then I’ll have you unconditionally—
(Gross, 1996, p. 1)

The foregoing quote captures, albeit in dramatic fashion, the argument presented herein. That is, antiscientific attitudes, especially among scientists, are dangerous because they run counter to the objectives of science. Such attitudes hinder scientific progress by refusing to accept the primacy of truth over personal, evaluative attitudes and beliefs. These problematic attitudes circumvent intellectual inquiry; they impede the search for authentic knowledge by remaining focused primarily on political, unreasonable, or unscientific objectives; and they operate as ideological filters, limiting the scope of possible investigations or producing unreasoned criticisms of scientific endeavors (Koertge, 1996).
There is no universal agreement as to what constitutes antiscience; however, one
could argue that fraud, plagiarism, and antiscientific attitudes are components that lie
somewhere along a broad continuum of antiscience. Scientific fraud, the most serious
type of scientific misconduct, is an intentional effort to deceive the scientific community
about the nature of research results (Schaffner, 1992). According to Schaffner, the defi-
nition of scientific fraud includes: (a) fabrication, (b) fudging, and (c) suppression of
results. Other serious types of scientific misconduct include plagiarism and suppression
of scientific advances. Antiscientific attitudes are the overarching and necessary elements if the more behavioral components of antiscience (e.g., fraud, plagiarism, and suppression) are to be manifested.

Antiscientific attitudes are nothing new; they have been with us since the emergence of scientific endeavors, developing most fervently with the Enlightenment itself (Gross, 1996). For example, Broad and Wade (1982) compiled a list of 34 suspected or known cases of scientific fraud, beginning with Hipparchus, a second-century B.C. Greek astronomer accused of plagiarism, and concluding with four cases that occurred in 1981. Early in the nineteenth century, Babbage (1830) wrote about the decline of science in England, and the case of Piltdown Man, in 1912, is perhaps the most well-known instance of scientific fraud in the twentieth century (Goodstein, 1991).
The rejection of science and reason is not limited to the ignorant or irrational. Many intellectually gifted individuals, for example, William Blake and Goethe (Gross, 1996), and Bernoulli, Mendel, Newton, and Dalton (Broad & Wade, 1982), are thought to have been guilty of antiscientific attitudes. Indeed, academic antiscience is alive and well in universities and among professionals in virtually all fields of science and practice (Bunge, 1996; Gross, 1996). Scientific endeavor and reason are not well served when they are constrained by oversubjectivity or by political or ideological considerations (concerns for the safety of participants and the conduct of safe research notwithstanding). Antiscientific attitudes are all the more problematic when one recognizes that they may lead to the types of misconduct that promote a public perception unsupportive of scientific endeavors in general, especially because research is more dependent on public support and funding today than ever before (Woolf, 1988).
Considering the total amount of research being conducted, outright scientific fraud probably occurs with a relatively low frequency (Goodstein, 1996; Woolf, 1988). Unfortunately, the public perception, and the perception of some in the media, is at variance with this view. Several cases have made headlines in recent times, many treated sensationally by the media (e.g., Breuning in 1987, Imanishi-Kari in 1991), and the press often fails to acknowledge that effective (but not foolproof) safeguards, such as peer review and replication, are in place (Woolf, 1988). In a similar vein, a survey (Swazey, Anderson, & Lewis, 1993) of 2,000 graduate students and 2,000 faculty in chemistry, engineering, sociology, and microbiology found that respondents believed problems involving misconduct in scientific research to be more common than insiders acknowledge. This study was reported widely in the popular press and, despite its many limitations, fuels the public perception that science is not as objective and honest as scientists themselves indicate (Goodstein, 1996).
Although the relative frequency of scientific fraud appears to be low, the conclusions from a project on scientific fraud and misconduct, sponsored jointly by the American Bar Association and the American Association for the Advancement of Science, included the statement that “the incidence of scientific fraud and deception appears to be increasing” (Woolf, 1988, p. 37). Indeed, antiscientific attitudes appear to be on the rise, impacting myriad arenas of science, including psychology (Gross, 1996; Koertge, 1996). Sir Cyril Burt, an educational psychologist, became the focus of inquiry when, in 1974, three years after his death, inconsistencies and inadequacies in his data led many to conclude that he was guilty of fraud (Goodstein, 1991); however, staunch supporters of Burt remain (e.g., Fletcher, 1991; Joynson, 1989). That same year, 1974, also saw the “painted mouse” incident, the first highly publicized instance of fraud in biomedical research. This case involved William Summerlin, a dermatologist who was caught fabricating his research results while at the Sloan-Kettering Institute in New York. In the early and mid-1980s, several noteworthy cases of alleged or known scientific misconduct or fraud were identified (i.e., Alsabti, Darsee, Hale, Long, Purves, Soman, Spector, and Straus; see Broad & Wade, 1982; Woolf, 1988). During the same decade, Stephen E. Breuning, a psychologist and prominent researcher in the area of psychotropic medications for the mentally challenged, was found guilty of a “chronic career of doctored research results and reports of research that was not conducted at all, dating from the mid-1970s in Chicago to April, 1984” (Holden, 1987, p. 1566). More recently, Theresa Imanishi-Kari was found guilty by the National Institutes of Health in 1991 of fabricating key genetics data, and the debate about whether early reports on cold fusion were fraudulent is far from over (Miller & Hersen, 1992).
On the lighter side, the prestigious think tank known as the Stanford Research Institute (SRI) was fooled into believing the claims of Uri Geller, a charismatic nightclub magician who asserted that he possessed powers of clairvoyance and prognostication and professed the ability to bend metal objects and distort magnetic fields (Randi, 1982).
Estimates of the frequency of intentional misconduct in research range from Koshland’s perception that research is 99% pure (Miller & Hersen, 1992) to St. James-Roberts’ (1972) report that over 90% of researchers have had some involvement in fabricating or massaging research data (Woolf, 1988). According to Woolf, the National Institutes of Health receive approximately 20 allegations of research misconduct per year, with about 12 meriting investigation; of these 12, only 1 or 2 per year result in findings that are cause for some type of action. The National Science Foundation investigated 12 allegations of misconduct from 1980 to 1987, mainly involving charges of plagiarism in research proposals (Woolf, 1988). An analysis of 26 reported cases of serious scientific misconduct occurring between 1980 and 1987 found that 1 was in physiology, 2 were in chemistry, 2 were in psychology, and the lion’s share, 21, were in biomedical science (Woolf, 1988). This is consistent with Goodstein’s (1996) view that outright fraud, when it does occur, is almost always in biomedical science.
The aforementioned element of intent in scientific misconduct is an important one, because it distinguishes fraud (which must not be tolerated) from bad or sloppy science (which we should diligently try to prevent). The former was discussed already; the latter may be illustrated by the case of the charismatic physician Franz Mesmer, who believed that all physical ailments were due to obstructed “magnetic fluid” flowing from the stars (his claims were discounted in an encounter with a skeptical Benjamin Franklin in 1784; Lopez, 1993). A more recent illustration of sloppy science occurred in 1903 and involved the “discovery” of N-rays, promulgated by their “discoverer,” the French physicist René Blondlot, who apparently believed that they existed (until the visiting American physicist R.W. Wood proved otherwise; Miller & Hersen, 1992). Robert Millikan (the American physicist and Nobel prize recipient) and Isaac Newton also are said to be guilty of at least some sloppy scientific practices (Broad & Wade, 1982; Goodstein, 1991).
The rise of antiscientific attitudes, made most concrete by their products (e.g., fraud, plagiarism), may be due to several factors. Goodstein (1996) identifies three elements universal to the cases of scientific fraud that he examined. The first is career pressure, which accords with Hilgartner’s (1990) view that the pressures of “publish or perish” and “grant or get going” are institutional elements that appear to contribute to scientific misconduct. The second is that the perpetrators of fraud “knew, or thought they knew, what the answer would turn out to be if they went to all the trouble of doing the work properly” (Goodstein, 1996, p. 33). The third is that the researchers were working in a professional arena that typically does not expect exact replications of experiments. Hilgartner (1990) also identifies moral (i.e., individual) elements and political factors that are partially responsible for antiscientific behavior.

Another factor that may contribute to antiscience is the tendency of colleagues to fear reprisals and avoid being “the whistle-blower,” thereby allowing any unscientific conduct they are aware of (or suspect) to continue. Surveys of faculty and other professional researchers consistently indicate that many believe scientific misconduct often goes unreported (Chalk, 1988). Sprague (1993) reports that whistle-blowers frequently are intimidated in many professional arenas, and he recounts the negative consequences he personally experienced as the whistle-blower in the Breuning case of scientific fraud. The fear of retribution for whistle-blowing is probably greatest among students (undergraduate and graduate) and postdocs or residents involved in research projects.
A more problematic and insidious process involves resistance to scientific discovery by scientists themselves (Barber, 1961). Empirical findings may be perceived as threatening the careers, ideologies, or political agendas of some; such resistance, however, runs counter to the logical process and objectivity that are the essence of the true scientist’s attitude (Broad & Wade, 1982). Planck (1949), upon encountering resistance to his own scientific discoveries and observing resistance to the discoveries of others, stated, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it” (pp. 33–34). Barber (1961) points to the difficulty Lister had in promoting his theory of antisepsis, which, of course, is fully accepted today. In fact, many great scientists (e.g., Copernicus, Young, Pasteur, Adams) encountered resistance to their original discoveries (Barber, 1961). It is difficult to say whether this resistance is more pronounced today than in years past; however, the tendency of many scientists to resist scientific discovery runs counter to the stereotype of the scientist as an objective and open-minded individual (Barber, 1961).
Prior to his death, Neil Jacobson (personal communication to L.E.B., February 1998) proposed a credo for those who purport to be clinical scientists: “Don’t do things that are directly contradicted by empirical evidence, especially when there are empirically supported alternatives.”
This scientific attitude seems straightforward. Is it not logical? Can it not be the credo of practice? The question of what constitutes an antiscientific attitude hinges on the related question of what constitutes “empirical” evidence, and especially of what constitutes a demonstration that such evidence has reached the point of being “satisfactory.” What is empirical evidence, and to whom must it be satisfactory?
Philosophically, the resolution of these questions revolves around the criteria that one uses to evaluate the truth or validity of information. The nature of evidence divides those who are primarily identified with the practice of psychology from those who are identified with the science of psychology. “Empirical evidence” is a term that most scientists take to indicate systematic and objective observations obtained through the scientific method. But “empirical” also is a term that both Webster and many practitioners take to indicate only that something has been directly observed. In this latter meaning, empirical evidence need not be derived from scientific methods, nor need it be systematic, replicated, or objective. Do clinical observations constitute sufficient empiricism to warrant making decisions about which treatment methods should be prescribed and proscribed?
There are strong reasons why the scientist rejects unsystematic clinical observations as the basis for defining truth. But scientists themselves are far from consistent in their own applications of value-free scientific criteria of empirical evidence, as witnessed by recent controversies surrounding the decision to accord Raymond Cattell special recognition for his contributions to psychology and the inclusion of eye movement desensitization and reprocessing (EMDR) therapy among treatments listed as “probably efficacious” by various APA bodies. Scientific progress is compromised when systematic observation is allowed to be colored and overridden by moral imperatives and personal beliefs, no matter how justified these moral concerns and beliefs may be in their own right. The case of EMDR will be examined as an example of failures in science’s search for objectivity.
In 1998, the Division 12 Task Force on Empirically Supported Treatments published its third report (Chambless et al., 1998). As in previous reports, it identified psychotherapies under two categories representing the strength of the support available for a given treatment, based on extensive review by Task Force members. The highest category was “well established,” which required that at least two separately conducted (i.e., independent) randomized clinical trials demonstrate that the treatment was more effective than no treatment or a placebo, or that it was as effective as an established treatment for the same population. A randomized clinical trial (RCT) required that the therapy be identifiable (usually meaning that it was conducted according to a manual), that it was applied to well-identified patients, that treatments were assigned randomly, and that outcomes were assessed reliably and independently of the treatment itself (e.g., by raters blind to both the type[s] of therapy and the hypotheses).
The lower category was “probably efficacious.” This category required that the treatment meet the same criteria as the higher designation, except that the studies did not need to be conducted by separate investigators. Neither category of support required that the treatment’s proposed mechanism of action be proven correct, that the treatment be better than other treatments in the field, or that it be unique.
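To make the two-tier scheme concrete, the decision rule just described can be expressed as a short sketch. The following Python fragment is our own illustrative formalization, not anything published by the Task Force; the Study record, the function names, and the reduction of each criterion to a yes/no flag are hypothetical simplifications of judgments that, in actual reviews, required expert deliberation.

    from dataclasses import dataclass

    @dataclass
    class Study:
        # One randomized clinical trial, reduced to the criteria named above.
        manualized: bool                # therapy identifiable, usually via a treatment manual
        well_identified_patients: bool  # applied to a well-identified patient population
        randomized: bool                # treatments assigned at random
        blind_assessment: bool          # outcomes rated independently of the treatment
        beat_control_or_matched: bool   # superior to no treatment/placebo, or as effective
                                        # as an established treatment
        research_team: str              # label used only to judge investigator independence

    def is_supporting_rct(s: Study) -> bool:
        # A trial counts as support only if it meets every methodological criterion.
        return (s.manualized and s.well_identified_patients and s.randomized
                and s.blind_assessment and s.beat_control_or_matched)

    def classify_treatment(studies: list) -> str:
        supporting = [s for s in studies if is_supporting_rct(s)]
        if len(supporting) >= 2:
            # "Well established" additionally requires independent investigators;
            # "probably efficacious" applies the same criteria minus independence.
            if len({s.research_team for s in supporting}) >= 2:
                return "well established"
            return "probably efficacious"
        return "insufficient evidence"

Note that neither branch inspects the treatment’s theory, its superiority to rival treatments, or its developer; as the criteria above make clear, those considerations play no role in either designation.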
Like the previous reports, this one received the usual negative reviews, with practitioners expressing concern that it would limit practice to only the investigated therapies, that it ignored therapist training and expertise, and that it would be misused by managed health care organizations to unduly control therapists. The concerns of practitioners seldom addressed whether the conclusions were valid, and when practitioners commented at all on the scientific value of the guidelines, they did so in only the most general way. For example, critics, academics and practitioners alike, argued that the reliance of randomized clinical trials on group designs ignored the individual nature of change; that the outcome measures were largely insensitive to the subtlety of change and to the processes that best represent the changes associated with psychotherapy; and that the samples used in RCT studies were not representative of the complex conditions that typify practice, thus thwarting efforts to generalize the studies to practice.
The more methodologically sophisticated pointed out that there are inherent flaws in the assumptions of randomization that make reliance on this criterion inappropriate; that nondiagnostic factors (e.g., coping style, reactance level) were not suitably considered; and that two treatments coexisting under a single label (e.g., Cognitive Behavioral Therapy) often hid a large number of procedural differences, attenuating efforts both to find differences and to generalize to other treatments of similar title.
These are all pointed and reasonable concerns, whether scientific or practical. But in the midst of these, there was a flurry of very disconcerting exchanges among members of the e-mail network sponsored by the Society for a Scientific Clinical Psychology (SSCP), a body comprised largely of academic psychologists, many of whom are leaders in our field. These exchanges focused largely on the fact that the Division 12 Task Force had placed Eye Movement Desensitization and Reprocessing (EMDR) on the “probably efficacious” list, noting that it may be effective for veteran-related trauma. The possibility that this treatment would be included was of sufficient concern to some that they threatened withdrawal of membership from the APA should this occur. One network member contacted the Task Force chair before the report was released to lobby for excluding the named therapy, claiming that the research results were faked or unduly influenced by Francine Shapiro, who developed the procedure. This led to a second review of the EMDR research by a separate panel of Task Force members, but a similar conclusion was reached: The procedure met criteria for support as a “probably efficacious” treatment.
The debate among subscribers to the SSCP network did not abate over the next year. Heated arguments persisted, largely opposing any semblance of scientific credibility being given to EMDR by any APA body. These arguments took some interesting forms. One early argument was based on an advertisement for one of Shapiro’s books, in which a publicity notice claimed that this treatment was uniquely effective and identified it as “a breakthrough treatment.” The argument was made that by allowing the publicist to print such a statement, Shapiro revealed herself to be unethical. Another complaint was lodged because Shapiro did not receive her psychology training in a conventional, university-based training program.
More reasoned arguments suggested that because EMDR had not been tested directly against more conventional, exposure-based treatments, it should be excluded until it proved itself superior to those treatments. These arguments ignored the fact that this criterion was not imposed on any of the more conventional therapies that were reviewed, for which a comparison with a no-treatment or placebo-treatment control was sufficient.
Still another argument attacked the method through which Shapiro discovered eye movements as a potentially important contributor to the alleviation of anxiety (i.e., subjective experience while walking in the woods and noticing her own eye movements) and argued that the lack of scientific foundation in the theory was sufficient reason to ignore any evidence of its efficacy. This argument ignored the fact that many poor theories have been found to yield good results, and some good theories have had less than positive findings.
Many arguments confounded Shapiro’s assumed ethical breach with the effects of the treatment, with some scientists arguing that any treatment whose major advocate was so unethical as to claim unique and powerful effects in the absence of sufficient research should be excluded automatically from the list. Compliance with this recommendation, obviously, would have placed the Task Force members in the position of ethical police, called upon to investigate the personal lives of anyone who wrote about and advocated one or another psychotherapy approach.
As a member of the Division 12 Task Force, the first author ventured at several points to point out the flaws in confounding ethical behavior, personal reactions to the treatment developer, and the scientific evidence. Although I (L.E.B.1) never have been trained in EMDR, have met Shapiro on only one occasion, have never written about or advocated the use of EMDR, and do not consider myself an advocate for anything other than good, objective science, my comments were met with severe attack. I was discredited as an “EMDR puppet.” Someone saw me speaking with Shapiro on the occasion of our meeting and inferred that my opinion was somehow tainted. A foreign respondent who was not personally known to those engaged in the debate dared to come to my defense, but was attacked for having unusual sexual habits.

1. In the paragraphs where the first person singular is used, we are recounting the experiences of the first author, L.E. Beutler.
My efforts to encourage the use of the same criteria for disputed as for accepted procedures when assessing their value, and to keep the personalities and ethics of the developer out of considerations of treatment effectiveness, had little impact on my scientific colleagues. This left both of us with serious concerns about the ability of scientists to remain objective. It became clear that some of our colleagues, for reasons that are unknown to us, have developed so strong a negative reaction to Francine Shapiro that they would ignore any and all scientific evidence that provides any support for her viewpoints. Others, often individuals whom we respect and admire, ignored positive findings or distorted the presentation of findings, ostensibly in order to cast EMDR in the worst possible light. One colleague unabashedly responded to Shapiro’s suggestion that, when evaluating EMDR, the procedure should be checked for compliance with a set of criteria that she had previously published, with the (paraphrased) quip, “You don’t have the right to tell us how to do research.”
Yet, checks of treatment fidelity and compliance with criteria developed by the orig-
inal therapy theorist have been a routinely advocated and accepted standard in RCT
comparisons since the inception of the NIMH Treatment of Depression Collaborative
Research Program, the project from which all of these criteria developed.
We are reminded of the disastrous effects on a country’s science incurred when postrevolutionary Russia outlawed all but Pavlovian research and writing in the early part of the twentieth century. All non-Pavlovian theories were considered contrary to the newly formed values of Marxism. The country’s psychological inquiry eventually met a dead end, research on psychopathology stopped, and the nature of psychological knowledge within Russia stagnated. For many years, one was not allowed to discuss alternative approaches because they were seen as contrary to the political values of the time. The confusion of values with empirical findings seriously threatens the objectivity of science and restrains the search for truth.
Elsewhere, we (Beutler & Harwood, 2000) suggested that success as both a scientist and a practitioner relies on six keys, arranged in their assumed order of development: attitudes of objectivity, regard, sensitivity, and curiosity; knowledge of the principles of behavior and behavior change; tools that can be applied effectively to our clinical or research purposes; techniques that are effective and powerful; time and patience to use the procedures and observe the effects; and imagination enough to approach new situations with creativity and ingenuity.
Unfortunately, many have gained knowledge without first developing the attitude that is cardinal to good science: allowing empirical phenomena to speak for themselves. Until scientists are able to be wrong, and to abandon previous research directions as unproductive, they are not able to accept new research findings objectively. Just as one must be willing to be a student in order to be a good teacher, one must be willing to be wrong in order to recognize scientific fact. This is an attitude that we must encourage young scientists to develop. In science, there never is a wrong answer, only a probability statement. We must be willing to reverse ourselves at any point in order to discover the complexity of our world. As we do so, we will be able to appreciate the bias at which even our best humor hints: “the only thing that causes cancer is research,” and, “after amputating the legs of a cricket, its failure to jump to a previously conditioned auditory stimulus means that the cricket’s legs are connected to its hearing.”
In conclusion: “Let us tolerate, nay encourage, all search for truth, however eccentric
it may look, as long as it abides by reason and experience. But let us fight all attempts to
suppress, discredit, or fake this search. Let all genuine intellectuals join the Truth Squad
and help dismantle the ‘postmodern’ Trojan horse stabled in Academia before it destroys
them” (Bunge, 1996, p. 111).

References
Babbage, C. (1830). Reflections on the decline of science in England and on some of its causes.
London: Fellows.

Barber, B. (1961). Resistance by scientists to scientific discovery. Science, 134, 596–602.
Beutler, L.E., & Harwood, T.M. (2000). Prescriptive psychotherapy. New York: Oxford University
Press.
Broad, W., & Wade, N. (1982). Betrayers of the truth. New York: Simon & Schuster.
Bunge, M. (1996). In praise of intolerance to charlatanism in academia. In P.R. Gross, N. Levitt, &
M.W. Lewis (Eds.), The flight from science and reason (pp. 96–115). New York: The New
York Academy of Sciences.
Chalk, R. (1988). Workshop summary. Project on Scientific Fraud and Misconduct: Report on
Workshop Number One (pp. 1–36). Washington, DC: American Association for the Advance-
ment of Science.
Chambless, D.L., Baker, M.J., Baucom, D.H., Beutler, L.E., Calhoun, K.S., Crits-Christoph, P.,
Daiuto, A., DeRubeis, R., Detweiler, J., Haaga, D.A.F., Johnson, S.B., McCurry, S., Mueser,
K. T., Pope, K.S., Sanderson, W.C., Shoham, V., Stickle, T., Williams, D.A., & Woody, S.R.
(1998). Update on empirically validated therapies, II. The Clinical Psychologist, 51, 3–16.
Fletcher, R. (1991). Science, ideology, and the media: The Cyril Burt scandal. New Brunswick, NJ:
Transaction Publishers.
Goodstein, D. (1991). Scientific fraud. Engineering & Science, Winter, 11–19.
Goodstein, D. (1996). Conduct and misconduct in science. In P.R. Gross, N. Levitt, & M.W. Lewis
(Eds.), The flight from science and reason (pp. 31–38). New York: The New York Academy of
Sciences.
Gross, P.R. (1996). Introduction. In P.R. Gross, N. Levitt, & M.W. Lewis (Eds.), The flight from
science and reason (pp. 1–7). New York: The New York Academy of Sciences.
Gross, P.R., Levitt, N., & Lewis, M.W. (Eds.). (1996). The flight from science and reason. New
York: The New York Academy of Sciences.
Hilgartner, S. (1990). Research fraud, misconduct, and the IRB. IRB: A Review of Human Subjects
Research, 12, 1– 4.
Holden, C. (1987). NIMH finds a case of “serious misconduct.” Science, 235, 1566–1567.
Joynson, R.B. (1989). The Burt affair. Worcester, MA: Billings & Sons Limited.
Koertge, N. (1996). Wrestling with the social constructor. In P.R. Gross, N. Levitt, & M.W. Lewis
(Eds.), The flight from science and reason (pp. 266–273). New York: The New York Academy
of Sciences.
Lopez, C.A. (1993). Franklin and Mesmer: An encounter. Yale Journal of Biology and Medicine,
66, 325–331.
Miller, D.J., & Hersen, M. (1992). Research fraud in the behavioral and biomedical sciences. New
York: Wiley.
Planck, M. (1949). Scientific autobiography and other papers. Westport, CT: Greenwood Press.
Randi, J. (1982). The truth about Uri Geller. Buffalo, NY: Prometheus Books.
Rind, B., Tromovitch, P., & Bauserman, R. (1998). A meta-analytic examination of assumed prop-
erties of child sexual abuse using college samples. Psychological Bulletin, 124, 22–53.
Schaffner, K.F. (1992). Ethics and the nature of empirical science. In D.J. Miller & M. Hersen
(Eds.), Research fraud in the behavioral and biomedical sciences. New York: Wiley.
Sprague, R.L. (1993). Whistleblowing: A very unpleasant avocation. Ethics and Behavior, 3, 103–133.
St. James-Roberts, I. (1972). Cheating in science. New Scientist, 72, 466.
Swazey, J.P., Anderson, M.S., & Lewis, K.S. (1993). Ethical problems in academic research. Amer-
ican Scientist, 81, 542–553.
Woolf, P. (1988). Deception in scientific research. Project on Scientific Fraud and Misconduct:
Report on Workshop Number One (pp. 37–86). Washington, DC: American Association for
the Advancement of Science.
