Baron - Bioethics vs. Utilitarianism
Bioethics vs. Utilitarianism
Bioethics is a recent phenomenon. It is an attempt to develop institutions that help people make difficult decisions about health care, health
research, and other research and applications in biology, such as genetic
engineering of crops. It draws on moral philosophy, law, and some of its
own traditions. These traditions take the form of documents and
principles that grew out of the Nuremberg trials after World War II. In
some cases, these documents have acquired the force of law, but in most
cases the principles are extralegal, that is, enforced by general agreement. Bioethics now has its own journals and degree programs. You
can make a living (with difficulty) as a bioethicist.
As a field, bioethics plays three roles somewhat analogous to
medicine. It is an applied discipline. People trained in bioethics work in
hospitals and other medical settings, sometimes for pay, to help staff and
patients think about difficult decisions. I was, for several years, a member of the Ethics Committee of the Hospital of the University of Pennsylvania. This committee had no full-time paid staff, but it did draw on various administrative resources of the hospital. We had monthly meetings that consisted of case presentations, usually reports of cases that were settled, and sometimes more general presentations. Members of the committee were on call for quickly arranged consults, in which four or five members would meet with medical staff and patient representatives (and rarely the patient herself) over some difficult decision. The most common involved pulling the plug, but other decisions involved
matters such as whether to turn a patient out of the hospital, given that
she was no longer in need of hospital services but still unable to live on
her own because of a combination of incompetence, poverty, and lack of
others who might help her. Some members of this committee had taken
courses in bioethics, but most had no special training. Nonetheless, the
committee saw its work as informed by the tradition of applied
bioethics as I discuss it here.
The second role is academic. As I noted, bioethics has its own journals and societies, and some universities have departments of bioethics,
or parts of departments devoted to it. I have almost nothing to say
about the academic side of bioethics. The literature is huge, and it
would take me too far afield. Some contributors to this literature would
agree with things that I say here; others would dispute them. Many are
utilitarians (although few of these are involved with decision analysis).
There is no consensus. The consensus arises when the bioethics literature is distilled into its final common path, that is, the actual influence of bioethical discussion on outcomes, and that will be my focus.
The third role is in the formulation of codes of ethics. Here the situation is somewhat unusual, since most codes of ethics are written by the
practitioners to whom the codes apply. When it comes to the ethics of
research, in particular, a certain tension arises between researchers and
those nonresearchers who write the rules. The tension plays out largely
in the review boards that examine research proposals (chapter 7).
2.1
because of their ethnic background or their political views, were subjected to horrendous and harmful procedures. They had no choice, but of course their options had already been sharply limited by their imprisonment. (Similar studies went on with conscientious objectors in the United States, such as the starvation studies of Keys and Keys, but most of these subjects had more options.)
These abuses were uncovered at the Nuremberg war-crimes trials after Germany's defeat (1945-1946). The court, as part of its verdict against several physicians involved in the experiments, proposed a set of principles (Trials of War Criminals . . . 1949), which never acquired the force of law but which has been the basis of all subsequent codes.
The first principle of the Nuremberg Code was:
The voluntary consent of the human subject is absolutely essential. This means that the person involved should have
legal capacity to give consent; should be so situated as to
be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, overreaching, or other ulterior form of constraint or coercion;
and should have sufficient knowledge and comprehension
of the elements of the subject matter involved as to enable
him to make an understanding and enlightened decision.
This latter element requires that before the acceptance of an affirmative decision by the experimental subject there should be made known to him the nature, duration, and purpose of the experiment; the method and means by which it is to be conducted; all inconveniences and hazards reasonably to be expected; and the effects upon his health or person which may possibly come from his participation in the experiment.
Although this principle sounds reasonable, its subsequent interpretation has led to rules that have made things worse, such as
the rules that prevented infants from being subjects in the experiment
that killed Jesse Gelsinger, and a three-year moratorium on much
emergency research in the United States (section 6.4).
The Nuremberg Code inspired later codes, in particular the Declaration of Helsinki of 1964 (World Medical Organization 1996) and the Belmont Report (National Commission for the Protection of Human Subjects 1979). The last of these was in part a response to the Tuskegee study of syphilis, in which 600 black men, 400 of whom had syphilis, were monitored from 1932 to 1972, without treatment, to observe the natural course of this lifelong, debilitating disease, even though an effective treatment (penicillin) became available in the 1950s. The experiment ended as a result of press coverage. In 1974, partly as a result of this coverage, the United States Congress passed the National Research Act, which, among other things, appointed a commission to produce a report. The resulting Belmont Report has been the basis of United States policy, although, like the Nuremberg Code, it never acquired the force of law (National Commission for the Protection of Human Subjects 1979). The 1974 act also mandated Institutional Review Boards (IRBs) for reviewing human subjects research in institutions that received United States government funds.1
Many national governments, and the United Nations, have established formal bodies to deal with bioethical questions. Some were created because of some immediate concern but then went on to address
other issues. For example, in the United States, President George W. Bush created the President's Council on Bioethics largely to help him decide what to do about stem cell research, but the Council has now produced several reports dealing with other issues, such as the use of biotechnology to increase happiness. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has a bioethics section that coordinates the activities of member states, most of which have their own bioethics advisory committees or agencies. In the United States, individual states have bioethics panels. Universities run degree programs in bioethics, and their graduates are employed in hospitals, government agencies, and professional societies.
2.2 Principles of bioethics
Most introductions to bioethics provide an overview of various philosophical approaches to morality but then conclude with some sort of list of basic principles. Typical are those listed in the Belmont Report (National Commission for the Protection of Human Subjects 1979). The quotations that follow are from the Belmont Report, Part B, "Basic ethical principles."

1. Fairchild and Bayer (1999) discuss critically the extensive use of Tuskegee as a source of analogies in bioethics.
2.2.2 Beneficence
Persons are treated in an ethical manner not only by respecting their
decisions and protecting them from harm, but also by making efforts
to secure their well-being. . . . Two general rules have been formulated
as complementary expressions of beneficent actions in this sense: (1) do
not harm and (2) maximize possible benefits and minimize possible
harms (Belmont Report, Part B, National Commission for the Protection of Human Subjects 1979).
The "do no harm" maxim, originally from the Hippocratic oath, is
often interpreted as meaning that one should not injure one person regardless of the benefits that might come to others, but the Belmont Report argues that it is acceptable to increase the risk of harm to someone
in order to help someone else (e.g., in research). Even the law considers
an increase in the risk of harm as a harm in itself. The report seems to
be attempting to balance risk of harm and risk of benefit, but it does
this in a crude way.
The principle of beneficence also creates conflict: "A difficult ethical problem remains, for example, about research that presents more than minimal risk without immediate prospect of direct benefit to the children involved. Some have argued that such research is inadmissible, while others have pointed out that this limit would rule out much research promising great benefit to children in the future."
2.2.3 Justice
An injustice occurs when some benefit to which a person is entitled is denied without good reason or when some burden is imposed unduly. Another way of conceiving the principle of justice is that "equals ought to be treated equally." As the Belmont Report (Part B, National Commission for the Protection of Human Subjects 1979) notes, this statement begs the questions of what benefits or burdens are due or of what the measure of equality is (contribution, need, effort, merit, and so forth).
In practice, considerations of justice come up when, for example, research is done on poor subjects, who bear the risk, while the benefits of
the research often accrue to rich patients who can pay for the resulting
new technology. Thus, justice in practice is often a way of limiting further harm to those who are already down and out.
These three principles generate their own internal conflicts, but they
also conflict with each other. Justice may demand punishment, but beneficence may demand mercy in the same case.
2.3 Bioethical principles vs. utilitarianism
The situation in applied bioethics is much like that in the law. We have
superordinate principles either in the common law or a constitution.
These principles are then made more specific by legislatures and
courts, and more specific still by case law. Applied bioethics does not
seem to have an analogue of case law.
More broadly, the same sort of cognitive processes are involved in the development of religious rules. Some case arises, and judgments are made by applying known principles. These are weighed against each other somehow. Ultimately a new precedent is set. If a similar case uses the precedent, then a new, subordinate principle emerges (Hare 1952, ch. 4). Much of the academic literature concerns the reconciliation of principles, but I shall deal with practice. In practice, for example, when review boards decide cases or when government agencies write regulations, the process is more intuitive. When principles conflict, people differ in their belief about which principle dominates. People may be especially attached to one principle or another, or chance factors may
judgment, and that fact means that utilitarianism does not dictate the
answer mechanically. But at least it tells us what judgment we ought to
try to make.
Finally, consider justice. Its concern for the worse-off is rooted in utilitarianism combined with the declining marginal utility of goods. Most goods have less utility, the more of them we have. In the case of money the reason for this is simple. As we gain money in a certain period of time, we rationally spend it first on the things that are most important. Food before fashion. Thus, the first $100 gives us more utility than the last $100. Other goods show the same decline because of satiation. The first apple eaten is better than the fifth. There are exceptions to this rule; some goods, such as peanuts, perhaps, create increased desire for the same goods up to a point. But these are interesting only because they are rare, and they do not affect the general conclusion about the declining utility of money.
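The arithmetic of declining marginal utility can be sketched numerically. Here I assume a logarithmic utility function for money, which is one conventional model chosen only for illustration, not a claim about anyone's actual utilities:

```python
import math

def utility(wealth: float) -> float:
    """Utility of total wealth under a logarithmic model (an assumption
    chosen for illustration; any concave function shows the same pattern)."""
    return math.log(wealth)

# Marginal utility of an extra $100 at two different wealth levels.
gain_when_poorer = utility(1_100) - utility(1_000)
gain_when_richer = utility(10_100) - utility(10_000)

# The same $100 buys far more utility for the poorer person.
print(gain_when_poorer > gain_when_richer)  # True
print(round(gain_when_poorer / gain_when_richer, 1))  # roughly 9.6
```

Any strictly concave utility function yields the same ordering; the logarithm merely makes the comparison easy to compute.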
The principle of declining marginal utility by itself would prescribe
equality in the allocation of money and other goods. If we start from
equality and increase the wealth of one person at the expense of another, the gainer would gain less than the loser loses, in terms of what
the money can buy. On the other side, if all money were distributed
equally, then money could no longer provide an incentive to work. The
amount of productive work would be reduced. Thus, the other side of
justice is roughly the idea of equity as distinct from equality, that is, the
idea (as stated by Aristotle in the Nicomachean Ethics and by many
others since) that reward should be in proportion to contribution. The
precise formula of proportionality is not necessarily the optimal incentive, but it is surely a reasonable approximation. Ideally, to maximize utility across all people, some sort of compromise between these two principles is needed, as is discussed at length in the theory of optimal taxation. For example, one common proposal is to tax income at a fixed percentage after subtracting some minimum amount.
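That proposal is simple enough to state in a few lines of code. The 25% rate and $20,000 exemption below are arbitrary numbers for illustration, not values defended in the optimal-taxation literature:

```python
def flat_tax(income: float, rate: float = 0.25, exemption: float = 20_000) -> float:
    """Tax income at a fixed percentage after subtracting a minimum amount.
    The rate and exemption are hypothetical; income below the exemption owes nothing."""
    return rate * max(0.0, income - exemption)

print(flat_tax(15_000))  # 0.0: entirely below the exemption
print(flat_tax(60_000))  # 10000.0: 25% of the 40,000 above the exemption
```

Because the exemption is subtracted first, the average tax rate rises smoothly with income even though the marginal rate is flat, which is the compromise between equality and incentive that the text describes.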
Although the idea of equality and the idea of equity are both rooted in utilitarian theory, people may apply these ideas even when the utilitarian justification is absent. Greene and Baron (2001), for example, found that people want to distribute utility itself in the same way they would distribute money. They preferred distributions that were more equal. The subjects' judgments were internally inconsistent, because the subjects themselves judged that the utility of money was marginally declining; hence they should have taken this decline into account, if only a little, in their judgments of distributions of utility.
On the other side, Baron and Ritov (1993) found that people wanted to penalize companies for causing harm even when the size of the penalty would not affect compensation to the victim and when the penalty would provide no incentive for anyone to change behavior (because the penalty would be secret and the company that did the harm was going out of business anyway). The idea of penalties for causing harm is an application of the equity principle to the case of losses. Possibly the same sort of result would be found for gains.
In sum, the basic principles of traditional ethics look a lot like heuristics designed to be rough guides to utility maximization, in other words, rules of thumb. When their utilitarian justification is not understood, they take on a life of their own, so that they are applied even in cases where they fail to maximize utility.
does not help. In fact it hinders, because some judges will want to
follow it literally. Thus, the attempt to mechanize judgment fails. People
must fall back on trading off quantitative attributes even while trying to
follow rules that prohibit such trade-offs.
The utilitarian alternative I advocate is to make the trade-offs and judgments, when they are involved, explicit. Given that judgments are involved anyway, the introduction of judgment is no loss. It is an explicit acknowledgment of what happens anyway. Making the process explicit, in the form of decision analysis, may lead to better judgments.
At the very least, explication serves the purpose of describing more
accurately how decisions are made, for the benefit of those affected by
them. In the next chapter, I begin the discussion of the basis of decision
analysis.
The effort to make the trade-offs explicit need not always involve numbers. It could simply involve bearing in mind a rule that permits them, for example, "weigh the probability and magnitude of harm against the benefits." Such a rule, although vague, would be as easy to apply as the (equally vague) rules now in use, but it would also focus judges on the issues that matter most in terms of consequences.
It may seem that decision analysis involves false precision. It is true,
of course, that when we judge a probability to be .43, we might actually be happy with a judgment of .38 or .50. Most numerical judgments
are soft, and some are softer than others. However, most of the time
the results of analysis will not depend on which reasonable numbers
we choose. When they do, the decision is truly a close one, and we
cannot go far wrong with either choice. True precision is not required.
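A toy sensitivity analysis makes the point concrete. The payoffs below are invented for a hypothetical act-or-not decision; the question is whether the choice flips as the soft probability judgment moves between .38 and .50:

```python
def expected_utility(p: float, u_good: float, u_bad: float) -> float:
    """Expected utility of acting: the good outcome with probability p,
    the bad one otherwise. All payoff numbers here are hypothetical."""
    return p * u_good + (1 - p) * u_bad

U_DO_NOTHING = 0.0  # assumed baseline utility of not acting

# Sweep the judged probability across its plausible ("soft") range.
for p in (0.38, 0.43, 0.50):
    eu = expected_utility(p, u_good=100, u_bad=-40)
    print(p, eu > U_DO_NOTHING)  # acting wins at every value in the range

# Because the choice is the same across the whole range, the softness
# of the probability judgment does not affect the decision.
```

Only when the plausible range of a judgment straddles the point where the two options tie does the softness matter, and there, as the text notes, either choice is nearly as good as the other.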
2.5 Conclusion