The idea that we have an unlimited moral imperative to pursue medical research is deeply rooted in American society and medicine. In this provocative work, Daniel Callahan exposes the ways in which such a seemingly high and humane ideal can be corrupted and distorted into a harmful practice.
Medical research, with its power to attract money and political support, and its promise of cures for a wide range of medical burdens, has good and bad sides—which are often indistinguishable. In What Price Better Health?, Callahan teases out the distinctions and differences, revealing the difficulties that result when the research imperative is suffused with excessive zeal, adulterated by the profit motive, or used to justify cutting moral corners. Exploring the National Institutes of Health's annual budget, the inflated estimates of health care cost savings that result from research, the high prices charged by drug companies, the use and misuse of human subjects for medical testing, and the controversies surrounding human cloning and stem cell research, Callahan clarifies the fine line between doing good and doing harm in the name of medical progress. His work shows that medical research must be understood in light of other social and economic needs and how even the research imperative, dedicated to the highest human good, has its limits.
What Price Better Health? Hazards of the Research Imperative
Is Research a Moral Obligation?
In 1959 Congress passed a "health for peace" bill, behind which was a view of disease and disability as "the common enemy of all nations and peoples."1 In 1970 President Nixon declared a "war" against cancer. Speaking of a proposal in Great Britain in 2000 to allow stem-cell research to go forward, Science Minister Lord Sainsbury said, "The important benefits which can come from this research outweigh any other considerations," a statement that one newspaper paraphrased as outweighing "ethical concerns."2 Arguing for the pursuit of potentially hazardous germ-line therapy, Dr. W. French Anderson, editor-in-chief of Human Gene Therapy, declared that "we as caring human beings have a moral mandate to cure disease and prevent suffering."3 A similar note was struck in an article by two ethicists who held that there is a "prima facie moral obligation" to carry out research on germ-cell gene therapy.4
As if that were not enough, in 1999 a distinguished group of scientists, including many Nobel laureates, issued a statement urging federal support of stem-cell research. The scientists said that because of its "enormous potential for the effective treatment of human disease, there is a moral imperative to pursue it."5 Two other ethicists said much the same, speaking of "the moral imperative of compassion that compels stem cell research," and adding that at stake are the "criteria for moral sacrifices of human life," a possibility not unacceptable to them.6 The Human Embryo Research Panel, created by the National Institutes of Health, contended in 1994 that federal funding to create embryos for research purposes should be allowed to go forward "when the research by its very nature cannot otherwise be validly conducted," and "when the fertilization of oocytes is necessary for the validity of a study that is potentially of outstanding scientific and therapeutic value."7
The tenor of these various quotations is clear. The proper stance toward disease is that of warfare, with unconditional surrender the goal. Ethical objections, when they arise, should give way to the likely benefits of research, even if the benefits are still speculative (as with stem-cell and germ-line research). The argument for setting aside ethical considerations when research could not otherwise be "validly conducted" is particularly striking. It echoes an objection many researchers made during the 1960s to the imminent regulation of human-subject research: regulations would cripple research. That kind of reasoning is the research imperative in its most naked—and hazardous—form, the end unapologetically justifying the means. I am by no means claiming that most researchers or ethicists hold such views. But that reasoning is one of the "shadows" this book is about.
What should be made of this way of thinking about the claims of research? How appropriate is the language of warfare, and how extensive and demanding is the so-called moral imperative of research? I begin by exploring those questions and then move on to the wars against death and aging, two fundamental, inescapable biological realities so far—and two notorious and clever foes.
The Metaphor of "War"
Since at least the 1880s—with the identification of bacteria as agents of disease—the metaphor of a "war" against illness and suffering has been popular and widely deployed. Cancer cells "invade" the body, "war stories" are a feature of life "in the trenches" of medicine, and the constant hope is for a "magic bullet" that will cure disease in an instant.8 Since there are surely many features of medicine that may be likened to war, the metaphor is hardly far-fetched, and it has proved highly serviceable time and again in the political effort to gain money for research.
Less noticed are the metaphor's liabilities, inviting excessive zeal and a cutting of moral corners. The legal scholar George Annas has likened the quest for a cure of disease to that of the ancient search for the Holy Grail: "Like the knights of old, a medical researcher's quest of the good, whether that be progress in general or a cure for AIDS or cancer specifically, can lead to the destruction of human values we hold central to a civilized life, such as dignity and liberty."9 "Military thinking," he has also written, "concentrates on the physical, sees control as central, and encourages the expenditure of massive resources to achieve dominance."10 The literary critic Susan Sontag, herself a survivor of cancer, has written, "We are not being invaded. The body is not a battlefield. . . . We—medicine, society—are not authorized to fight back by any means possible. . . . About that metaphor, the military one, I would say, if I may paraphrase Lucretius: Give it back to the war-makers."11
While some authors have tried to soften the metaphor by applying a just war theory to the war against disease (a sensible enough effort), the reality of warfare does not readily lend itself to a respect for nuanced moral theory. Warriors get carried away with the fight, trading nasty blow for nasty blow, single-mindedly considering their cause self-evidently valid, shrugging aside moral sensitivities and principles as eminently dispensable when so much else of greater value is thought to be at stake. It is a dangerous way of thinking, all the more so when—as is the case with so much recent research enthusiasm—both the therapeutic benefits and the social implications are uncertain.
Is Research a Moral Obligation?
Yet if the metaphor of war is harmful, lying behind it is the notion of an insistent, supposedly undeniable moral obligation. Nations go to war, at least in just wars, to defend their territory, their values, their way of life. They can hardly do otherwise than consider the right to self-defense to be powerful, a demanding and justifiable moral obligation to protect and defend themselves against invaders. To what extent, and in what ways, do we have an analogous moral obligation to carry out research aiming to cure or reduce suffering and disease, which invade our minds and bodies?12
Historically, there can be little doubt that an abiding goal of medicine has been the relief of pain and suffering; it has always been considered a worthy and highly defensible goal. The same can be said of medical research that aims to implement that goal. It is a valid and valuable good, well deserving of public support. As a moral proposition it is hard to argue with the idea that, as human beings, we should do what we can to relieve the human condition of avoidable disease and disability. Research has proved to be a splendid way of doing that.
So the question is not whether research is a good. Yes, surely. But we need to ask how high and demanding a good it is. Is it a moral imperative? Do any circumstances justify setting aside ethical safeguards and principles if they stand in the way of worthy research? And how does the need for research rank with other social needs?
The long-honored moral principle of beneficence comes into play here, as a general obligation to help those in need when we can do so. Philosophically, it has long been held that there are perfect and imperfect obligations. The former entail obligations with corresponding rights: I am obliged to do something because others have rights to it, either because of contractual agreements or because my actions or social role generate rights that others can claim against me. Obligations in the latter category are imperfect because they are nonspecific: no one can make a claim that we owe to them a special duty to carry out a particular action on their behalf.13
Medical research has historically fallen into that latter category. There has long been a sense that beneficence requires that we work to relieve the medical suffering of our fellow human beings, as well as a felt obligation to pursue medical knowledge to that end. But it is inevitably a general, imperfect obligation rather than a specific, perfect obligation: no one can claim a right to insist that I support research that might cure him of his present disease at some point in the future. Even less can it be said that there is a right on the part of those not yet sick who someday might be (e.g., those at risk of cancer), to demand that I back research that might help them avoid getting sick. Nor can a demand be made on a researcher that it is his or her duty to carry out a specific kind of research that will benefit a specific category of sick people.
This is not to say that a person who takes on the role of researcher and has particular knowledge and skills to combat disease has no obligation to do so. On the contrary, the choice of becoming a researcher (or doctor, or firefighter, or lawyer) creates role obligations, and it would be legitimate to insist that medical researchers have a special duty to make good use of their skills toward the cure of the sick. But it is an imperfect obligation because no individuals can claim the right to demand that a particular researcher work on their specific disease. At most, there is an obligation to discharge a moral role by using research skills and training responsibly to work on some disease or another. Even here, however, we probably would not call a researcher who chose to carry out basic research but had no particular clinical application in mind an irresponsible researcher.
These are not mere ethical quibbles or hair-splitting. If the language of an "imperative" applies, we can reasonably ask who exactly has the duty to carry out that imperative and who has the right to demand that someone do so. If we cannot give a good answer to those questions, we might still want to argue that it would be good (for someone) to do such research and that it would be virtuous of society to support it. But we cannot then meaningfully use the language of a "moral imperative." We ought to act in a beneficent way toward our fellow citizens, but there are many ways of doing that, and medical research can claim no more of us than many other worthy ways of spending our time and resources. We could be blamed if we spent a life doing nothing for others, but it would be unfair to blame us if we chose good works other than supporting, much less personally pursuing, medical research. Hence a claim that there is any kind of research—such as medical research—that carries a prima facie imperative to support and advance it distorts a main line of Western moral philosophy.
The late philosopher Hans Jonas put the matter as succinctly as anyone:
Let us not forget that progress is an optional goal, not an unconditional commitment, and that its tempo in particular, compulsive as it may become, has nothing sacred about it. Let us also remember that a slower progress in the conquest of disease would not threaten society, grievous as it is to those who have to deplore that their particular disease be not conquered, but that society would indeed be threatened by the erosion of those moral values whose loss, possibly caused by too ruthless a pursuit of scientific progress, would make its most dazzling triumphs not worth having.14

In another place Jonas wrote, "The destination of research is essentially melioristic. It does not serve the preservation of the existing good from which I profit myself and to which I am obligated. Unless the present state is intolerable, the melioristic goal is in a sense gratuitous, and this not only from the vantage point of the present. Our descendants have a right to be left an unplundered planet; they do not have a right to new miracle cures."15
In the category of "intolerable" states would surely be rapidly spreading epidemics, taking thousands of young lives and breaking down the social life and viability of many societies. AIDS in poor countries, and some classic earlier plagues, assault societies as a whole, damaging and destroying their social infrastructure. But though they bring terrible individual suffering, few other diseases—including cancer and heart disease—can be said now to threaten the well-being and future viability of any developed society as a society. They do not require the obsession with victory that ignores moral niceties and surfaces at times in the present war against disease, where, to paraphrase Lord Sainsbury's words, its benefits outweigh ethical concerns.
Jonas was by no means an enemy of research. Writing in the context of the 1960s debate on human-subject research, he insisted that the cost in time lost because of regulatory safeguards to protect the welfare of research subjects was a small, but necessary, price to pay to preserve important moral values and to protect the good name of research itself. But he was also making a larger point about absolutizing disease, as if no greater evil existed, in order to legitimize an unbounded assault. Not everything that is good and worthy of doing, as is research, ought to be absolutized. That view distorts a prudent assessment of human need, inviting linguistic hyperbole and excessive rationalization of dubious or indefensible conduct.16 Moreover, like any other social good, medical research has its own opportunity costs (as an economist would put it); that is, the money that could be spent on medical research to improve the human condition could also be spent on something else that would bring great benefits as well, whether public health, education, job-creating research, or other forms of scientific research, such as astronomy, physics, and chemistry.
Health may indeed be called special among human needs. It is a necessary precondition to life. But at least in developed countries, with high general levels of health for most people for most of their lives, that precondition is now largely met at the societal, if not the individual, level; other social goods may legitimately compete with it. With the exception of plagues, no disease or medical condition can claim a place as an evil that must be erased, as a necessary precondition for civilization, though many would surely be good to erase.
Hardly anyone in medical research is likely to deny the truth of those assertions, but no one is eager to introduce them to public debate. One way to absolutize, and then abuse, medical research is to turn the evils it aims to erase into nasty devils, evil incarnate. In this way good and just wars often descend to nasty and immoral wars. The weapons of war, including those brought to bear against disease, then easily become indispensable, for no other choice is available. The language of war, or moral imperative, can thus be hazardous to use, giving too high a moral and social place to overcoming death, suffering, and disease. It becomes "too high" when it begins to encroach upon, or tempt us to put aside, other important values, obligations, and social needs.
Nonetheless, there is a way of expressing a reasonable moral obligation that need not run those dangers. It is to build on and incorporate into thinking about research the most common arguments in favor of universal health care, that is, the provision of health care to all citizens regardless of their ability to pay for that care. There are different ways of expressing the underlying moral claim: as a right to health care, which citizens can claim against the state; as an obligation on the part of the state to provide health care; and as a commitment to social solidarity. The idea of a "right" to health care has not fared well in the United States, which is one reason the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research used the language of a governmental obligation to provide health care in 1984.17 A characteristic way of putting the rights or obligations is in terms of justice. As one of the prominent proponents of a just and universal-care system, Norman Daniels, has put it, "by keeping people close to normal functioning, healthcare preserves for people the ability to participate in the political, social, and economic life of their society."18 To this I add the ability to participate in the family and private life of communities. The aim in Daniels's view is that of a "fair equality of opportunity."
The concept of "solidarity," rarely a part of American political thought, is strong in Canada and Western Europe. It focuses on the need of those living together to support and help one another, to make of themselves a community by putting in place those health and social resources necessary for people to function as a community.19 The language of rights and obligations characteristically focuses on the needs of the individual. The language of solidarity is meant to locate the individual within a community and, with health care, to seek a communal and not just an individual good.
It is beyond the scope of this book to take up in any further detail those various approaches to the provision of health care. It is possible, however, to translate that language and those approaches into the realm of medical research. We can ask whether, if there is a social obligation to provide health care—meaning those diagnostic, therapeutic, and rehabilitative capabilities that are presently available—there is a like obligation to carry out research to deal with those diseases and medical conditions that at present are not amenable to treatment. "Fair equality of opportunity," we might argue, should reach beyond those whose medical needs can be met with available therapies to encompass others who are not in this lucky circle. Justice requires, we might add, that they be given a chance as well and that research is necessary to realize it.
Three important provisos are necessary. First, rationing must be a part of any universal health-care system: no government can afford to make available to everyone everything that might meet their health-care needs. Resource limitations of necessity require the setting of priorities—for the availability of research funds and health-care-delivery funds alike. Second, no government can justify investments in research that would knowingly end in treatments or therapies it could not afford to extend to all citizens or ones available only privately to those with the money to pay for them (chapters 9 and 10 pursue these two points). Third, neither medical research nor health-care delivery is the sole determinant of health: social, economic, and environmental factors have a powerful role as well.
Instead of positioning medical research as a moral imperative we can understand it as a key part of a vision of a good society. A good society is one interested in the full welfare of its citizens, supportive of all those conditions conducive to individual and communal well-being. Health would be an obviously important component of such a vision, but only if well integrated with other components: jobs, social security, family welfare, social peace, and environmental protection. No one of those conditions, or any others we could plausibly posit, is both necessary and sufficient; each is necessary but none is sufficient. It is the combination that does the work, not the individual pieces in isolation. Research to improve health would be a part of the effort to achieve an integrated system of human well-being. But neither perfect health nor the elimination of all disease is a prerequisite for a good twenty-first-century society—better health is a prerequisite only to the extent that poor health is a major obstacle to pursuing other goods. Medical researchers and the public can be grateful that the budget of the NIH has usually outstripped other science and welfare budgets in its annual increases. But the NIH budget does not cover the full range of our social needs, which might benefit from comparable increases in programs devoted to them.
In the remainder of this chapter, I turn to death and aging, both fine case studies to begin my closer look at the research imperative. Long viewed as evils of a high order, they now often serve as stark examples of evil that research should aim to overcome. Two of my closest friends died of cancer and another of stroke during the year in which I wrote this book, so I do not separate what I write here from my own reflections. I wish they were still alive and I have mixed feelings about getting old, not all of them optimistic.
I choose death and aging as my starting point in part because, though both fixed inevitabilities, they are not alike. Death is an evil in and of itself with no redeeming features (unless, now and then, as surcease from pain). With aging, by contrast, the evil is there—who wants it and who needs it?—but the flavor is one of annoyed resignation, of an evil we (probably) can't avoid but, if we let our imaginations roam, we might understand differently or even forestall.20 Hence the fight against death is imperative, and the fight against the diseases of aging worthy and desirable, even if it does not quite make the heavyweight class of death.
The War Against Death
By far the most important opponent in modern medical warfare is death. The announcement of a decline in mortality rates from various diseases is celebrated as the greatest of medical victories, and it is no accident that the NIH has provided the most research money over the years to those diseases that kill the most people, notably cancer, strokes, and heart disease. Oddly enough, however, the place of death in human life, or the stance that medicine ought, ideally or theoretically, to take toward death, has received remarkably little discussion. The leading medical textbooks hardly touch the topic at all other than (and only recently) the care of the terminally ill.21 While death is no longer the subject no one talks about, Susan Sontag was right to note that it is treated, if at all, as an "offensively meaningless event"—and, I would add, fit only to be fought.22
Of course this attitude is hardly difficult to understand. Few of us look forward to our death, most of us fear it, and almost all of us do not know how to give it plausible meaning, whether philosophical or religious. Death is the end of individual consciousness, of any worldly hopes and relationship with other people. Unless we are overburdened with pain and suffering, there is not much good that can be said about death for individual human beings, and most people are actually willing to put up with much suffering rather than give up life altogether. Death has been feared and resisted and fought, and that seems a perfectly sensible response.
Yet medicine does more than resist death or fight it. Death is, after all, a fact of biological existence and, since humans are at least organic, biological creatures, we have to accept it. Death is just there, built into us, waiting only for the necessary