What Price Better Health?

Hazards of the Research Imperative

Daniel Callahan (Author)

Available worldwide

Paperback, 341 pages
ISBN: 9780520246645
January 2006
$29.95, £19.95
The idea that we have an unlimited moral imperative to pursue medical research is deeply rooted in American society and medicine. In this provocative work, Daniel Callahan exposes the ways in which such a seemingly high and humane ideal can be corrupted and distorted into a harmful practice.

Medical research, with its power to attract money and political support, and its promise of cures for a wide range of medical burdens, has good and bad sides—which are often indistinguishable. In What Price Better Health?, Callahan teases out the distinctions and differences, revealing the difficulties that result when the research imperative is suffused with excessive zeal, adulterated by the profit motive, or used to justify cutting moral corners. Exploring the National Institutes of Health's annual budget, the inflated estimates of health care cost savings that result from research, the high prices charged by drug companies, the use and misuse of human subjects for medical testing, and the controversies surrounding human cloning and stem cell research, Callahan clarifies the fine line between doing good and doing harm in the name of medical progress. His work shows that medical research must be understood in light of other social and economic needs and how even the research imperative, dedicated to the highest human good, has its limits.
Foreword by Daniel M. Fox and Samuel L. Milbank
Acknowledgments
Introduction: An Imperative?

1. The Emergence and Growth of the Research Imperative
2. Protecting the Integrity of Science
3. Is Research a Moral Obligation?
4. Curing the Sick, Helping the Suffering, Enhancing the Well
5. Assessing Risks and Benefits
6. Using Humans for Research
7. Pluralism, Balance, and Controversy
8. Doing Good and Doing Well
9. Advocacy and Priorities for Research
10. Research and the Public Interest

Notes
Index
Daniel Callahan is Director of the International Program at the Hastings Center and Senior Fellow at Harvard Medical School. He is the author of False Hopes (1998), The Troubled Dream of Life (1993), What Kind of Life? (1990), and Setting Limits (1987).
In 2011, Callahan received the Matteo Ricci, S.J. Award for his contributions to Christian culture.
“...explores many of the most important ethical issues... offers clinical researchers an important ethical guide.”—Sara Rosenbaum, Journal of Clinical Investigation
"Callahan has written an important book. The research imperative may not be quite as invulnerable as he thinks, but it is certainly imperative that the case he makes against it be given the close and thoughtful attention that his book provokes."—Arthur Caplan, Nature
"One of the most interesting and detailed among recent efforts to examine the history and modern scope of American medical research."—Stanley J. Reiser, M.D., Ph.D., New England Journal of Medicine
"With a masterly command of the policy and economic literature, Callahan traces the subtler threats posed by the imperative for medical research during the last two decades."—Andrew Lustig, Commonweal
“Callahan inspires the reader with an interesting, readable, and comprehensive account of the prominent position of medical research in contemporary society.”—Charlotte Williams, M.D., Psychiatric Services
"The book contains so much thoughtful discussion and valuable insight that it is bound to become a staple in courses on science policy and a must-read for anyone concerned with medical research and, indeed, with health policy at large."—Uwe E. Reinhardt, Science (AAAS)
"This book is of special importance. Callahan brings together in one volume the history of biomedical research in the U.S., a discussion of the goals, process, and conduct of biomedical research, and a compelling proposal for reforming the balance between research and public health policies."—Dorothy Rice, coauthor of The Dynamics of Disability: Measuring and Monitoring Disability for Social Security Programs

"One of the foremost bioethicists of our age questions the central dogmas of biomedical research, namely that more science necessarily delivers a better life and that aging is a preventable disease. Callahan brilliantly deconstructs the myths behind medical research; his arguments and socratic inquiry will shake your complacency as it did my own."—Sheldon Krimsky, author of Science in the Private Interest

"This book is the fruit of many years of reflection by one who has been at the center of the bioethics movement in this country. Managing to be simultaneously readable and knowledgeable, Callahan has also not been afraid to be provocative. His book will be required reading for all who want to ponder the ethics of research."—Gilbert Meilaender, author of Body, Soul and Bioethics

Excerpt: Is Research a Moral Obligation?

 

In 1959 Congress passed a "health for peace" bill, behind which was a view of disease and disability as "the common enemy of all nations and peoples."1 In 1971 President Nixon declared a "war" against cancer. Speaking of a proposal in Great Britain in 2000 to allow stem-cell research to go forward, Science Minister Lord Sainsbury said, "The important benefits which can come from this research outweigh any other considerations," a statement that one newspaper paraphrased as outweighing "ethical concerns."2 Arguing for the pursuit of potentially hazardous germ-line therapy, Dr. W. French Anderson, editor-in-chief of Human Gene Therapy, declared that "we as caring human beings have a moral mandate to cure disease and prevent suffering."3 A similar note was struck in an article by two ethicists who held that there is a "prima facie moral obligation" to carry out research on germ-cell gene therapy.4

As if that were not enough, in 1999 a distinguished group of scientists, including many Nobel laureates, issued a statement urging federal support of stem-cell research. The scientists said that because of its "enormous potential for the effective treatment of human disease, there is a moral imperative to pursue it."5 Two other ethicists said much the same, speaking of "the moral imperative of compassion that compels stem cell research," and adding that at stake are the "criteria for moral sacrifices of human life," a possibility not unacceptable to them.6 The Human Embryo Research Panel, created by the National Institutes of Health, contended in 1994 that federal funding to create embryos for research purposes should be allowed to go forward "when the research by its very nature cannot otherwise be validly conducted," and "when the fertilization of oocytes is necessary for the validity of a study that is potentially of outstanding scientific and therapeutic value."7

The tenor of these various quotations is clear. The proper stance toward disease is that of warfare, with unconditional surrender the goal. Ethical objections, when they arise, should give way to the likely benefits of research, even if the benefits are still speculative (as with stem-cell and germ-line research). The argument for setting aside ethical considerations when research could not otherwise be "validly conducted" is particularly striking. It echoes an objection many researchers made during the 1960s to the imminent regulation of human-subject research: regulations would cripple research. That kind of reasoning is the research imperative in its most naked—and hazardous—form, the end unapologetically justifying the means. I am by no means claiming that most researchers or ethicists hold such views. But that reasoning is one of the "shadows" this book is about.

What should be made of this way of thinking about the claims of research? How appropriate is the language of warfare, and how extensive and demanding is the so-called moral imperative of research? I begin by exploring those questions and then move on to the wars against death and aging, two fundamental, inescapable biological realities so far—and two notorious and clever foes.

 

The Metaphor of "War"

Since at least the 1880s—with the identification of bacteria as agents of disease—the metaphor of a "war" against illness and suffering has been popular and widely deployed. Cancer cells "invade" the body, "war stories" are a feature of life "in the trenches" of medicine, and the constant hope is for a "magic bullet" that will cure disease in an instant.8 Since there are surely many features of medicine that may be likened to war, the metaphor is hardly far-fetched, and it has proved highly serviceable time and again in the political effort to gain money for research.

Less noticed are the metaphor's liabilities, inviting excessive zeal and a cutting of moral corners. The legal scholar George Annas has likened the quest for a cure of disease to the ancient search for the Holy Grail: "Like the knights of old, a medical researcher's quest of the good, whether that be progress in general or a cure for AIDS or cancer specifically, can lead to the destruction of human values we hold central to a civilized life, such as dignity and liberty."9 "Military thinking," he has also written, "concentrates on the physical, sees control as central, and encourages the expenditure of massive resources to achieve dominance."10 The literary critic Susan Sontag, herself a survivor of cancer, has written, "We are not being invaded. The body is not a battlefield. . . . We—medicine, society—are not authorized to fight back by any means possible. . . . About that metaphor, the military one, I would say, if I may paraphrase Lucretius: Give it back to the war-makers."11

While some authors have tried to soften the metaphor by applying a just war theory to the war against disease (a sensible enough effort), the reality of warfare does not readily lend itself to a respect for nuanced moral theory. Warriors get carried away with the fight, trading nasty blow for nasty blow, single-mindedly considering their cause self-evidently valid, shrugging aside moral sensitivities and principles as eminently dispensable when so much else of greater value is thought to be at stake. It is a dangerous way of thinking, all the more so when—as is the case with so much recent research enthusiasm—both the therapeutic benefits and the social implications are uncertain.

Is Research a Moral Obligation?

Yet if the metaphor of war is harmful, lying behind it is the notion of an insistent, supposedly undeniable moral obligation. Nations go to war, at least in just wars, to defend their territory, their values, their way of life. They can hardly do otherwise than consider the right to self-defense to be powerful, a demanding and justifiable moral obligation to protect and defend themselves against invaders. To what extent, and in what ways, do we have an analogous moral obligation to carry out research aiming to cure or reduce suffering and disease, which invade our minds and bodies?12

Historically, there can be little doubt that an abiding goal of medicine has been the relief of pain and suffering; it has always been considered a worthy and highly defensible goal. The same can be said of medical research that aims to implement that goal. It is a valid and valuable good, well deserving of public support. As a moral proposition it is hard to argue with the idea that, as human beings, we should do what we can to relieve the human condition of avoidable disease and disability. Research has proved to be a splendid way of doing that.

So the question is not whether research is a good. Yes, surely. But we need to ask how high and demanding a good it is. Is it a moral imperative? Do any circumstances justify setting aside ethical safeguards and principles if they stand in the way of worthy research? And how does the need for research rank with other social needs?

The long-honored moral principle of beneficence comes into play here, as a general obligation to help those in need when we can do so. Philosophically, it has long been held that there are perfect and imperfect obligations. The former entail obligations with corresponding rights: I am obliged to do something because others have rights to it, either because of contractual agreements or because my actions or social role generate rights that others can claim against me. Obligations in the latter category are imperfect because they are nonspecific: no one can make a claim that we owe to them a special duty to carry out a particular action on their behalf.13

Medical research has historically fallen into that latter category. There has long been a sense that beneficence requires that we work to relieve the medical suffering of our fellow human beings, as well as a felt obligation to pursue medical knowledge to that end. But it is inevitably a general, imperfect obligation rather than a specific, perfect obligation: no one can claim a right to insist that I support research that might cure him of his present disease at some point in the future. Even less can it be said that there is a right on the part of those not yet sick who someday might be (e.g., those at risk of cancer) to demand that I back research that might help them avoid getting sick. Nor can a demand be made on a researcher that it is his or her duty to carry out a specific kind of research that will benefit a specific category of sick people.

This is not to say that a person who takes on the role of researcher and has particular knowledge and skills to combat disease has no obligation to do so. On the contrary, the choice of becoming a researcher (or doctor, or firefighter, or lawyer) creates role obligations, and it would be legitimate to insist that medical researchers have a special duty to make good use of their skills toward the cure of the sick. But it is an imperfect obligation because no individuals can claim the right to demand that a particular researcher work on their specific disease. At most, there is an obligation to discharge a moral role by using research skills and training responsibly to work on some disease or another. Even here, however, we probably would not call a researcher who chose to carry out basic research but had no particular clinical application in mind an irresponsible researcher.

This is no mere ethical quibbling or hair-splitting. If the language of an "imperative" applies, we can reasonably ask who exactly has the duty to carry out that imperative and who has the right to demand that someone do so. If we cannot give a good answer to those questions, we might still want to argue that it would be good (for someone) to do such research and that it would be virtuous of society to support it. But we cannot then meaningfully use the language of a "moral imperative." We ought to act in a beneficent way toward our fellow citizens, but there are many ways of doing that, and medical research can claim no more of us than many other worthy ways of spending our time and resources. We can be blamed if we spend a life doing nothing for others, but it would be unfair to blame us if we choose to do other good works than support, much less personally pursue, medical research. Hence a claim that there is any kind of research—such as medical research—that carries a prima facie imperative to support and advance it distorts a main line of Western moral philosophy.

The late philosopher Hans Jonas put the matter as succinctly as anyone:

Let us not forget that progress is an optional goal, not an unconditional commitment, and that its tempo in particular, compulsive as it may become, has nothing sacred about it. Let us also remember that a slower progress in the conquest of disease would not threaten society, grievous as it is to those who have to deplore that their particular disease be not conquered, but that society would indeed be threatened by the erosion of those moral values whose loss, possibly caused by too ruthless a pursuit of scientific progress, would make its most dazzling triumphs not worth having.14

In another place Jonas wrote, "The destination of research is essentially melioristic. It does not serve the preservation of the existing good from which I profit myself and to which I am obligated. Unless the present state is intolerable, the melioristic goal is in a sense gratuitous, and this not only from the vantage point of the present. Our descendants have a right to be left an unplundered planet; they do not have a right to new miracle cures."15

In the category of "intolerable" states would surely be rapidly spreading epidemics, taking thousands of young lives and breaking down the social life and viability of many societies. AIDS in poor countries, and some classic earlier plagues, assault society as a whole, damaging and destroying their social infrastructure. But though they bring terrible individual suffering, few other diseases—including cancer and heart disease—can be said now to threaten the well-being and future viability of any developed society as a society. They do not require an obsession with victory that ignores moral niceties and surfaces at times in the present war against disease, where, to paraphrase Lord Sainsbury's words, its benefits outweigh ethical concerns.

Jonas was by no means an enemy of research. Writing in the context of the 1960s debate on human-subject research, he insisted that the cost in time lost because of regulatory safeguards to protect the welfare of research subjects was a small, but necessary, price to pay to preserve important moral values and to protect the good name of research itself. But he was also making a larger point about absolutizing disease, as if no greater evil existed, in order to legitimize an unbounded assault. Not everything that is good and worthy of doing, as research is, ought to be absolutized. That view distorts a prudent assessment of human need, inviting linguistic hyperbole and excessive rationalization of dubious or indefensible conduct.16 Moreover, like any other social good, medical research has its own opportunity costs (as an economist would put it); that is, the money that could be spent on medical research to improve the human condition could also be spent on something else that would bring great benefits as well, whether public health, education, job-creating research, or other forms of scientific research, such as astronomy, physics, and chemistry.

Health may indeed be called special among human needs. It is a necessary precondition to life. But at least in developed countries, with high general levels of health for most people for most of their lives, that precondition is now largely met at the societal, if not the individual, level; other social goods may legitimately compete with it. With the exception of plagues, no disease or medical condition can claim a place as an evil whose erasure is a necessary precondition for civilization, though many would surely be good to erase.

Hardly anyone in medical research is likely to deny the truth of those assertions, but no one is eager to introduce them to public debate. One way to absolutize, and then abuse, medical research is to turn the evils it aims to erase into nasty devils, evil incarnate. In this way good and just wars often descend to nasty and immoral wars. The weapons of war, including those brought to bear against disease, then easily become indispensable, for no other choice is available. The language of war, or moral imperative, can thus be hazardous to use, giving too high a moral and social place to overcoming death, suffering, and disease. It becomes "too high" when it begins to encroach upon, or tempt us to put aside, other important values, obligations, and social needs.

Nonetheless, there is a way of expressing a reasonable moral obligation that need not run those dangers. It is to build on and incorporate into thinking about research the most common arguments in favor of universal health care, that is, the provision of health care to all citizens regardless of their ability to pay for that care. There are different ways of expressing the underlying moral claim: as a right to health care, which citizens can claim against the state; as an obligation on the part of the state to provide health care; and as a commitment to social solidarity. The idea of a "right" to health care has not fared well in the United States, which is one reason the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research used the language of a governmental obligation to provide health care in 1984.17 A characteristic way of putting the rights or obligations is in terms of justice. As one of the prominent proponents of a just and universal-care system, Norman Daniels, has put it, "by keeping people close to normal functioning, healthcare preserves for people the ability to participate in the political, social, and economic life of their society."18 To this I add the ability to participate in the family and private life of communities. The aim in Daniels's view is that of a "fair equality of opportunity."

The concept of "solidarity," rarely a part of American political thought, is strong in Canada and Western Europe. It focuses on the need of those living together to support and help one another, to make of themselves a community by putting in place those health and social resources necessary for people to function as a community.19 The language of rights and obligations characteristically focuses on the needs of the community. The language of solidarity is meant to locate the individual within a community and, with health care, to seek a communal and not just an individual good.

It is beyond the scope of this book to take up in any further detail those various approaches to the provision of health care. It is possible, however, to translate that language and those approaches into the realm of medical research. We can ask whether, if there is a social obligation to provide health care—meaning those diagnostic, therapeutic, and rehabilitative capabilities that are presently available—there is a like obligation to carry out research to deal with those diseases and medical conditions that at present are not amenable to treatment. "Fair equality of opportunity," we might argue, should reach beyond those whose medical needs can be met with available therapies to encompass others who are not in this lucky circle. Justice requires, we might add, that they be given a chance as well, and research is necessary to realize it.

Three important provisos are necessary. First, rationing must be a part of any universal health-care system: no government can afford to make available to everyone everything that might meet their health-care needs. Resource limitations of necessity require the setting of priorities—for the availability of research funds and health-care-delivery funds alike. Second, no government can justify investments in research that would knowingly end in treatments or therapies it could not afford to extend to all citizens, or ones available only privately to those with the money to pay for them (chapters 9 and 10 pursue these two points). Third, neither medical research nor health-care delivery is the only determinant of health: social, economic, and environmental factors have a powerful role as well.

Instead of positioning medical research as a moral imperative we can understand it as a key part of a vision of a good society. A good society is one interested in the full welfare of its citizens, supportive of all those conditions conducive to individual and communal well-being. Health would be an obviously important component of such a vision, but only if well integrated with other components: jobs, social security, family welfare, social peace, and environmental protection. No one of those conditions, or any others we could plausibly posit, is both necessary and sufficient; each is necessary but none is sufficient. It is the combination that does the work, not the individual pieces in isolation. Research to improve health would be a part of the effort to achieve an integrated system of human well-being. But neither perfect health nor the elimination of all disease is a prerequisite for a good twenty-first-century society; health is a prerequisite only to the extent that its absence is a major obstacle to pursuing other goods. Medical researchers and the public can be grateful that the budget of the NIH has usually outstripped other science and welfare budgets in its annual increases. But the NIH budget does not cover the full range of our social needs, which might benefit from comparable increases in the programs devoted to them.

In the remainder of this chapter, I turn to death and aging, both fine case studies to begin my closer look at the research imperative. Long viewed as evils of a high order, they now often serve as stark examples of evil that research should aim to overcome. Two of my closest friends died of cancer and another of stroke during the year in which I wrote this book, so I do not separate what I write here from my own reflections. I wish they were still alive and I have mixed feelings about getting old, not all of them optimistic.

I choose death and aging as my starting point in part because, though fixed inevitabilities, they are unlike. Death is an evil in and of itself with no redeeming features (unless, now and then, as surcease from pain). With aging, by contrast, the evil is there—who wants it and who needs it?—but the flavor is one of annoyed resignation, of an evil we (probably) can't avoid but, if we let our imaginations roam, we might understand differently or even forestall.20 Hence the fight against death is imperative, and the fight against the diseases of aging worthy and desirable, even if it does not quite make the heavyweight class of death.

 

The War Against Death

By far the most important opponent in modern medical warfare is death. The announcement of a decline in mortality rates from various diseases is celebrated as the greatest of medical victories, and it is no accident that the NIH has provided the most research money over the years to those diseases that kill the most people, notably cancer, strokes, and heart disease. Oddly enough, however, the place of death in human life, and the stance that medicine ought, ideally or theoretically, to take toward it, has received remarkably little discussion. The leading medical textbooks hardly touch the topic at all other than (and only recently) in connection with the care of the terminally ill.21 While death is no longer the subject no one talks about, Susan Sontag was right to note that it is treated, if at all, as an "offensively meaningless event"—and, I would add, fit only to be fought.22

Of course this attitude is hardly difficult to understand. Few of us look forward to our death, most of us fear it, and almost all of us do not know how to give it plausible meaning, whether philosophical or religious. Death is the end of individual consciousness, of any worldly hopes and relationship with other people. Unless we are overburdened with pain and suffering, there is not much good that can be said about death for individual human beings, and most people are actually willing to put up with much suffering rather than give up life altogether. Death has been feared and resisted and fought, and that seems a perfectly sensible response.

Yet medicine does more than resist death or fight it. Death is, after all, a fact of biological existence and, since humans are at least organic, biological creatures, we have to accept it. Death is just there, built into us, waiting only for the necessary conditions to express itself. Why, then, should medicine treat it as an enemy, particularly a medicine that works so hard to understand how the body works and how it relates to the rest of nature? The late physician-essayist Lewis Thomas had an articulate biological sense of death as "a natural marvel. All of the life of the earth dies, all of the time, in the same volume as the new life that dazzles us each morning, each spring. . . . In our way, we conform as best we can to the rest of nature. The obituary pages tell us of the news that we are dying away, while the birth announcements in finer print, off at the side of the page, inform us of our replacements."23

Many biologists and others have pointed out the importance of death as a means of constantly replenishing the vitality and freshness of human life as a species. New people come into the world and thereby open the way for change and development; others die and thus facilitate the new and the novel. Moreover, does our recognition of the finiteness of our lives, the brute fact that they come to an end, not itself sharpen our appreciation of what we have and what we might do to make the most of it? If we had bodily immortality in this world, would not the danger of boredom and tedium be a real possibility? "Nothing less will do for eternity," Bernard Williams has written, "than something that makes boredom unthinkable." And Williams believes it exceedingly difficult to imagine an unendingly satisfying model of immortality.24

Jonas caught what seems to me the essence of the ambivalence about death when he wrote of mortality as, in some inextricable way, both a burden and a blessing: "the gift of subjectivity only sharpens the yes-no polarity of all life, each side feeding on the strength of the other. Is it, in the balance, still a gain, vindicating the bitter burden of mortality to which the gift is tied, which it makes even more onerous to bear?"25 His answer was yes, in part because of the witness of history to the renewal that new lives bring and the passing of the generations makes possible, and in part because it is hard to imagine that a world without death would be a richer biological and cultural world, more open in its possibilities than the world we now have.

Part of the problem is simply that we know nothing beyond the bounds of our own existence: we know that life, when it is good, is good. Only a religious vision of immortality holds out hope of something better. Hence, we hold tight to what we know. Even so, simply extending life is no guarantee that the good we now find in life, at younger ages, would continue indefinitely into the future; boredom, ennui, the tedium of repetition may well weigh us down. Nor is there any guarantee that our bodies would remain free of frailty, late-late-onset dementia, failing organs. Even under the best prospects, there would be hazards, physical and mental, to negotiate for a prize that might hardly be worth winning.

My own conclusion is this: while it makes sense for medicine to combat some causes and forms of death, it makes no sense to consider death as such the enemy. To give it a permanent priority distorts the goals of medicine, taking money from research that could improve the quality of life. And so far in human history, however much we spend to combat death, it always wins in the long run. There will always be what I called elsewhere the "ragged edge of progress"—that point where our present knowledge and technology run out, with illness and death returning; and however much progress is made, there will always be such a point.26 No matter how far we go, and how successful we are in the war on death, people will continue to die, and they will die of some lethal disease or other that disease research has yet to master. The most serious questions we need to consider are how much emphasis research should place on the forestalling of death, and just which kinds of death it should tackle.

Medicine's Schism about Death

One obstacle delays any attempt to answer those questions. At the heart of modern medicine is a schism over the place and meaning of death in human life. On one side is the research imperative to overcome death; on the other, the newly emergent (even if historically ancient) clinical imperative to accept death as a part of life in order to help make dying as tolerable as possible. Reflecting medicine's fundamental ambivalence about how to interpret and deal with death, the schism has untoward consequences for setting medical research priorities and for understanding medicine's appropriate stance toward death in the care of patients. My question is this: if this schism is truly present, and if it creates research pressures that generate serious clinical problems, are there ways to soften its impact, to lessen the friction, and to find a more coherent understanding of death?

In the classical world death was not medicine's enemy. It could not be helped. Only with the modern era, and the writings of René Descartes and Francis Bacon in the sixteenth and seventeenth centuries, did the goal of a medical struggle against death emerge.27 The earlier cultural and religious focus was on finding a meaning for death, giving it a comprehensible place in human experience, and making the passage from life to death as comfortable as possible.28 The post-Baconian medicine put aside that search. It declared death the enemy. Karl Marx once said that the task of philosophy is not to understand the world but to change it. Modern medicine, to paraphrase Marx, has seemed in effect to say that its task is not to understand death but to eliminate it. The various "wars" against cancer and other diseases in recent decades reflect that mission. For what is the logic of an unrelenting war against all lethal disease other than trench warfare against death itself?

The tacit message behind the research imperative is that, if death itself cannot be eliminated—no one is so bold as to claim that—then at least all the diseases that cause death can be done away with; and that amounts to the same thing. As William Haseltine, chairman and chief executive officer of Human Genome Sciences, breathtakingly put it, "Death is a series of preventable diseases."29 From this perspective, the researcher is like a fine sharpshooter who will pick off enemies one by one: cancer, then heart disease, then diabetes, then Alzheimer's, and so on. The human genome effort, the latest contender offering eventual cures for death, will supposedly get to the genetic bottom of things, radically improving the sharpshooter's aim.30

I chose the word "logic" for the research enterprise that aims to eliminate all the known causes of death, in order to point out that its ultimate enemy must be death itself, the final outcome of that effort. But most researchers and physicians, in my experience, do not see themselves as trying to stamp out death itself, even if they would like to understand and overcome its causes. They know that death is now, and will remain, part of the human condition; medicine is not in hot pursuit of immortality. Even so, the struggle against the causes of death continues, as if researchers must and will continue until they eliminate those causes. Perhaps this tension, or contradiction, is best understood as an expression of an ideal of research confronting a biological reality: the spirit of the research enterprise is to eliminate the causes of death, even as it is understood that death itself will not be eliminated. We might, then, think of the struggle against death as a goal we may never achieve, a dream we may never realize. However we understand this phenomenon, it has its effect at the clinical level.

But why should this dream affect the care of those who are dying, having passed beyond the limits of effective help? For one thing, as already mentioned, it has turned out to be very difficult, medically and psychologically, to trace a bright line (as a lawyer might put it) between living and dying. The increased technological possibility of doing just a little bit more, and then just a little bit more again, to sustain life means that it's getting harder and harder to tell just where that line is. Moreover, the thrust of the research drive is to turn death itself into a contingent, accidental event. Why do people keep dying? Listen to the now-common explanations: they die because they did not take care of their health, or because they had genetically unhealthy parents, or because their care was of a low quality, or because the available care is inequitably distributed, or because this year's technologies don't sufficiently sustain life (but not necessarily next year's), or because research has not yet (but will eventually) find cures for those diseases currently killing us. No one just dies anymore, and certainly not from something as vague as "old age." Everyone dies from specific causes, and we can cure them. Death, in that sense, has been rendered contingent and accidental.

The Clinical Spillover

What difference does this drive or dream make for the clinician at the bedside? Such is the pervasive power of the research imperative (even of a benign kind)—rooted in a vision of endless progress and permeating modern (and particularly American) medicine—that it can easily lead clinicians to think and act as if the death of this patient at this time is accidental or a failure, not inevitable. Guilt is perhaps one spillover effect of the research stance toward death: even if clinicians have done everything possible to keep the patient alive, maybe they could have, even should have, done more—if they had only known what it was. The technological imperative is still another spillover effect, indicating, understood narrowly, a belief that if we use technology well, this patient need not die at this time and, understood broadly, that technological innovation is the royal road to cure.

In the United States, the research imperative to fight death stands foursquare against fatalism, against giving up hope, and against thinking we cannot tame nature. Should we be surprised that its mode of thinking influences clinical medicine as well, introducing profound uncertainties about the appropriate stance toward death? Can we really expect the various reform efforts in clinical care to be as successful as they might so long as the research-induced uncertainty about the inevitability of death is so powerful?

At this point two skeptical thoughts are sure to arise. One of them is a point of logic: the biological inevitability of death does not entail its inevitable occurrence at any given point in life. We will die, but just how and when is not at all determined. Death is possible at any time by any means, coming faster or slower, brought about by one disease rather than another. In that sense death is, then, contingent. It has no predetermined, fixed time in a person's life. Since this is true, is progress possible simply by substituting later for earlier death, faster for slower, peaceful for painful? Not quite. If "later" is always assumed to be better, then the war against death admits of no victory and the research imperative against it admits of no limits. If, however, the wiser goal is a faster and more peaceful death—admitting of potential success in a way that an all-out struggle against death does not—then a more useful research agenda is possible.

The second skeptical thought is more fundamental. Perhaps the research imperative (eliminate death, disease by disease) and the clinical imperative (accept death as an unavoidable biological reality) are inescapably at odds. Perhaps they represent one of the many instances of incompatible goods that admit of no happy reconciliation. We may just have to live with the contradiction, conceding its force but remaining helpless to get beyond it. Though most of us can think of elderly people who have found an equilibrium—working to stay alive, yet ready at any moment to die—not everyone can, particularly a younger person facing a premature death. We want to live but know we must die, an ancient and wrenching clash.

There is no easy way beyond this clash. I find a quotation of the theologian Gilbert Meilaender (though not, I think, a theological statement) to be helpful: "We can say death is no enemy at all, or we can say that death is the ultimate enemy. Neither of these does justice to what I take to be the truth: that death is an enemy because human life is a great good, but that since continued life is not the highest good, death cannot be the greatest evil."31

Is a longer life necessarily a better life? A shorter life holds fewer possibilities of experiencing the goods of life than a longer life might afford. But on that view (assuming continued good health) nothing less than an indefinitely continued life will do, as goal after goal appears on the horizon. Yet in general, the fact that good things—poems, music, pleasant vacations, glorious sunsets—end does not subvert their value. If finitude is not inherently evil, then neither is a finite life span.

The Mixed Record of Reform

Given that background of debate, the fitful success of various efforts over recent decades to improve care at the end of life and promote a different outlook on death is easy to understand. Beginning in the early to mid-1970s, there were three major reform efforts. The first was the effort to introduce advance directives into patient care, a strategy designed to give patients choice about the kind of care they receive when dying. The second was the hospice movement, pioneered by Cicely Saunders in Great Britain and introduced at the Yale-New Haven Hospital in 1974. The third effort was to improve the education of medical students and residents on care at the end of life.

Of the three, hospice is probably the most successful, caring for over 500,000 patients a year (their deaths represent about 20 percent of the 2.3 million annual deaths). But hospice services have been mainly effective with cancer patients, even though there have been recent efforts to extend them to other lethal conditions, heart disease and Alzheimer's in particular. There is general agreement, moreover, that many terminally ill patients come to hospice much too late, sometimes just a few days before their deaths. Neither families nor physicians are always ready to accept death. Advance directives have had at best a mixed record. Despite considerable publicity for twenty-five years, probably no more than 15 percent of the population have such directives. Even worse, as a number of studies have shown, having them by no means guarantees that patients get what they want.32

Death is still denied, evaded, and, in the case of many clinicians, fought to the end, bitter or otherwise, for patients. As for the educational efforts, they have surely given the issues more salience in medical schools, but what students learn in didactic courses or seminars is often at odds with their experience during their clinical years, where the technological imperative—to aggressively use the available life-sustaining technologies—may still reign supreme. A recent survey of medical textbooks found the subject of death strikingly absent, with little guidance for physicians in the care of dying patients.33

An important thread running through each of the struggling reform efforts has been medicine's characteristic ambivalence toward death: patients' and physicians' confusion about how best to understand and situate death in human life; an unwillingness to accept the coming of death; and the persistence of the turn to intensified technology in response to uncertainty about death. The great improvement in, and the new prominence of, palliative care is a powerful antidote to that pattern, representing both a return to older traditions of care and a fresh, less troubled response to death.

This record of mixed success has of late been met with a renewed effort at analysis and education. The Project on Death in America program of the Soros Foundation, and the Last Acts Campaign of the Robert Wood Johnson Foundation, have contributed generously to that work. It is too early to tell what this new round of initiatives, though most welcome, will achieve. If the schism persists, their success is likely to remain limited.

The conventional model of treatment, even if rarely articulated in any precise way, is to undertake every effort possible to save life until that moment when treatment becomes futile and a palliative mode replaces the therapeutic. What's wrong with that model? For all its seeming reasonableness, it is beset by two confounding elements. One is the difficulty of determining when treatment is truly futile.34 Constant technological advances mean that there is almost always something more that can be done for even the sickest patient, one more last, desperate intervention. The other lies in assuming that physicians—not to mention patients and their families—can suddenly, and at just the right moment, switch from an interventionist to a palliative mode; that assumption betrays a psychological naïveté.35 The change is often much more like an attempt to stop a large train, which travels a long distance down the track before the brakes take hold.

A Modest Proposal

Inescapably, the research imperative complicates and even undermines medicine's clinical mission of better end-of-life care. How might we proceed to bring the two sides closer? After nearly thirty years of analysis and reform efforts, the clinical side has determined that a peaceful death requires an acceptance of death by both physician and patient. The acceptance may be affirming or grudging or simply acquiescent, but it is essential: death just is, and it must be given its due. The research drive, which seems to treat death as a biological accident possible to overcome, must somehow be reconciled with the clinical perspective. To bring it into cooperation with the clinical mission, I propose several strategies.

Focus research on premature death. Not only is the eradication of death an unattainable goal; its pursuit also promotes the idea among the public and physicians that death represents a failure of medicine, one that research will eventually overcome. It is, however, reasonable for medicine to seek to reduce premature death. The federal government now defines a "premature death" as one that occurs before the age of sixty-five. That standard should probably be raised a few years, but what should not be changed is the concept of a premature death. An implication of this strategy is that, when the average age of death from a disease comes later than the prematurity standard, there should be a reduction of (not an elimination of) research funds to combat it; the money saved should be switched to diseases where most deaths come before the prematurity line. By this standard, and in light of the fact that cancer is increasingly a disease of the elderly, the NIH cancer budget could be reduced, not constantly expanded. Understood this way, cancer remains an important research target, but one whose priority would gradually decline over time, giving way to more pressing needs.

Give "compressing morbidity" a research status equivalent to that of saving and lengthening life. The notion of compressing morbidity—shortening the period of poor health before death—has been around at least since the time of the French philosophe Condorcet two hundred years ago. It seemed only a pipe dream. But in recent years evidence has begun to accumulate that to some extent we can falsify the common adage of "longer life, worse health." For those who have good health habits and an adequate socioeconomic foundation to their lives, there can be a significantly lessened chance of a premature death and an old age burdened by illness and disability.36 Death is not the enemy, but a painful, impaired, and unhealthy life before death. Research on health promotion and disease prevention requires much greater financial support, as does research designed to improve the quality of life within a finite life span.

Persuade clinicians that the ideal of helping a patient achieve a peaceful death is as important as that of averting a patient's death. I contended above that one clinical spillover effect of the research war against death is an implicit purveying of the notion that death is an accidental, contingent biological phenomenon. For the clinician that message has meant that the highest duty is to struggle against death and that (with the help of research) such a struggle need not be in vain. In that context, helping patients achieve a peaceful death will always be seen as the lesser ideal, what is to be done when the highest ideal—continuing life—cannot be achieved.

The two goals should have equal value. In practice, this would mean, in a patient's critical illness and with death on its way, that the physician's struggle against a poor death would equal the struggle against death itself. The two ideals, of course, rarely admit of a wholly comfortable resolution. Nonetheless, a serious tension between them would help weaken the influence of the values inherent in the research imperative against death, by giving it a meaningful competitor; and it would also help improve palliative-care medicine and good patient care at the end of life. Because we will all die, palliative care should apply to everyone and not just to the losers, those whom medicine could not save. And of course research on improving palliative care should be given an increased budget.

Redefine medical "progress." We now commonly understand the crown jewel of medical progress to be the conquest of lethal disease. And we celebrate the triumph of a declining mortality rate, whether from heart disease, cancer, or AIDS. No doubt that celebration will continue and, with premature deaths, it should. But medical progress should increasingly refer to the avoidance of illness and disability, to rehabilitating those who have succumbed to disability, to tackling conditions that do not kill but otherwise ruin lives (such as serious mental illness), and to helping people understand how to take care of their own health. Death remains an enemy, but it is only one item in a list of many enemies of life—and not in the long run the most important.

Modern medicine, at least in its research aspiration, made death Public Enemy Number One. It is so no longer, at least in developed countries, where average life expectancies are approaching eighty. The enemy now ought to be lives blighted by chronic illness and the inability to function successfully. Death will always be with us, pushed around a bit to be sure, with death from one disease being superseded by death from another disease. That cannot and will not be changed. But we can change the way people are cared for at the end of life and we can significantly reduce the burden of illness. It is not, after all, death but a life poorly lived that people fear most, particularly when they are old. Something can be done about that, and research has much to contribute.

 

Aging and Death

Though not death's identical twin, aging too is feared and marked by decline. Less terrible than death, it has nonetheless been considered bad enough to merit the laments of poets, writers, ordinary people, and the medically inclined: just about everyone. For centuries, the notion of conquering aging, or rendering its burdens less harsh, has been a part of every culture's reflection on human fate, joining the struggle against aging with the struggle against death. There is another linking characteristic: unless someone dies a premature or accidental death, aging is now more than ever understood to be the main biological gateway to death. With the decline in infant and child mortality—and with life expectancy far beyond sixty-five for a majority of people in developed countries—we cannot think about eliminating or ameliorating death without also thinking about aging, or about improving old age without doing something also about death.

The ancient world took death to be a harsh but unavoidable reality, old age as simply a burden to endure. The modern world has been more hopeful. A softer view, going back to the Italian Renaissance, envisions an old age marked by wisdom and delight in the simple pleasures of life. Still another picture, even more common, was suggested some years ago by Gerald J. Gruman, one that joins the Enlightenment optimism of Condorcet to that of modern individualism. It counsels the elderly to reject what Gruman called "medical mortalism" in favor of a scientific attack on aging and death. No less important is a kind of living for oneself, a rejection of communal notions of a self-sacrificial life in favor of personal creativity and self-assertion. Specifically rejected are idle musings about "central questions of meaning and value," which are endlessly "open for future resolution."37 This is not far from another look into the future, one that sees the scientific conquest of aging and added years of youth as bringing "the transformation of our society from a pattern of war and struggle to an era of utopian peace . . . [allowing] adequate time to uncover the secrets of the natural universe . . . that could serve as the foundation for a civilization of never-ending progress."38

Aging as "Disease"

But where does aging stand as an object of scientific research? Is it a disease like other physical pathologies or is it, like death, a "natural" biological inevitability? The strongest case for its inevitability is that, unlike other pathologies, it occurs in every human being and in every other organic creature. In a way that nothing else ordinarily classified as a disease is, aging is predictable. We may or may not get cancer or heart disease or diabetes, but we will surely get old and die. Yet much of the decline associated with age, particularly the increase in chronic disease and disability, is accessible to cure or relief. Even many of the other biological indices of aging—decline of hearing, rise of blood pressure, bone mineral loss, reduced muscle mass, failing eyesight, decreased lung function—are open to compensatory intervention though not at present to complete reversal.

In short, if aging is in many respects something other than disease, it has enough of the characteristics of disease to invite, and respond to, medical tinkering and improvement. Certainly there is no reason to classify it as "natural," if by that we mean that nothing should be done about it. On the contrary, it can be—and has in fact been—treated effectively as if it were a disease, not by combating old age as such but by treating the undesirable conditions associated with it.39 That route is one possibility; the other is to take on the biological process of aging itself as a research target. Timothy Murphy has suggested two pertinent questions here. Instead of asking "is aging a disease?" we should ask, first, is aging "objectionable such that its prevention and cure ought to be sought"? Second, can a convincing argument be developed in favor of a "cure" for aging, to show that "human significance warrants [it] and possibly seeks such a cure and that the social costs of curing aging are morally acceptable"?40

Is aging "objectionable"? Well, it is hard to find many people who welcome it, at least in its advanced phases, where the decline is steep and the disabilities crippling. But does the fact that we don't like it show that it is inherently objectionable, an offense against human dignity? That is a harder case to make, especially since various cultures have found ways to treat the aged with dignity and allow the aged to accept their aging. To make the idea of dignity dependent on the state of our bodies or minds trivializes it. If we do so, then dignity becomes nothing but an accident of biology, with some people lucky to have it and others not. That is a corruption of the idea of human dignity, the essence of which is not to reduce value of people to a set of acceptable characteristics, such as the proper race or sex, social class or bodily traits, but to ascribe dignity to them simply as human beings apart from their individual characteristics.

There is another way to look at aging. While it is possible to situate the place of death within evolution and see its value in endlessly renewing human vigor and possibility, that is not so easy to do with aging. It seems to serve no useful biological function other than as a prelude to death, and for just that reason it might itself be understood as part of the same biological process. But if we can distinguish aging from the decline that brings death, perhaps we can sensibly resist the former while not equally resisting death. We might then agree that, while aging is not incompatible with human dignity, it is objectionable enough to merit serious scientific attention. The collective "we" of evolution may need it, together with its twin, death, but the "we" of living cultures could do with considerably fewer of its burdens and downward slopes.

Aging and Longevity

Does that mean we need to find a "cure" for its burdens? An immediate difficulty here is that it is not clear what a "cure" of aging might look like. If death is the final outcome of aging for all biological creatures, does aging begin at birth or only in adulthood? A scientific answer to that question might then lead us to ask whether a cure would aim at perpetual youth or perpetual adulthood (and, if the latter, young or old adulthood). Or we might envision a slowing of the aging process to a snail's pace, not exactly a clean cure but an indefinite forestalling of the worst of its present consequences and of its final outcome, death.

I set out three meaningful possibilities for the cure or amelioration of aging (and use them also in the next chapter in another context).

Normalizing life expectancy. The aim here is to bring everyone up beyond what would be considered a premature death to what is now the average life expectancy in the most developed countries of the world (in Japan, for example, it is eighty-five for women) and to bring men up a few years to a life expectancy equal to that of women. This trend is already under way (though not in all poor countries), driven by improved public health standards, better education, housing, diets, and economic status. Normalization must, however, be accompanied by improved standards in the quality of life, and much of that can be accomplished through research and technological innovation. The cure or amelioration of osteoporosis, arthritis, Alzheimer's disease and other dementias, and improved methods of dealing with loss of hearing and sight would be high on any list of valuable research goals.

My characterization of normalization retains the idea of a premature death. There are at least four ways of defining a premature death, each of them arbitrary to a considerable degree. There is a death that comes earlier than the average life expectancy in a population; that might be called the statistical definition. There is the cultural definition that classifies people as young or old for various social or political purposes. Since the Bismarckian welfare programs of the late nineteenth century in Germany, the age of sixty-five has been widely used as the dividing line. Then there is what I think of as the psychological meaning of a premature death, the age at which people begin thinking of themselves as old. Finally, there is a biographical definition, that stage when people have accomplished the main tasks and goals of their lives: education, work, parenthood, travel, and whatever else their individual talents allowed.

Each of these definitions is arbitrary in the sense that each is, and always will be, variable and moving. The statistical definition will change as average life expectancy increases (in most places) or decreases (as in Russia and many sub-Saharan African countries). The cultural definition will move as more people go into old age in good health, are capable of remaining active even if not employed, and are seen as still part of the productive, nondependent segment of society. The psychological definition will reflect the cultural one, of course, but not entirely; people do vary in their own sense of age and aging. And the biographical definition will depend on idiosyncratic life goals.

Despite the variables in each of the definitions, they remain useful for establishing social programs, for creating conventions and expectations of behavior at different ages—many are grateful when old age relieves them of earlier responsibilities—and for helping set targets for biomedical research. Death at sixty-five now seems to require the label "premature," and at seventy the label has become increasingly plausible; the cutoff age may go up further in the future. There can well be a legitimate gap between what is culturally thought of as a premature death and the aim of bringing everyone up to the statistical average of eighty-five. My rationale for the distinction is that most people may well have lived a full and fruitful biographical life before age seventy, and thus we mourn their loss less than that of a much younger person. We may also have different reasons for setting the eligibility ages of various social programs (employment rules, Medicare, special housing) lower than the average life expectancy.

Much of the research agenda is already in place for the normalizing of aging, consisting of what is already known to improve health and to avoid premature death. It is a mixture of improved public health programs, decent medical care (with an orientation to health promotion programs designed to change behavior, and primary care), healthy life styles, good education, jobs, housing, and a welfare safety net. Beyond that, research on chronic diseases that lead to premature death, create disability in old age, and ruin or significantly diminish the quality of life is appropriate. Equally appropriate is governmental support for such research, which contributes to the overall health and well-being of the population as a whole. This will not be true of the next category.

Maximizing life expectancy. The purpose of research efforts to maximize life expectancy would be to bring everyone up to what are now the historically longest known human life spans, between 110 and 122 years. If some few people can live that long (and want to), why not make it theoretically possible for everyone to get there? There is a certain plausibility to that idea, if only because the course of evolution has shown that species have acquired very different life spans; life spans are biologically malleable. Recent research has, moreover, begun to suggest that there may be no fixed maximum life expectancy.41 At the least, earlier estimates of such a maximum have again and again been proved wrong in recent years, often because they extrapolated from past trends in causes of death or age at death, both of which have been changing.

The death of a Frenchwoman at 122 in the late 1990s and the regularity with which people living to between 105 and 110 are now being reported cannot fail to catch the eye. Before 1950 centenarians were rare, and there may never have been any before 1800. Since 1950, however, their numbers have doubled every ten years in Western Europe and Japan (with women outnumbering men by four to one, and by even more at higher ages); and those centenarians now alive live on average two years longer than those of a few decades ago.42

Nonetheless, while the trend is strongly in the direction of more people who are very old, S. J. Olshansky has presented strong data indicating how hard it will be to move everyone far along in that direction. Working with mortality trends in France and the United States, he shows that it would take huge reductions in mortality rates at every age from present levels. To move, for instance, from an average life expectancy of seventy-seven in France (combining male and female) to eighty would require an overall mortality decline of 23 percent; and it would take a decline of 52 percent for all ages to move the average to age eighty-five, and 74 percent for the average to move up to age ninety. Since mortality rates are already low for younger ages, most of the mortality decline would have to take place among those over age fifty.43 That decline is not theoretically impossible but is in practice implausible.

To get an idea of the enormity of the task, recall that a cure for cancer, the second greatest killer in the United States, would bring about only a 1.5 percent overall decline in mortality. In response to the contention that lifestyle modifications could bring about changes of the necessary magnitude, a number of studies have suggested that mortality would not significantly change even if the entire population lived in an ideally healthy way.44 J. W. Vaupel (never citing Olshansky, and admitting that his own calculations are rough) is more optimistic. He holds that if mortality rates in France keep declining at the pace that has prevailed in the past, most people can expect to live to ninety in the not too distant future.45 Whatever the final truth here—which will take decades to appear—there is considerable good sense in Kirkwood's conclusion that "the record breakers [for individual life span] are important . . . [but] the major focus for research must be to address the main body of the life span distribution, i.e., the general population, and to improve knowledge of the causes of age-associated morbidity and impaired quality of life."46
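For readers who want to see the arithmetic behind Olshansky's point, the short Python sketch below builds a toy period life table and applies uniform cuts to every age-specific death rate. The Gompertz mortality schedule and its parameters are hypothetical, chosen only so that baseline life expectancy lands in the mid-seventies; the resulting numbers will not match Olshansky's French data, but the pattern (very large mortality cuts, comparatively modest gains in life expectancy) is the same.

    import math

    # Toy period life table with a hypothetical Gompertz mortality schedule.
    # Parameters a and b are illustrative only, tuned so that baseline life
    # expectancy at birth comes out in the mid-seventies.
    def life_expectancy(reduction=0.0, a=2e-5, b=0.105, max_age=120):
        """Life expectancy at birth when every age-specific death rate is
        cut by the fraction `reduction` (0.0 means the status quo)."""
        survivors, years = 1.0, 0.0
        for age in range(max_age):
            q = min(1.0, a * math.exp(b * age) * (1.0 - reduction))
            years += survivors * (1.0 - q / 2.0)  # person-years lived this year
            survivors *= 1.0 - q
        return years

    # Olshansky-style question: how far does an across-the-board mortality
    # cut of 23, 52, or 74 percent move life expectancy at birth?
    for cut in (0.0, 0.23, 0.52, 0.74):
        print(f"mortality cut {cut:4.0%} -> e0 = {life_expectancy(cut):5.1f} years")

Running it shows gains of only a handful of years even for cuts of half or more, because mortality at younger ages is already so low that almost all of the improvement must come late in life, where the Gompertz curve rises steeply.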

Optimizing life expectancy. The most ancient version of optimization is bodily immortality, and since no clear scientific theory of how to achieve it exists, I put it aside here. The more modest version is to move the average life expectancy to, say, more than 150 years. As the previous analysis of maximizing life expectancy suggested, it will be very hard, even if not theoretically impossible, to get average life expectancy to eighty-five, much less one hundred. Most commentators seem able to envision incremental gains within the limits of present biological and medical knowledge, but they agree as well that only striking genetic breakthroughs could get us to and beyond 150. The principal obstacle appears to be the multifactorial nature of the aging process; no single magic bullet is likely to do the job. All of the human organs, including the brain, would have to benefit simultaneously from any breakthrough for the results to be anywhere near desirable.

As someone who has been following the scientific developments in understanding aging for over thirty years, I am aware how remarkably little practical progress seems to have been made, even though there has been a real gain in knowledge about the aging process. Among the earlier theories that have been rejected or called into question are notions of a fixed limit to the replication of cells over a life span (once thought to be fifty divisions), of cellular aging, and of the evolutionary necessity of unalterable programmed death. At the same time, recent research on telomeres—stretches of DNA, and the proteins that bind them, that protect the ends of chromosomes—has shown that they grow shorter each time a cell divides, until the cell eventually dies. That work reconfirms the notion of a division limit in cells which, if better understood, might allow us to hold off their accumulated, progressive decline. It extends the long-held view that aging is a gradual breakdown of the genetic mechanisms that preserve life; the trick is to find a way to stop or slow their decline. In this view, aging is a failure of biological adaptation, which Michael Rose calls a case of "natural selection abandoning you."47 Research on telomeres, nutrition, free radicals, antioxidants, apoptosis (programmed cell death), hormonal regulation, cell rejuvenation, and ways of repairing DNA is well under way. Alternatively, the search is on for those positive genetic factors that protect life, have helped individuals flourish in earlier years, and might be enhanced to continue doing so.48

In sum, the genetic approach to life-span extension aims to find the basic underlying mechanisms of aging, still poorly understood, and then to discover ways of changing and manipulating them. If there is to be a radical change in life expectancy, this approach is currently the only seriously envisaged way of getting there.49 A medical approach, by contrast, focuses on the various disabilities and diseases that bring poor health, and eventually death, to the elderly and has been (as noted above) the main approach to the incrementalism of the maximizing strategy, far more limited in its ultimate possibilities.


Do We Need a Much Longer Life? Can We Stand It?

Whether a form of rationalization or a higher insight, most of the imaginative literature on life extension has reached a negative conclusion. Citing boredom or debility, it debunks the idea of superextending existence in either its youthful or its riper guise. Even so, the vision of a fountain of youth, or a more up-to-date iteration of long life in good, vigorous health, hangs on. There are many people—and all of us know a few—who would like to live indefinitely; and almost everyone, if not in utter misery and given a choice, would want to live at least one more day, and one more after that. Even if I see some point in the evolutionary benefits of death and a change of the generations, that is a terribly abstract way of looking at my own life: it is doing well and not too interested in making its evolutionary contribution. A longer life beckons.

I am not alone. The National Alliance for Aging Research reported in 2000 that it had identified twenty-five firms it labeled "gero-techs" because of their focus on applied aging research. The alliance itself sees gerotechnology as a viable market possibility that can help find ways—through improved health—to avoid rationing health care to the elderly in the future. Its director, Daniel Perry, anticipates that through gerotechnology in the twenty-first century, "the drive to discover the means to produce youthful health and vitality [will be] no less than a matter of national necessity."50 Gerotechnology, then, would hope to assure longer and healthier lives and thus to avoid the economic and other problems of aging societies.

The language of "national necessity" seems to me a variant way of speaking of a research imperative. An immediate problem comes to mind. Is the aim to improve the health of the elderly within the normalization model, that is, to get everyone up to the present average life expectancy? If so, it is consistent with the goal of compressing morbidity, which looks increasingly feasible. Or is the aim (of some of those gero-tech firms, for instance) to push the length of life forward into the maximizing or optimizing range? In that case, it may make geriatricians' efforts to compress morbidity that much harder. Most (though not all) of those who reach the age of 100 require significant help with what geriatricians call the activities of daily living, suffer from various chronic conditions, are usually frail, and will have some degree of dementia; and it all gets worse after 105. It may well be that a reduction in morbidity can keep pace with a reduction in mortality, but most likely only if the net gains in life expectancy come slowly.

Yet the health problems and uncertainties connected with increased longevity are hardly the only ones we need to think about. What are the other social consequences of efforts in that direction? That is a difficult question to answer, if only because there are different possible directions (and mixtures of directions) in which future developments could go. We are already on one path, that of normalization, aiming to improve the quality of life of the elderly, not directly trying to lengthen the average life expectancy but getting, without trying, a gradual movement in that direction.

There is already considerable knowledge about that trend. Within twenty to thirty years, when the United States' baby boom generation retires in large numbers and the proportion of elderly moves from 13 to 18 percent, there will be serious problems sustaining the Medicare program at its present level.51 Comparable, and perhaps even worse, problems will face officials in other countries (Germany, for example, expects to have nearly 24 percent of its population over age sixty-five). The correlative decline in the number of young people available to pay for the health care of the old (the so-called dependency-ratio issue) will only exacerbate the situation, as will the continued introduction of new, often beneficial, but usually more expensive technologies; public demand for ever-better medicine will not be much help either.
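A back-of-the-envelope Python sketch can make the dependency-ratio point concrete. The elderly population shares are the ballpark figures just cited (13 and 18 percent for the United States, 24 percent for Germany); the assumption that children remain a fixed quarter of the population is mine, for illustration only.

    # Toy old-age support arithmetic: working-age people per elderly person.
    # Elderly shares come from the text; the fixed 25 percent share for
    # children is an illustrative assumption.
    def support_ratio(elderly_share, child_share=0.25):
        """Working-age people per elderly person, given population shares."""
        working_share = 1.0 - elderly_share - child_share
        return working_share / elderly_share

    for label, share in [("US, circa 2000", 0.13),
                         ("US, boomers retired", 0.18),
                         ("Germany, projected", 0.24)]:
        print(f"{label}: {support_ratio(share):.1f} workers per elderly person")

On these assumptions the ratio falls from roughly five workers per elderly person to about two, which is the arithmetic behind the worry that ever fewer young people will be paying for the care of ever more old ones.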

Something will have to give here. There is already the expectation of raising the age of eligibility for Medicare from sixty-five to sixty-seven, and further moves in that direction will occur. The promise of reduced disability for the aged in the years to come will be of great help, but even so various unpleasant policies will probably accompany it: means-testing for the elderly rather than full and free coverage; rationing of health care, overt or hidden; constraints on health-care providers and hospitals; and constant efforts to wring greater efficiency out of the system. A universal system of health care (which I support, but which is not yet on the horizon) might lead to a generally more rational and equitable system, but it would increase the governmental cost of health care and might not directly help solve the elderly health-care problem. There are optimistic voices to be found, but not too many. The aging-research alliance noted above believes that salvation lies in improved technologies to bring better health to the elderly. Others believe that some combination of greater efficiency, more choice on the part of individuals, and savings accounts will make it possible to weather the baby boom era—an era that will in any case end somewhere in the vicinity of 2050, bringing a more affordable situation.

Once a move is made toward maximization and beyond, then a larger range of problems begins to emerge, and much would depend on the kind of age extension that research might produce. A longer life with a concomitant gain in vigor would be one possibility. Another would be a longer life but at present levels of vigor. Still another would be a longer life that simply stretched the length of the decline. And still another would be a longer life with mixed effects, mental and physical, some good and some bad.

Each of these possibilities would raise its own set of problems, and I will not try to enumerate them here. Whatever they turn out to be, even small changes toward any of these outcomes would have strong general effects. Included would be the impact on younger generations jockeying for jobs, promotions, and positions of leadership. In the struggle to pay for the extended years without wiping out retirement and social security, would the elderly be forced to work more years than they might like? There would also be a great impact on childbearing and child rearing, as different definitions of youth and middle age emerged and as the job market for women of childbearing age changed (and what would that age come to be?); and an impact on social status and community respect, as a larger and larger proportion of the population became elderly (and, with that, the possibility of intergenerational conflict). Everything, in a word, would change.

Suffice it to say that a society with a much larger proportion of elderly would be a different kind of society, perhaps good, perhaps bad; much would depend on the strategies employed to cope with all the needed changes and how much time was necessary to put them in place. If by chance a striking genetic breakthrough should allow lives of 150 years, the impact would be all the more dramatic, and the necessary changes in social policy all the more radical.

Do We Need to Increase Average Life Expectancy?

Do societies worldwide need a breakthrough to the possibilities of maximizing and optimizing average life expectancy—and can any afford a research drive to achieve it? It is very hard to find any serious argument to support that development, as if future societies will be inadequate and defective unless we all have longer lives. None of our current social problems—in education, jobs, national defense, environmental protection, or other urgent issues—stem from a low average life expectancy and none would vanish with a higher average life expectancy. Many problems would grow exponentially. At most, many individuals have said they would like to try a longer life and would probably be willing to pay for it. But for how much of the total direct and indirect costs of living out extended life spans? Ought we to want it for ourselves?

I say "ought" to force myself and my readers to ask just what we think we would gain beyond a life that ended on average at, say, eighty. The question should give everyone pause, since no one could know in advance whether they would in fact fare well, whether the kind of extended life span on offer would be one they found acceptable, or what they would do if it did not turn out as planned. We might agree that there are many unfortunate features of the present situation, and most of us can think of reasons why we would like more years. But no clear correlation between a satisfying life (assuming good health and the avoidance of a premature death) and length of life has ever been demonstrated. How many people have any of us known who died at age eighty or ninety, but for whom we felt sorrow because of all the possibilities that lay before them? I have been to many funerals of very old people and have yet to hear anyone lament a loss of future possibilities, however much they grieved to lose a friend or relative.

Some of us may be prepared to take our chances. As a policy matter, what stance should we take toward deliberate research efforts to extend average life expectancy and individual life spans? There are three possibilities: to support such research at the public, governmental level and encourage the private sector to pursue it; to refuse public grants for such research but permit it in the private sector; to refuse public grants and to use considerable social and economic pressure to discourage it at the private level (I ignore here the possibility of banning such research, which is neither likely nor easy to do).

Unless someone can come up with a plausible case that the nation needs everyone to live much longer, and longer than the present steady gain of normalization will bring, there is no reason whatever for government-supported research aimed at maximizing or optimizing life spans. Longer lives may in any case come about as an accidental by-product of efforts to improve the quality of life for elderly people; but there is no reason to court that possibility directly with targeted research. Nor, for the same reasons, is there any reason to encourage the private sector to pursue it.

Yet that sector will undoubtedly do so if promising leads open up, and if it believes a profit can be made. Should that happen, there would be every reason to put moral, political, and social pressure on the private sector not to press on with the research unless it took part in a major national effort to work through in advance the likely problems that success would bring. The matter would be important enough, the implications grave enough, that it would be folly to wander in with no forethought or strategies in place to deal with the economic and social consequences, many of which can be realistically imagined. To drop a new and far-reaching technology on our society, or any society, simply because people will buy it would be irresponsible. It would instead require the fullest airing over a decent period of time and in a systematically organized fashion. The public could then decide what it would like to see happen and be in a position to make a considered judgment about a collective response.

There is no doubt also that a private-sector, age-extending, anti-aging product would be expensive (most new pharmaceuticals are, and would not otherwise be worth developing) and probably unavailable to everyone at first (and perhaps not ever). As with many expensive new technologies, no public-sector body could reasonably deny the pharmaceutical to everyone simply because not everyone could afford it.52 But since there will undoubtedly be disagreement on the matter, the decision to pursue research aimed at extending life expectancy to some optimizing level should rest on community consensus about the technology's merits rather than go by default to the market and private choice. However difficult a collective consensus may be to achieve, the numerous problems that would arise for everyone if some had the technology but others did not (of which inequity might be the least) are easy to imagine. Would governments have to devise different social security, retirement, and job arrangements for those who took the product, living side by side with those who did not choose (or could not afford) to take it? What responsibility would the former bear for the consequences of their choice—total personal responsibility, for better or worse, or would a social safety net be available to help them (paid for by those who did not choose to go that way)? Those are questions a pure market approach cannot answer, but a failure to raise and resolve them would put at risk not only those who chose longer lives but the rest of us as well.

The question of research deliberately aimed at extending average life expectancy, at changing the course of aging, bears directly on the goals of medicine. I argue that death itself is not an appropriate medical target, and that there is no social need to greatly extend life expectancy. But how might we think more broadly about medical research that expands its traditional goal of preserving and restoring health to enhance human nature and human characteristics?


NOTES

3. Is Research a Moral Obligation?

1. Renée C. Fox, "Experiment Perilous: Forty-Five Years as a Participant Observer of Patient-Oriented Clinical Research," Perspectives in Biology and Medicine 39 (1996): 210.

2. Ian Gallagher and Michael Harlow, "Health Chiefs' Yes to Human Clones," International Express, 1-7 August 2000, 10.

3. W. French Anderson, "Uses and Abuses of Human Gene Therapy," Human Gene Therapy 3 (1992): 1.

4. Ronald Munson and Lawrence H. Davis, "Germ-Line Gene Therapy and the Medical Imperative," Kennedy Institute of Ethics Journal 2 (1992): 137.

5. Letter to the president and members of Congress sent by the American Society for Cell Biology, March 4, 1999.

6. Glenn McGee and Arthur L. Caplan, "The Ethics and Politics of Small Sacrifices in Stem Cell Research," Kennedy Institute of Ethics Journal 9 (1999): 152.

7. Human Embryo Research Panel, Report of the Human Embryo Research Panel (Bethesda: NIH, 1994), 44-45.

8. James F. Childress, "Metaphor and Analogy," in Encyclopedia of Bioethics, ed. Warren Thomas Reich, rev. ed. (New York: Simon and Schuster, 1995), 1765-73.

9. George J. Annas, "Questing for Grails: Duplicity, Betrayal and Self-Deception in Postmodern Medical Research," Journal of Contemporary Health Law and Policy 12 (1996): 297-324.

10. George J. Annas, "Reforming the Debate on Health Care: Reform by Replacing Our Metaphors," New England Journal of Medicine 332 (1995): 744-47.

11. Susan Sontag, AIDS and Its Metaphors (New York: Farrar, Straus, and Giroux, 1989), 95.

12. Gilbert Meilaender, "The Point of a Ban: Or, How to Think About Stem Cell Research," Hastings Center Report 31, no. 1 (2001): 9-16.

13. Onora O'Neill, "Duty and Obligation," in Encyclopedia of Ethics, ed. Lawrence C. Becker and Charlotte B. Becker (New York: Garland Publishing, 1992); Richard B. Brandt in Encyclopedia of Ethics, 278; and Richard B. Brandt, "The Concepts of Obligation and Duty," Mind 73 (1964): 374-93.

14. Hans Jonas, "Philosophical Reflections on Experimenting with Human Subjects" (1969), in Philosophical Essays: From Ancient Creed to Technological Man, ed. Hans Jonas (Englewood Cliffs: Prentice-Hall, 1974), 129.

15. Ibid., 117.

16. See also Meilaender, "The Point of a Ban."

17. President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Securing Access to Health Care (Washington, D.C.: Government Printing Office, 1983), 1:22-23.

18. Norman Daniels, "Justice, Health, and Healthcare," American Journal of Bioethics 1 (2001): 3.

19. Ibid. See also Ronald Bayer, Arthur L. Caplan, and Norman Daniels, eds., In Search of Equity (New York: Plenum Press, 1983); and "European Issue: Solidarity in Health Care," Journal of Medicine and Philosophy 17 (1992): 367-477.

20. Ronald Puccetti, "The Conquest of Death," Monist 59 (1976): 249-63.

21. Annette T. Carron, Joanne Lynn, and Patrick Keaney, "End-of-Life Care in Medical Textbooks," Annals of Internal Medicine 130 (1999): 82-86.

22. Susan Sontag, Illness as Metaphor (New York: Farrar, Straus, and Giroux, 1977), 8.

23. Lewis Thomas, The Lives of a Cell: Notes of a Biology Watcher (New York: Penguin Books, 1978), 115.

24. Bernard Williams, Problems of the Self (Cambridge, Mass.: Harvard University Press, 1976), 94-95; and Eugene Fontinell, Self, God, and Immortality: A Jamesian Investigation (New York: Fordham University Press, 2000), chs. 7-8.

25. Hans Jonas, "The Burden and Blessing of Mortality," Hastings Center Report 22, no. 1 (1992): 37.

26. Daniel Callahan, The Troubled Dream of Life: In Search of a Peaceful Death (New York: Simon and Schuster, 1993), 63.

27. Darrel W. Amundsen, "The Physician's Obligation to Prolong Life: A Medical Duty Without Classical Roots," Hastings Center Report 8, no. 4 (1978): 23-30.

28. Philippe Ariès, The Hour of Our Death, trans. Helen Weaver (New York: Knopf, 1981).

29. Quoted in Lawrence M. Fisher, "The Race to Cash in on the Genetic Code," New York Times, 29 August 1999, C-1.

30. William B. Schwartz, Life Without Disease: The Pursuit of Medical Utopia (Berkeley: University of California Press, 1998).

31. Gilbert Meilaender, personal communication, 2002.

32. E. J. Larson and T. A. Eaton, "The Limits of Advance Directives: A History and Reassessment of the Patient Self-Determination Act," Wake Forest Law Review 32 (1997): 349-93.

33. Carron et al., "End-of-Life Care."

34. L. Schneiderman and N. Jecker, Wrong Medicine: Doctors, Patients, and Futile Treatment (Baltimore: Johns Hopkins University Press, 1995).

35. Callahan, Troubled Dream of Life.

36. Anthony J. Vita et al., "Aging, Health Risk, and Cumulative Disability," New England Journal of Medicine 338 (1998): 1035-41.

37. Gerald J. Gruman, "Cultural Origins of Present-Day 'Ageism': The Modernization of the Life Cycle," in Aging and the Elderly: Humanistic Perspectives in Gerontology, ed. Stuart F. Spicker et al. (Atlantic Highlands, N.J.: Humanities Press, 1978), 359-87.

38. Robert Prehoda, Extended Youth: The Promise of Gerontology (New York: G. P. Putnam, 1968), 254.

39. Arthur L. Caplan, "The Unnaturalness of Aging—A Sickness Unto Death?," in Concepts of Health and Disease, ed. Arthur L. Caplan, H. Tristram Engelhardt, Jr., and James J. McCartney (Reading, Mass.: Addison-Wesley, 1981), 725-37; and Daniel Callahan, "Aging and the Ends of Medicine," in Biomedical Ethics: An Anglo-American Dialogue, ed. Daniel Callahan and G. R. Dunstan (New York: New York Academy of Sciences, 1988), 125-32.

40. Timothy F. Murphy, "A Cure for Aging?," Journal of Medicine and Philosophy 11 (1986): 237-55.

41. T. B. L. Kirkwood, "Is There a Limit to the Human Life Span?," in Longevity: To the Limit and Beyond, ed. Jean-Marie Robine, James W. Vaupel, and Bernard Jeune (Berlin: Springer, 1997), 69-76; Ali Ahmed and Trygve Tollefsbol, "Telomeres and Telomerase: Basic Science Implications for Aging," Journal of the American Geriatrics Society 49 (2001): 1105-9; and Jim Oeppen and James W. Vaupel, "Broken Limits to Life Expectancy," Science 296 (2002): 1029-31.

42. Shiro Horiuchi, "Greater Lifetime Expectations," Nature 405 (2000): 744-45.

43. S. J. Olshansky, "Practical Limits to Life Expectancy in France," in Longevity: To the Limit and Beyond, 1-10.

44. Ibid.

45. James W. Vaupel, "The Average French Baby May Live 99 or 100 Years," in Longevity: To the Limit and Beyond, 11-27.

46. Kirkwood, "Is There a Limit?," 75.

47. Michael R. Rose, "Aging as a Target for Genetic Engineering," in Engineering the Human Germline, ed. Gregory Stock and John Campbell (New York: Oxford University Press, 2000), 54.

48. James W. Vaupel et al., "Biodemographic Trajectories of Longevity," Science 280 (1998): 855-60; and James R. Carey and Debra S. Judge, "Life Span Extension in Humans Is Self-Reinforcing: A General Theory of Longevity," Population and Development Review 27 (2001): 411-36.

49. E. Timmer et al., Variability of the Duration of Life of Living Creatures (Amsterdam: IOS Press, 2000), 161-91.

50. Daniel Perry, "The Rise of the Gero-Techs," Genetic Engineering News 20 (2000): 57-58.

51. See Callahan, "Aging and the Ends of Medicine," 125-32; Victor R. Fuchs, "Medicare Reform: The Larger Picture," Journal of Economic Perspectives 14 (2000): 57-70; David M. Cutler, "Walking the Tightrope on Medicare Reform," Journal of Economic Perspectives 14 (2000): 45-56; and Mark McClellan, "Medicare Reform: Fundamental Problems, Incremental Steps," Journal of Economic Perspectives 14 (2000): 21-44.

52. John Harris, "Intimations of Mortality," Science 288 (2000): 59.
