Chapter 4. Analyses
In the discussion of models of policy making, I compared the rational and incremental models. In the rational model, goals are clear and agreed upon; policy makers have complete and reliable data; problems are well defined; a full range of policy options is identified; effects of options are understood and predictable; and final choices maximize previously stated goals. All in all, this is not a bad image of how public policy should be made. We like to think of ourselves as logical and well informed, of our government institutions as well-oiled machines making sound choices, of policy as made up of well-reasoned outcomes furthering worthy social goals.
The incremental model offers a contrasting view of policy making. Here the goals may be unclear or in conflict; information is missing or unreliable; options may be poorly defined or ignored; policy emerges piecemeal, in fits and starts; and results often are different from what was intended. An incremental model presents a less appealing but more realistic view. Studies of several kinds of policy decisions—whether in budgeting, foreign policy, or other areas—support the conclusion that policy making is less rational and more incremental than we would like it to be.
This is not to say that public policy making is irrational. Most institutions and policy makers probably aspire to be as rational as possible. But in a complex world, there simply are too many choices, reconciling too many different goals, based on too little information, made by too many people, to enable us to meet the high standards of the rational model. The best we can achieve is Herbert Simon's concept of bounded rationality, wherein we try to move as far from the incremental toward the rational as we can.
Analysis is simply one way of extending the boundaries of rationality in public policy. When they use analysis to make decisions, policy makers try to understand the problems they are dealing with, the various constraints on their choices, how one way of responding to the problems compares to others that they might use, and the overall (for society) and specific (for groups in society) consequences of their decisions. In this chapter, I examine two kinds of analysis that play a role in environmental policy. The first is risk analysis, which estimates the harm of an activity, substance, or technology. The second is economic analysis, which calculates and predicts the costs and (sometimes) the benefits of different policy goals or decisions.
I begin with the concept of risk and the process that most government agencies use to analyze health risks. Then I turn to economic analysis and its uses in environmental policy making. The chapter concludes with a look at three of many issues that come up when we use risk and economic analysis in environmental policy. 1
Risk Analysis and the Environment
The concept of risk is central to environmental policy. Nearly any environmental problem can be seen as a matter of risk, which we can define simply as the possibility of suffering harm. Of course, risks are all around us, and they are not limited to environmental causes. Driving cars, making investments, climbing mountains, starting a small business—all of these expose us to physical, financial, psychological, or other risks. 2
My focus here, though, is risk from contamination of air, soil, and water.
There are two dimensions to this notion of risk as the possibility of suffering harm. 3
The first is the probability or likelihood of the harm; the second is the severity of the harm, its magnitude or significance. When we have a choice, most of us are inclined to avoid taking risks that pose highly probable and severe harm. Knowing that 1 in 10 of the people who try to scale Mount Everest die in the effort is enough to discourage most of us from the attempt, whatever the chance for glory. The odds are bad (1 in 10) and the harm (death) is one that most of us would regard as severe. The risk of someone in this country dying in a car accident in any year is a much lower 2 in 10,000. It is a risk nearly all of us have decided is worth taking. To compare some sources of everyday risk, consider table 5, which lists activities that pose a 1 in 1,000,000 increased chance of death in a year. Risk is all around us, often from unlikely sources. 4
TABLE 5 COMPARING EVERYDAY RISKS
Activities That Increase Chance of Death by 0.000001 (one in a million)

| Activity | Cause of Death |
| --- | --- |
| Smoking 1.4 cigarettes | Cancer, heart disease |
| Spending 1 hour in a coal mine | Black lung disease |
| Traveling 6 minutes by canoe | Accident |
| Traveling 300 miles by car | Accident |
| Traveling 1,000 miles by jet | Accident |
| Eating 40 tablespoons of peanut butter | Liver cancer caused by aflatoxin B |
| Living 150 years within 20 miles of a nuclear power plant | Cancer caused by radiation |
| Eating 100 charcoal-broiled steaks | Cancer caused by benzopyrene |
| Living 2 months with a cigarette smoker | Cancer, heart disease |

SOURCE: Richard Wilson, "Analyzing the Daily Risks of Life," Technology Review 81 (February 1979): 45.

Health Versus Ecological Risks
Environmental policy makers work in the context of two broad categories of risk. The first is the possibility of harm to human health—anything from eye irritation from air pollution to death from exposure to high levels of a toxic chemical. The object of concern is people and their well-being. We describe health risks according to several features: whether the effects are acute (immediate) or chronic (long-term), how serious they are, whether they are reversible, and the numbers and kinds of people that are affected. Society typically responds quickly to evidence of acute risks, because causes and effects usually are fairly easy to establish. Most of the debate is over chronic risks, where relationships between causes and effects are harder to establish. There is uncertainty about the sources of problems—or whether there is a problem at all.
Until recently, it was likely that when environmental policy makers referred to a chronic health risk they meant cancer. The concern about cancer has dominated government risk assessment. The state of the art for assessing cancer risks is ahead of that for other chronic risks. When agencies justify regulatory action, they usually base their case on cancer, as there is more research and data to draw on. Politically, a focus on cancer has helped environmental agencies generate public support for their programs. To ancients and moderns alike, James T. Patterson has written, cancer has been seen as "voracious, insidious, and relentless." 5
By casting itself as a cancer protection agency in the late 1970s, EPA was able to sustain public support for its programs in troubled economic times. For all of these reasons, cancer has dominated the regulatory policy agenda.
Yet many noncancer health effects should concern us as well. The causes include common pollutants, for example, lead, which is pervasive in contemporary society. Exposure to lead may impair children's physical and mental development and cause high blood pressure in white males. Another example is ozone, which forms when volatile organic compounds (VOCs) from cars and industrial sources react with nitrogen oxides in the presence of sunlight. Ozone causes short-term respiratory problems and stresses the cardiovascular system. Long-term exposure may impair lung function permanently. The list of noncancer but chronic threats to health goes on; we can expect that policy makers will give more attention to such risks in the years to come, as the methods for studying them improve and concern about them grows. Table 6 lists several types of noncancer health effects that have been linked to environmental pollution and an example of each.
TABLE 6 TYPES OF NONCANCER HEALTH RISKS THAT ARE LINKED TO ENVIRONMENTAL POLLUTION

| Type of Health Risk | Example |
| --- | --- |
| Cardiovascular | Increased heart attacks |
| Developmental | Birth defects |
| Hematopoietic | Impaired heme synthesis |
| Immunological | Increased infections |
| Kidney | Dysfunction |
| Liver | Hepatitis A |
| Mutagenic | Hereditary disorders |
| Neurotoxic/Behavioral | Retardation |
| Reproductive | Increased spontaneous abortions |
| Respiratory | Emphysema |
| Other | Gastrointestinal diseases |

SOURCE: John J. Cohrssen and Vincent T. Covello, Risk Analysis: A Guide to Principles and Methods for Analyzing Health and Environmental Risks (Washington, D.C.: Council on Environmental Quality, 1989).
Another major category of risk is harm to ecological resources. When many people think of environmental protection, they probably are more likely to think of ecological than health risks. Fish thriving in a clean river, a clear view of the Grand Canyon, an untainted estuary full of shellfish, a white beach with no litter in sight, a tropical rain forest with tremendous diversity in its plant and animal species—all describe ecological resources worth protecting. Of course, health and ecological risks may overlap. Contaminated fish may pose health as well as ecological risks. But other problems, like emissions from a power plant that impair visibility in a park, are largely aesthetic and ecological. The distinctions between health and ecological risks are important, even when they occur as part of the same problem. They pose different kinds of questions and present choices among diverse, often competing values. They also require different methods for estimating risks and evaluating benefits.
The differences between health and ecological risk assessment result mainly from the greater variety in the forms of life affected and the many endpoints (the range of bad things that may happen) for ecological risks. When assessing health risks, we analyze the effects on human health. For ecological risks, we look at a variety of receptors—birds, plants, ecosystems, and so on. In addition, the endpoints are more diverse in the case of ecological risks and may include the effects on organisms, populations, habitat, natural systems, and others. Ecologists account for levels of ecological organization. They sort living systems into organisms, single-species populations, multispecies communities, and ecosystems. Risks are assessed at each level.
This discussion focuses on health risks; that is where risk assessment has been used the most to make environmental policy. But interest in understanding and assessing ecological risks has been growing. Increasingly, policy makers recognize what EPA's Science Advisory Board has described as "the vital links between human life and natural ecosystems." 6
It often is necessary to distinguish health from ecological risks for analytical purposes, but the connections between the two are strong. The next section looks at perceptions of risk. Following that is a closer look at the process of assessing health risks in environmental policy.

How Do People Perceive Risks?
The study of attitudes toward risks, their acceptability, and people's behavior in response to what they think is harmful is the field of risk perception. Much of the research on risk perception focuses on psychological factors. Some of it also examines the sociological and cultural influences on risk perception, which are especially important from the perspective of public policy. These cultural factors help to explain many of the differences between the lay public's intuitive evaluations and the formal evaluations of risk by experts. 7
We will look first at individual perception of risk, then at social and cultural influences on risk perception.
As for individual perceptions of risks, experimental psychologists have compared statistical (based on quantitative studies) to perceived risks. In one study, people were asked to estimate the frequency of deaths that resulted from forty-one causes, among them, disease, natural disasters, accidents, homicides, and recreation (like mountain climbing). 8
The answers revealed differences between what people thought was risky and what actually was risky. People overestimated the risks of death from unusual, catastrophic, or lesser-known sources (such as nuclear power plants); they underestimated the risks of death from common, better-known, or discrete causes (such as driving). The public's negative views toward nuclear power are shaped by the tendency to attribute high risk to lesser-known problems that could have catastrophic effects. Similarly, rare causes of death attract more attention. Botulism, for example, accounts for about five deaths a year in the United States, but the respondents in one survey thought it caused five hundred. Because deaths from botulism are reported in the media, people are more aware of them and tend to exaggerate their occurrence.
People's perceptions of risks affect their views about the acceptability of different kinds of risk. Consider the differences between perceptions of voluntary and involuntary sources of risk. The risk perception research shows that people are more willing to tolerate risks they assume voluntarily than risks that are imposed on them by others without their consent. This explains why some people might oppose a decision to site a waste incinerator near their homes yet not be concerned with the greater statistical risks of smoking or not using seat belts. They smoke or do not use seat belts by choice. 9
People are also far more concerned about risks that are unknown, dreaded, or seen as catastrophic than risks that are better known or discrete (occurring in a large number of small events rather than in one major event). The public's intuition about catastrophic events may have a solid foundation. An example is a catastrophic accident in one community, whose effects led to what has become known as the "Buffalo Creek Syndrome." The collapse of a slag waste dam in Buffalo Creek, West Virginia, some years ago left 120 people dead and 4,000 homeless. Nearly two years later, when psychiatric evaluators studied the survivors, they found evidence of "disabling character changes" and the sense of a "loss of communality" among them, including a loss of direction and energy. 10
Studies showed similar effects after the Three Mile Island accident, even though there was no apparent physical damage. So attitudes about risk may reflect concerns about social stability and cohesion, about effects on communities, not just individuals.
The lesson of much of the research is that people do not react to risk in a state of social or cultural isolation. Shared values and a sense of community come into play. For example, people's views about the acceptability of various kinds of risks reflect their judgments about the institutions that manage risks in society. 11
If people doubt the trustworthiness of a corporation that is proposing to build a waste incinerator, or the objectivity of the government agency that will issue an operating permit, they will be skeptical about any evidence that they face negligible risks when the incinerator starts operating. People also evaluate risks on the basis of perceived fairness in their distribution. If a local government decides to build a landfill in a poor community and if the waste was produced mostly by a more affluent community in another part of town, we can expect more opposition based on the clear inequities of the decision. If a community views the result of a decision and the process for making it as fair, it is more likely to accept the result. 12
Why does all of this matter in a discussion of analyses? Because there is a gap between the products of the experts in quantitative risk assessment and the informal, more intuitive evaluations of risk by the lay public. As the next section shows, formal risk assessments follow a linear, quantitative path. Risk is defined as a probability of harm times its consequences. Public perceptions of risk, in contrast, are based more on people's attitudes regarding voluntariness, effects on the community, familiarity with the source of the risk, perceptions of fairness in the distribution of risk, and public confidence in the institutions that are managing the risks. The reaction of many experts is that the lay public is not acting rationally, that their intuitive evaluations are not as valid as the methodologically sophisticated risk assessments of the experts. Yet there may be more to the public's perceptions than the experts are willing to recognize. And in a democracy, we need to recognize the validity of these public perceptions when making risk decisions. 13

How Do Agencies Assess Risks to Human Health?
Agencies assess health risks for any of several purposes. Take a decision on setting emission limits for a chemical plant. The prudent course is to determine what substances are in the emissions, how dangerous they are, to how many and what kinds of people, with what effects—in short, to do a risk assessment. For this, we need two kinds of information. The first is on toxicity, a set of measures of the harm of a substance that relates doses to harmful effects—cancer, birth defects, and so on. Toxicity estimates the harm based on different levels and kinds of possible exposures. The second is information on exposure to the substance in the real world. Two chemicals may be equally toxic, but one is emitted in the middle of a city, the other in a lightly populated area. The first poses more risk, in terms of its effects on the overall population. 14

Estimating Toxicity
In my example, the first task is to determine whether the emissions from the chemical plant are a hazard and, if so, to estimate their toxicity. At times, this is straightforward, especially in the case of acute hazards, where the link between exposures and effects is direct. An emergency release of ammonia from a chemical plant affects people right away if the dose is high enough, and it is not difficult to link the cause to the effects. For chemicals like dioxin or asbestos, our concern is also with chronic effects—low levels of exposure, over long periods, with a time lag before the effects (e.g., cancer) can be observed. This time lag between exposure and effects is called the latency period. Latency periods are associated most often with cancer, but there may be latency periods for other kinds of health effects as well.
So our decision about the toxicity of chronic hazards is based on scientific evidence about the effects of low exposures over long time periods, often decades. Ideally, scientists would find a group of people who have been exposed to a substance under such circumstances and study them. The field of epidemiology provides tools for such studies. Epidemiologists are concerned "with the patterns of disease in human populations and the factors that influence those patterns." 15
They identify an exposed population, compare it to a control group that was not exposed, account for extraneous factors (smoking or family history), and then draw conclusions about the health effects of exposure to a substance. This kind of evidence is appealing, because it draws on actual patterns of exposure and observed effects on humans.
The problem is that, for a variety of reasons, data on human exposures are unavailable or unreliable. Because exposures are long-term, it is difficult to go back and document patterns and levels of exposure, medical history, lifestyles, and the other information needed for a valid study. Epidemiological data exist mostly for occupational settings, where the exposure levels are higher, records more available, and the range of health effects narrower. For example, data on lung cancer among uranium miners were used to estimate the effects of prolonged exposure to radon. 16
Studies of asbestos drew on the experiences of shipyard workers during World War II. Occupational studies influenced OSHA's decision to set a benzene standard in the early 1980s. 17
Beyond the limited populations that are exposed at high levels in such settings, though, good human data are hard to come by. It often is difficult to extrapolate from the very different exposure conditions of occupational settings to everyday settings, as critics note for radon. 18
So human evidence is the first choice, but it simply is not feasible for evaluating most chronic exposures. Short of doing clinical studies on humans, the next best method is laboratory tests on animals. Animal bioassays (also called in vivo tests) are the basis for most risk assessments. Scientists administer high doses of substances to test animals, then observe them for tumors or other responses. The responses are extrapolated to draw conclusions about the possible effects on humans. The methods used to extrapolate from test animals to humans vary, depending on what assumptions and models are used. In the 1980s, for example, when assessing the health risks from exposure to asbestos, EPA relied on a linear multistage model. This assumes that the likelihood of cancer increases in proportion to the dose and that the tumors form in stages, known as initiation and promotion. Some risk agents (initiators) cause cancer by stimulating changes in cells; others (promoters) support tumor growth once the cell has been transformed to a precancerous state. Asbestos is associated with both kinds of effects, so it is termed a "complete" carcinogen.
The purpose of information on toxicity is to predict the dose-response relationship, to link dose levels with responses in the population. This relationship is presented in simplified form in figure 4. The horizontal axis gives the dose, the vertical axis the predicted response at that dose. A curve plots the relationship between the two, as the percentage of the exposed population that is predicted to show the adverse effect (cancer, birth defects, or others). At times, the curve is smooth; the likelihood of an effect increases in proportion to the dose, as in Line A. At other times, the curve starts out flat, then rises sharply at some level of exposure, suggesting the existence of a "threshold," as in Line B.
Figure 4. Two simplified dose-response curves.
Whether there are thresholds below which no effects occur is a major issue. Like other organisms, humans absorb low levels of many substances without ill effects, even substances that are dangerous at higher levels. Indeed, substances like sodium are toxic at very high doses but beneficial at low ones. Many dose-response curves display thresholds, known as no observed effect levels (NOELs), above which we try to avoid exposure and below which we decide there is no harm. For other substances, there are no thresholds, or none that are established, and we assume that any exposure poses some risk.
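The two curve shapes in figure 4 can be sketched numerically. The following is a minimal illustration, not an agency model; the slope and NOEL values are invented for the example.

```python
# Two simplified dose-response models, mirroring Lines A and B in figure 4.
# The slope and NOEL values below are hypothetical illustration values.

def linear_no_threshold(dose, slope=0.02):
    """Line A: response rises in proportion to dose; any exposure poses some risk."""
    return min(1.0, slope * dose)

def threshold_model(dose, noel=10.0, slope=0.05):
    """Line B: flat up to a no observed effect level (NOEL), then rises sharply."""
    if dose <= noel:
        return 0.0
    return min(1.0, slope * (dose - noel))

for d in [0, 5, 10, 20, 40]:
    print(d, linear_no_threshold(d), threshold_model(d))
```

Note how the two models disagree most at low doses: the linear model predicts some response at any dose above zero, while the threshold model predicts none below the NOEL. That low-dose region is precisely where the regulatory debate lies.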
The policy of government agencies has been to assume that there is no threshold for carcinogens, that any level of exposure may be harmful. In effect, the threshold is zero. Agencies also assume that there is no safe exposure level for suspected causes of genetic damage. For most other effects, however, they have set thresholds, in the form of NOELs or levels of acceptable daily intake (ADI). As the evidence expands regarding the role of different substances in the formation or promotion of cancer, it may be possible to set thresholds for carcinogens, especially those (like dioxin) that scientists view more as promoters than initiators.
Scientists rely on animal in vivo studies not because they are even close to an ideal substitute for human data but because they are the best data available. Scientists use two other methods, short-term in vitro or tissue culture tests and structure-activity analysis, but neither is considered as valid as in vivo tests. 19
The debate about the validity of animal tests in predicting human effects turns on two critical extrapolations: (1) from high- to low-dose exposures and (2) from test animals (rats or mice) to humans. For chronic hazards, it is not feasible to duplicate the conditions of human exposure in animals. Even if the test animals lived long enough, such tests would be too costly and take far too long. In addition, the rate of cancer occurrence in humans is low enough that it is hard to detect it in animal studies. Lung cancer is the most common form of the disease, but it affects fewer than 8 in 10,000 people annually. Scientists would have to test huge numbers of animals to get a statistical probability of a response (i.e., malignant tumors) for a valid, two-year in vivo study.
To cope with these problems, scientists trade off time of exposure against dose. Many studies of chronic effects run for about two years. During this time, test animals are given much higher doses than a human would face in any normal course of events. Scientists usually administer a maximum tolerable dose (MTD), the highest dose test animals can receive without showing effects other than cancer. An issue in the debate over high- to low-dose extrapolation is the evidence that large amounts of many substances may cause changes in cells, making the cells more vulnerable to the formation of tumors. In lower doses, the same substances may have no cancer-causing effects. This has refueled controversy over whether there are thresholds below which some chemicals may not pose cancer risks. 20
The second extrapolation, from animals to humans, is also problematic. Many differences between test animals and humans are obvious: body weight and size, life span, variations within the species, among others. There also are pharmacokinetic differences in metabolism and excretion patterns. Accounting for these differences is difficult, given science's limited knowledge about the mechanisms behind the causes of cancer. What makes extrapolation even more difficult, however, is that responses to chemicals vary even among rats and mice, or among animals of different sex within the same species.

Estimating Exposure
Several issues arise in estimating exposure. One is whether to rely on monitoring or modeling. Ideally, personal or ambient monitoring gives direct evidence about exposures. Among personal monitoring methods, biomonitoring takes data from body fluids or tissue samples. More common are ambient data from a given site. They measure the amounts of contaminants people are exposed to but not what is reaching tissues, organs, or cells. Another strategy is to use mathematical models to simulate exposures; these can estimate the movement and fate of pollutants in the atmosphere, surface water, groundwater, and the food chain, for example. 21
Consider for a moment the task of estimating exposures to drinking water contaminants. In assessing the risks from lead, EPA had to decide what kind of people consumed how much water at what times of the day. For most assessments of exposure to drinking water contaminants, EPA assumed that men consume 2 liters a day, women and children 1.4. For lead, EPA wanted to be more specific. Information on children's intake was especially important, because lead may impair their development. The patterns in consumption also were important; much of the exposure comes from residential plumbing when lead from solder or pipes corrodes and contaminates the water. The longer the water sits in pipes, the higher the lead levels. Levels are highest when water is first drawn from the tap in the morning, or when many hours pass between uses of the tap. To estimate amounts and patterns of consumption, EPA asked a sample of households to keep a diary of when and how much water the adults and children in the household consumed. Based on the diaries, EPA estimated national consumption patterns for the different groups.
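The arithmetic behind such an exposure estimate is simple: dose per unit body weight equals the contaminant concentration times daily intake, divided by body weight. A sketch follows; the intake rates echo the chapter's figures (2 liters a day for men, 1.4 for women and children), while the concentration and body weights are hypothetical illustration values, not EPA defaults.

```python
# Average daily dose from a drinking water contaminant:
#   dose (mg/kg/day) = concentration (mg/L) x intake (L/day) / body weight (kg)
# Intake rates follow the chapter's assumptions; the lead concentration and
# body weights are invented for illustration.

def daily_dose(conc_mg_per_l, intake_l_per_day, body_weight_kg):
    return conc_mg_per_l * intake_l_per_day / body_weight_kg

lead_conc = 0.015  # mg/L, a hypothetical tap-water lead level

for group, intake, weight in [("man", 2.0, 70.0),
                              ("woman", 1.4, 60.0),
                              ("child", 1.4, 15.0)]:
    print(group, round(daily_dose(lead_conc, intake, weight), 6))
```

The sketch shows why children's intake mattered so much in the lead assessment: even at a lower absolute intake, a child's much smaller body weight yields a far higher dose per kilogram than an adult's.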
In the early 1990s, the special risks that environmental problems could pose to minority communities across the country emerged as a major concern in risk assessment. Some environmental problems affect minorities more than they affect other groups. Lead is one of them; black children who have been tested show elevated blood levels in higher proportions than do white children. Because they are more likely to live in urban areas, blacks and Hispanics also are exposed to more air pollution than are whites. Minorities also are more likely to live near uncontrolled waste sites and to suffer higher risks from pesticide exposures. For a variety of reasons, minority groups often are exposed to higher levels of risk than other groups in the population. In its Environmental Equity report, issued in 1992, EPA outlined how it would adapt its risk assessment methods to better account for such special risks. 22

Characterizing Risk
Multiplying toxicity times exposure gives an estimate of health risk. The goal in a risk assessment is to describe risk in quantitative terms. The most common measure is excess individual risk, or the increased probability that an individual will experience an adverse effect from an exposure or activity. I noted above, for example, that the risk of someone in the United States dying in a traffic accident in a year was 2 in 10,000 (2 x 10⁻⁴). When EPA issued its asbestos regulation in 1989, it estimated the population risk from asbestos in consumer products at about 1 in 1,000,000 (1 x 10⁻⁶). For workers exposed at higher doses, the estimate was far higher—from 7 in 1,000 (7 x 10⁻³) to 7 in 10,000 (7 x 10⁻⁴). The first measure described the risks to a person exposed to asbestos at the level typical of the general population; the second measure described the risks to a person exposed to asbestos at the much higher levels typical of certain occupations.
There are several other ways to describe risks. One is to present population or societal risks—the number of cases expected to occur in the population in a year or other time period. EPA estimates that radon in homes causes 7,000 to 30,000 "excess" cases of lung cancer in the United States each year, for example. Another way to describe risk is with statements of relative risk. A 1990 study found, for example, that women who eat red meat daily are two and one-half times more likely to develop colon cancer than are women who eat it only a few times a month. 23
Risks also can be described as a loss in life expectancy; we can say, for example, that smoking reduces the life expectancy of a male smoker by 6.2 years compared to a nonsmoker.
Two issues are worth noting here. First, agency estimates of health risks tend to be worst-case or upper bound (the upper limit to what is statistically possible). This means that the actual risk is unlikely to be greater and is probably less than the estimates given by agencies. When FDA estimated cancer cases due to saccharin some years ago, it gave the number of expected cases as 1,200. In fact, the analysis gave a range of 0-1,200, but only the upper bound was generally cited in the media accounts. Second, individual and population risks should be seen together. Assume that benzene from a petroleum refinery poses an individual cancer risk of 4 x 10⁻³ (4 expected cases for every 1,000 people exposed) and that residues of a pesticide on apples pose a cancer risk of 1 x 10⁻⁶ (1 for each 1,000,000 exposed). Yet assume that only 1,000 people are exposed to the benzene and most of the U.S. population to residues on apples. The first gives an estimate of 4 excess cancer cases over a lifetime, the second of 200 or more. Conversely, it would be misleading to dismiss the benzene risk because the population looks small; for the 1,000 people exposed, benzene presents a risk.
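The benzene-versus-pesticide comparison can be checked directly: expected excess cases are simply individual risk times exposed population. In the sketch below, the two individual risks are the chapter's illustrative figures; the 250 million stand-in for "most of the U.S. population" is my assumption for the example.

```python
# Expected excess cases = individual lifetime risk x exposed population.
# The benzene and pesticide risks are the chapter's illustrative figures;
# 250 million as "most of the U.S. population" is an assumed stand-in.

def expected_cases(individual_risk, population):
    return individual_risk * population

benzene_cases = expected_cases(4e-3, 1_000)          # high risk, small population
pesticide_cases = expected_cases(1e-6, 250_000_000)  # low risk, huge population

print(benzene_cases)    # 4.0
print(pesticide_cases)  # 250.0
```

The pesticide produces far more expected cases overall, yet each exposed individual's risk is four thousand times lower than the benzene risk, which is why neither measure alone tells the whole story.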
What do agencies do with these risk estimates? Are there levels of risk at which agencies will always regulate a chemical? Two useful concepts in addressing these questions are de minimis and de manifestis risk. The first refers to risks too small or trivial to require a response, sometimes described as levels that are "below regulatory concern." The second describes risks so large that they require a response from any reasonable person. An analysis of 132 federal regulatory decisions for which an agency had done a cancer risk assessment found surprising consistency in how agencies responded to cancer risk estimates. Every chemical with an individual risk level above 4 × 10⁻³ (expected cancers in more than 4 of 1,000 people exposed) was regulated. Except in one case, no chemical posing an individual lifetime risk of less than 1 × 10⁻⁶ was regulated. These defined the de manifestis and de minimis risk levels. 24
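The pattern found in those 132 decisions can be sketched as a simple decision rule. This is an illustration of the two thresholds, not an actual agency procedure:

```python
def regulatory_response(individual_lifetime_risk: float) -> str:
    """Classify a cancer risk estimate against the thresholds observed in the
    Travis et al. review of 132 federal regulatory decisions."""
    DE_MANIFESTIS = 4e-3  # above this level, every chemical was regulated
    DE_MINIMIS = 1e-6     # below this level, almost none were regulated

    if individual_lifetime_risk > DE_MANIFESTIS:
        return "regulate"
    if individual_lifetime_risk < DE_MINIMIS:
        return "below regulatory concern"
    # In between, the study found, decisions turned on cost-effectiveness.
    return "decide on cost-effectiveness"
```

For example, `regulatory_response(5e-3)` falls above the de manifestis level and returns "regulate", while `regulatory_response(1e-7)` falls below regulatory concern.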
What of risk levels that fell in between? Here the decision turned on cost-effectiveness, which is examined later in this chapter.

Uncertainty and Risk Assessment
Often risk assessment is presented as the technical or value-free side of policy. But the process is full of uncertainty. Assumptions made at many steps may influence results by orders of magnitude. 25
In a study of agency risk assessments, the National Academy of Sciences listed fifty points where agencies had to choose from among scientifically plausible options. At each point, policy views could affect the methods or assumptions and influence the outcome. 26
Consider the following assumptions that agencies usually make in assessing health risks based on animal studies: that chemicals that cause cancer in animals also do so in humans; that humans and animals are equally susceptible to the effects of substances; that there are no thresholds below which substances do not cause cancer; that human exposures will be the highest that reasonably can be expected; and that all substances cause cancer through one mechanism (genotoxicity), which leads agencies to predict high risk at low doses. Each assumption makes agency risk assessments more conservative, in that they tend to overstate the actual risks to exposed people.
As a result, two critics of government risk assessment have concluded, agencies rely on "a series of assumptions and policy choices that are designed to overstate the degree of risk posed by carcinogens." 27
These assumptions and choices have a cascading effect, according to critics; each set of assumptions increases the estimates of risk that are obtained in the steps that follow.
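The cascading effect the critics describe is multiplicative: if each step of an assessment uses a conservative estimate in place of a mid-range one, the ratios compound. A hypothetical illustration, with invented factors chosen only to show the arithmetic:

```python
# Each value is an invented ratio of the conservative estimate to a
# mid-range estimate at one step of a risk assessment.
conservative_factors = {
    "animal-to-human extrapolation": 10.0,
    "most sensitive species assumed": 5.0,
    "upper-bound dose-response model": 10.0,
    "maximum plausible exposure": 4.0,
}

combined = 1.0
for step, factor in conservative_factors.items():
    combined *= factor

# Four modestly conservative choices compound into a final estimate
# three orders of magnitude above a mid-range one.
print(combined)  # prints 2000.0
```

The individual factors here are made up, but the structural point is the source's: conservatism introduced at each step multiplies through to the final risk estimate, which is why assumptions can shift results by orders of magnitude.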
EPA's policy is illustrative. In its Unfinished Business report, the agency described the assumptions it made in evaluating cancer risks. One was that human sensitivity to a substance is as high as that of the most sensitive animal species. Another was that benign tumors should be counted as heavily as malignant ones in estimating cancer risk. A third was that the agency would rely on an upper-bound estimate (the upper limit to what is statistically possible) of risk rather than a more realistic middle-range estimate. The upper bound gives a very high estimate of likely risk; the true risk is unlikely to be higher and will probably be significantly lower. Although not "a realistic prediction of the risk," EPA observes, "it is a reasonable precaution in the absence of more knowledge of the mechanisms behind cancer." 28
So at nearly every step in a risk assessment, the policy of most agencies is to make the most protective assumption or policy choice. Agencies cope with uncertainty by being conservative in their choice of methods; if there is error, it is in the direction of overestimating, not underestimating, risk. The principle is that it is prudent public policy to assume the worst when there is uncertainty about the scientific basis of decisions about health.
Frances Lynn's study of relationships between the organizational affiliations of scientists and the assumptions they make in risk assessments sheds light on the role of values in risk assessment. In a survey of occupational health scientists in government, universities, and industry, she concluded that "there were links between political values, place of employment, and scientific beliefs." She found that "scientists employed by industry tended to be politically and socially more conservative than government and university scientists." 29
Industry scientists were more inclined to make choices or adopt assumptions that led to lower risk estimates. Government experts were most protective in their choices and assumptions; university experts were between the two. Industry experts were more skeptical of using animal tests to predict human risk and of the "no threshold" assumption for cancer; the other groups of scientists were more inclined to accept both.
My point is not to discredit accepted techniques for risk assessment. For all their limitations, they are the best tools available for evaluating the potential hazards of a variety of agents when human data are lacking. But in using the results of these studies, we should keep three points in mind. First, we cannot neatly isolate risk assessment as the purely technical, value-free side of policy. Even scientific analysis requires policy choices. Second, government risk assessments deal with uncertainty by adopting very cautious assumptions that tend to overestimate rather than underestimate potential risks. This may be a prudent policy, but it does introduce certain biases into decisions. In what they call the "perils of prudence," two critics argue that government risk assessment policies bias agencies toward regulating for cancer and away from regulating for other, more serious health risks. 30
Third, because of the uncertainty involved in risk assessment, it may make sense to use it more as a rough guide for setting priorities and comparing problems and less as an exact rule for decisions. I develop this point further in chapter 5.
At the same time, there are many reasons to think that risk assessments may not be conservative enough. They often focus on only one route of exposure for a given pollutant, not all the likely routes of exposure. Single-medium risk assessments may not account for the cumulative effects of multiple sources of exposure to, say, asbestos or lead. Different pollutants can have synergistic effects, interactions among them that make the overall risks more serious than the sum of the risks of the individual pollutants. In addition, groups in the population may be especially sensitive or vulnerable to different kinds of risks, requiring even more conservative assumptions in risk assessments. For example, in 1993, a panel of the National Academy of Sciences recommended that agencies modify their risk assessment practices to account for the special effects on children of pesticide residues on foods. 31

Notes

Note 1:
I want to clarify my use of the terms "risk analysis" and "risk assessment" in this discussion. I use risk analysis to describe a general approach to using risk in environmental decision making. I use risk assessment to describe the specific practice of studying and presenting quantitatively the harm that different substances, activities, or technologies may pose to human health or ecological resources. Risk assessment is thus a subcategory of risk analysis.

Note 2: For an intriguing look at risk and risk taking from a psychological and philosophical perspective, see Ralph Keyes, Chancing It: Why We Take Risks (Boston: Little, Brown, 1985).

Note 3: There is a large literature on the meaning of risk, estimating risk, and the risks of various kinds of activities. An excellent introduction is Edmund A. C. Crouch and Richard Wilson, Risk/Benefit Analysis (Cambridge: Ballinger, 1982). A good collection is Theodore S. Glickman and Michael Gough, eds., Readings in Risk (Washington, D.C.: Resources for the Future, 1990).

Note 4: From Richard Wilson, "Analyzing the Daily Risks of Life," Technology Review 81 (February 1979): 41-46.

Note 5: James T. Patterson, The Dread Disease: Cancer and Modern American Culture (Cambridge: Harvard University Press, 1987), 12.

Note 6: U.S. Environmental Protection Agency, Science Advisory Board, Reducing Risk: Setting Priorities and Strategies for Environmental Protection (Washington, D.C., September 1990), 9. Reducing Risk comprises four volumes: the volume cited here, which contains the final report of the Relative Risk Reduction Strategies Committee of EPA's Science Advisory Board, and individual volumes for the reports from the committee's Strategic Options, Ecology and Welfare, and Human Health subcommittees.

Note 7: Among the most useful works on risk perception are the following: Mary E. Douglas, Risk Acceptability According to the Social Sciences (Beverly Hills, Calif.: Sage, 1985); Baruch Fischhoff, Paul Slovic, and Sarah Lichtenstein, "Lay Fables and Expert Foibles in Judgments About Risk," American Statistician 36 (1982): 240-255; Mary Douglas and Aaron Wildavsky, Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers (Berkeley, Los Angeles, and London: University of California Press, 1982); Paul Slovic, "Perception of Risk," Science 236 (1987): 280-285; and Steve Rayner and Robin Cantor, "How Fair Is Safe Enough? The Cultural Approach to Societal Technology Choice," Risk Analysis 7 (1987): 3-9.

Note 8: Paul Slovic, Baruch Fischhoff, and Sarah Lichtenstein, "Rating the Risks," in Glickman and Gough, Readings in Risk, 61-74.

Note 9: For an excellent discussion of the waste issue, see Susan G. Hadden, "Public Perception of Hazardous Waste," Risk Analysis 11 (March 1991): 47-57.

Note 10: J. D. Robinson, M. D. Higgins, and P. K. Bolyard, "Assessing Environmental Impacts on Health: A Role for Behavioral Science," Environmental Impact Assessment Review 4 (1983): 41-53.

Note 11: Fiorino, "Technical and Democratic Values in Risk Analysis."

Note 12: On waste facility siting, a good discussion is chap. 7 in Mazmanian and Morell, Beyond Superfailure.

Note 13: For a critique of risk assessment along these lines, see K. S. Shrader-Frechette, Risk Analysis and Scientific Method: Methodological and Ethical Problems with Evaluating Societal Hazards (Boston: D. Reidel, 1985).

Note 14: But not to the people who are exposed, for whom population density is not the issue. Small numbers of people may face serious risk that a quantitative analysis may not fully reflect.

Note 15: John J. Cohrssen and Vincent T. Covello, Risk Analysis: A Guide to Principles and Methods for Analyzing Health and Environmental Risks (Washington, D.C.: Council on Environmental Quality, 1989), 27.

Note 16: See the summary in U.S. Environmental Protection Agency, Unfinished Business, Report of the Cancer Risk Work Group (Washington, D.C., February 1987), B-6 and B-7. For a more detailed discussion, see EPA's "Final Rule for Radon-222 Emissions from Licensed Uranium Mill Tailings: Background Information Document," EPA 520/1-86-009 (August 1986).

Note 17: Discussed in White et al., "A Quantitative Estimate of Leukemia Mortality Associated with Occupational Exposure to Benzene," Risk Analysis 2 (September 1982): 195-204.

Note 18: Philip H. Abelson, "Radon Today: The Role of Flimflam in Public Policy," Regulation (Fall 1991): 95-100.

Note 19: On both tests and their uses, see Cohrssen and Covello, Risk Analysis, 44-48.

Note 20: Recent evidence suggests that at the huge doses given in animal tests, many chemicals destroy cells or chronically irritate tender tissues, causing other cells to divide to replace the lost ones. Dividing cells are more likely to experience the changes in genetic material that lead to cancer. So it may be the excessive cell division caused by the high doses, rather than the chemicals themselves, that produces the cancers. Gina Kolata, "Scientists Question Methods Used in Animal Cancer Tests," New York Times, August 31, 1991, A1.

Note 21: On the use of models in multimedia exposure assessments, with municipal waste incinerators as an example, see Jeffrey B. Stevens and Deborah Swackhamer, "Environmental Pollution: A Multimedia Approach to Modeling Human Exposure," Environmental Science and Technology 23, no. 10 (1989): 1180-1186.

Note 22: U.S. Environmental Protection Agency, Environmental Equity: Reducing Risk for All Communities (Washington, D.C., June 1992). The findings on differential risks are summarized in Vol. 1, Work Group Report to the Administrator. Also see Ken Sexton and Yolanda Banks Anderson, "Equity and Environmental Health: Research Issues and Needs," Toxicology and Industrial Health 9, special issue (September/October 1993).

Note 23: Susan Okie, "Colon Cancer Risk, Red Meat Linked," Washington Post, December 13, 1990, A3.

Note 24: Curtis C. Travis, Samantha A. Richter, Edmund A. C. Crouch, Richard Wilson, and Ernest D. Klema, "Cancer Risk Management: A Review of 132 Federal Regulatory Decisions," Environmental Science and Technology 21, no. 5 (1987): 415-420. Most of the decisions were EPA's, but the set also included FDA, OSHA, and CPSC decisions.

Note 25: For a critique of the argument that risk assessment is a "neutral" scientific process that can be separated from "policy" decisions on what agencies should do to reduce risk, see Alice S. Whittemore, "Facts and Values in Risk Analysis for Environmental Toxicants," Risk Analysis 3, no. 1 (1983): 23-33.

Note 26: National Academy of Sciences, Risk Assessment in the Federal Government: Managing the Process (Washington, D.C.: National Academy Press, 1983). The NAS study also introduced the distinction between the processes of risk assessment and risk management. I do not use that distinction here, because it tends to exaggerate the differences between the scientific and policy aspects of decision making.

Note 27: Joseph Rodricks and Michael R. Taylor, "Application of Risk Assessment to Food Safety Decision Making," in Glickman and Gough, Readings in Risk, 143-153, on 150.

Note 28: U.S. Environmental Protection Agency, Unfinished Business, Report of the Cancer Risk Work Group, A-2. For more detail on EPA's approach, see its "Guidelines for Carcinogenic Risk Assessment," 51 Federal Register 33992 (September 24, 1986).

Note 29: Frances M. Lynn, "The Interplay of Science and Values in Assessing and Regulating Environmental Risks," Science, Technology, and Human Values 11 (Spring 1986): 40-50, on 41.

Note 30: Albert L. Nichols and Richard J. Zeckhauser, "The Perils of Prudence: How Conservative Risk Assessments Distort Regulation," Regulation (November/December 1986): 13-24. For OMB's criticisms, see Executive Office of the President, Regulatory Program of the United States Government, April 1, 1990-March 31, 1991 (Washington, D.C., 1990), 13-26.

Note 31: National Academy of Sciences, Pesticides in the Diets of Infants and Children (Washington, D.C.: National Academy Press, 1993).