Assessing the risks
April [?] 1986
Illinois Issues introduction: Risk is an element of today's lifestyle. In the realm of toxics, little is known absolutely about the risks to public health and to an individual's health. Nevertheless, these risks must be and are assessed. In this second article on toxics, the author explains the theories, methods and applications of assessing risks from toxic substances. There are shortcomings to these assessments. There are enormous costs associated with eliminating risks from toxics. And there is confusion among the public.
This article is one of a five-part series on toxics.
During his most recent tenure as director of the U.S. Environmental Protection Agency, William D. Ruckelshaus described in a speech the process by which his agency and kindred others assess the risks to human health posed by environmental pollutants. Formal risk assessment, he said, is "the orderly exposition of the values we hold, and the reasoning that travels from some set of values and measurements to a decision" to regulate or not regulate. Critics of such procedures have been less kind, if more brief, often using words like "voodoo" to describe such risk assessments — to which risk assessors might reply that even voodoo can be improved by being made orderly.
Risk to human health from toxic substances in the environment exists, and it must be measured to be dealt with. Risk assessments are used to justify regulation, to set clean-up priorities, to guide research, even to help anxious consumers make more informed choices. The methods used to do it are ingenious, inadequate and probably unavoidable. Predictions of risk, be they from cancer or lightning bolts, are usually inferences about the future drawn from the past. When cause and effect (risk and consequence) are knowable and the record of past consequences sizeable, such predictions are straightforward enough. (The person who invented actuarial insurance deserves credit for his marketing sense, not his math skills.)
Estimating the risk from more exotic hazards such as toxic pollutants is less a science than an art—or science fiction—because these are risks whose causes are not mathematically describable to a certainty and whose occurrences are too rare to leave analysts with much comparable data. For example, insurance firms can predict the odds of a house fire to a profitable fineness based on the known number of houses in a given area, the reported number of fires over time and the official causes of the fires. They can predict how many houses will burn of the aggregate sample, but not which houses will burn—that would require a crystal ball instead of a computer.
Imagine, however, houses that burn without apparent cause, or that refuse to catch fire even when touched by smoldering cigarettes or electrical sparks, or that smolder for 70 years before producing an open flame, or whose flames extinguish themselves mysteriously for lack of sustenance, or whose flames were found to be touched off by a spark in one room and tinder in another. Such are the kinds of conundrums faced by assessors of toxic risk. The cause and effect of toxic diseases are not generally describable mathematically with much certainty, and their occurrences are too rare or their course too ambiguous to support conventional statistical analysis.
Instead of hard data, the process depends on estimation, analogy, extrapolation. Different procedures have been developed, but they all seek to combine and correlate disparate data from human epidemiological studies, animal tests, computer simulations of metabolic pathways, field monitoring results and the like to yield quantitative estimates of the likelihood of Exposure A to Substance B producing Condition C.
It used to be simpler. Although the vocabulary is rather new, risk reduction has always been the aim of environmental regulation going back to the pioneer public health reforms early in this century. The risks then were obvious—people were dying; and eventually, the causes became obvious. Once identified, risks from such diseases as typhoid fever were simply eliminated by the use of regulation (closure of unsafe wells, for example) and technology (chlorinated public water supply systems).
The success of these early risk reduction efforts can be measured not just in mortality statistics (water-borne communicable diseases no longer appear among the top ten causes of death in Illinois) but in the fact that succeeding generations of Illinoisans scarcely recognize the names of the diseases their great-grandparents feared the most.
Toxics have renewed the link between risk and regulation, and again blurred the lines separating environmental protection and public health programs. Cancer is now the No. 2 killer in Illinois (after heart disease), accounting in 1984 for nearly 23,000 of nearly 101,000 deaths. And most cancers, it is suspected, have environmental origins. Note that "environment" in the disease-hunter's lexicon includes the workplace and home as well as the ambient environment. Knowledge of the origin of nonfatal but debilitating conditions such as birth defects is scant, but it seems reasonable to suspect exposure to environmental toxics as a contributing factor in many cases even if such exposure is not unambiguously a cause in all such cases.
It seems unlikely that the health risks from toxics will be eliminated as tidily as were their bacterial predecessors. Toxics are too diffuse, too persistent, too mobile. Dependable technologies to scrub the state clean do not exist, nor could they reach into such realms as groundwater supplies where toxics have penetrated. Achieving the goal of zero-risk from this toxic plague would in any event be enormously expensive, arguably ruinous, doable only at the cost of draining public purses or bankrupting private ones. Thus the agenda for the 1980s was set: What can't be eliminated must be ameliorated.
Life is a risky business. The odds that one will be ambushed by heart attack, stroke or cancer are shrinking but still high; the chances of being crumpled in a car crash, shot in a squabble or struck by lightning on a golf course are small but still real. Death is the ultimate risk. Three of every ten Illinoisans who died in 1984 died before their 65th birthday, when death begins to be less a risk than a response to age; but the chances of traversing one's mortal span without suffering some disabling accident or disease are so slim as to be negligible.
Risk assessments as carried out by environmental and health agencies are more formal and more rational versions (if no less biased in their ways) of a similar process by which everyone balances risk against reward in everyday life. The terminology of risk assessment varies from the private sphere to the public, as does the rigor and explicitness with which the process is carried out. Even so, these basic steps are common to both:
Risk analysis The process by which a risk is identified and its potential for harm measured. In the case of cancer, which is the most studied toxic disease, that potential is typically expressed quantitatively as "y" (a certain number of cancers) per "x" (10,000 or 100,000 or one million population) or as an aggregate number of cancers nationwide.
Risk assessment The quantified risk considered in the context of costs, more specifically the balance between the costs the risk imposes on vulnerable populations and the costs imposed by reducing it. "Costs" in such cases do not necessarily include just economic costs but can also include social and political costs. At this stage, formal risk assessment ceases to be the sole province of technicians and becomes an administrative—meaning political— process. The fact that the term "risk assessment" is sometimes used to describe the more limited "risk analysis" confuses a process that doesn't need it.
Risk avoidance The result, in theory, of a risk assessment that concludes that the costs of enduring a risk are too much greater than those of curing it. In the public realm, avoidance can be accomplished by restrictive regulation, remedial cleanup, bans, etc.
Risk mitigation Risk avoidance in pieces, or minimizing the effects of risk that can't be entirely avoided. Regulatory decisions about toxics are often mixtures of both avoidance and mitigation. The phased-in ban on asbestos proposed in January by the U.S. Environmental Protection Agency (USEPA), for example, would avoid future risk from exposure to the substance by banning outright the use of asbestos in new products. It would mitigate the risk from the millions of tons of asbestos already at large in everything from auto brakes to pipe insulation by the regulation of its removal and disposal. The proposed ban would be fully in effect in ten years. The fact that the agency's program seeks as assiduously to mitigate the risk faced by the asbestos industry and its corporate customers discredits a process which doesn't need it.
Risk management Speaking generally, the use of regulations, standards, enforcement procedures, inspection, compensation, education and so on to accomplish the avoidance and mitigation of risk. The actual reduction of risk thus achieved is seldom as comprehensive as governments suggest, or as strict as environmentalists outside government think the problems deserve. The very term "risk management" excites some of those environmentalists to anger; they argue that it is the responsibility of government to eliminate risk when appropriate, not just manage it.
Weaknesses of methods
Of the several steps between risk and regulation, risk analysis is both the most crucial and the most vulnerable to error. Last fall, a top policy official of the USEPA confessed to a conference audience that the agency's risk analysis methods were not strong enough to support regulation.
Their weaknesses stem mainly from ignorance about the mechanisms of most toxic diseases, especially those resulting from low-dose, chronic exposures. Little is known about toxic effects at these levels, and much of what is known is surmised from tests on animals. Knowledge about the role of the human immune system, as well as the extent of human genetic predisposition to harm from certain toxics, is primitive. Toxics exist in most environments from city street corners to the family kitchen in combinations numbering in the dozens; near dump sites they may number in the hundreds, even thousands. Toxics differ in potency by as much as a million-fold, and they differ in amount by factors nearly as large; dioxin is roughly 10,000 times more toxic than cyanide molecule for molecule, but while the dioxin loose in Illinois may be measured in pounds, the cyanide around the state must be measured in the millions of pounds. The age of the vulnerable population, the size of the dose received, the duration of exposure, the route by which a suspect agent enters the body—all complicate risk estimates.
In the case of a few toxics, masses of epidemiological data record the relationship between human exposure and disease. Tobacco use and radiation are chief among them, along with a few studies of occupational diseases. Risk assessors can use these piles of data to jump to the effect of exposure to a toxic without hypothesizing about cause. (It is known that heavy cigarette smoking will lead to fatal lung cancer in one of five males; how it is caused is not known, but the epidemiologist doesn't need to know.) In a few other cases the disease "signature" of a given toxic is so eccentric that risk may be confidently attributed. (The obvious example is the pleural mesothelioma associated almost exclusively with the inhalation of asbestos fibers.)
For thousands of other toxics, however, such comprehensive evidence of harm is lacking. Risk analysts thus depend on the controversial extrapolation of data regarding dose, body weight and tissue sensitivity from test animals to humans. (Alluding to one of the famous controversies regarding the applicability of animal tests, one staffer of the Illinois Environmental Protection Agency (IEPA) remarks with a laugh, "There's one breed of mouse that gets liver cancer every time you take it out of its cage.")
Are risk assessments thus built of bricks made without straw? Janice Perino is the manager of the IEPA's Office of Chemical Safety, and she thinks not, at least not always. "The quantity and quality of nonhuman data on many of these chemicals is good," Perino insists, noting that research is hampered by the lack of a huge and heterogeneous sample of humans who, as cigarette smokers have done, voluntarily expose themselves over periods of decades to suspected carcinogens. "We probably can do a relatively good job of extrapolating from animals to humans. We at least know enough to say, 'We have a problem chemical here.' You don't have to blow it in people's faces to find out."
Establishing that a substance is toxic is one thing. Estimating how toxic, and to whom, is something else. What isn't known must be estimated. "At all stages of the procedure there are a lot of assumptions that must be made," Perino explains. "At each stage, it is possible to put statistical limits on your estimates."
Two and two adds up to four using risk analysis arithmetic, in short, but it can also add up to one, or six. Scott Koenk, environmental toxicologist for the IEPA, offers a simplified instance of analyzing risk from an airborne toxic such as benzene. "The average adult breathes roughly 20 cubic meters of air a day. That's a good number, derived from actual measurements. From that point on it gets tricky. If we say that the toxic is present in the air at concentrations of one microgram per cubic meter, then obviously 20 micrograms of benzene a day is entering that person's lungs." Here the assumptions come into play. Is the concentration of the toxic in the air one microgram at the point it enters those lungs? The concentration of airborne toxics varies with winds, with the time of day and with distance from the source (some degrade chemically under ultraviolet light). Body chemistry also is an issue. "How much of that 20 micrograms is actually entering the bloodstream?" Koenk poses. "We make the most conservative assumption, which is that 100 percent of it is. Of course, not all of it is entering the bloodstream. Some of it is being expelled with each breath."
Making the most conservative of the possible assumptions at each stage of such calculations sacrifices accuracy for safety. "People can shoot holes in these worst-cases numbers pretty quickly," Koenk adds. In the summer of 1985, the White House's Council on Environmental Quality proposed that federal agencies be relieved of the statutory obligation to prepare worst-case environmental impact analyses because they were usually based more on "conjecture" than "credible science."
Koenk continues, "As a result, we also go back and ask, in effect, 'What really happened?' " A second estimate is made using an absorption percentage that is assumed to reflect more closely the actual chemical transactions taking place in the lungs of the hypothetical exposee. "Thus we get a real-world, or most-probable case number." Similar calculations are made for other relevant factors.
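Koenk's two-pass estimate can be sketched in a few lines of arithmetic. The breathing rate and air concentration are the figures he cites above; the 50 percent absorption fraction is a hypothetical stand-in, since the article does not report the IEPA's actual "real-world" figure.

```python
# Two-pass dose estimate, following the IEPA example above.
# Only the breathing rate (20 m^3/day) and the concentration
# (1 microgram/m^3) come from the text; the 0.5 absorption
# fraction is a hypothetical illustration.

BREATHING_RATE = 20.0   # cubic meters of air breathed per day
CONCENTRATION = 1.0     # micrograms of benzene per cubic meter

def daily_dose(absorption_fraction):
    """Micrograms of the toxic entering the bloodstream each day."""
    inhaled = BREATHING_RATE * CONCENTRATION
    return inhaled * absorption_fraction

worst_case = daily_dose(1.0)     # assume all of it is absorbed
most_probable = daily_dose(0.5)  # assumed "real-world" absorption

print(worst_case)      # 20.0 micrograms per day
print(most_probable)   # 10.0 micrograms per day
```

The worst-case number is the regulator's safety margin; the most-probable number is the "what really happened" estimate Koenk describes.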
In the absence of unequivocal data, risk is expressed in terms of a range that encompasses both the worst-case and the most-probable case estimates. Depending on how much isn't known, such ranges can be quite wide. The official maximum exposure to formaldehyde allowed under regulations of the federal Occupational Safety and Health Administration is estimated to subject workers to a risk of between 0.7 and 6.2 cancers per 10,000 workers over a 70-year "lifetime" of exposure. Former USEPA director William D. Ruckelshaus was fond of citing the National Academy of Sciences study that estimated between 0.22 and more than a million additional bladder cancers would result nationwide from people drinking 120 milligrams per day of the artificial sweetener saccharin for 70 years.
In those cases in which enough reliable data exist to support risk estimates, they are stated less equivocally. But here again the opportunity for confusion exists. Risk estimates for cancer, for example, are properly expressed not in terms of absolute risk, but in terms of the risk incurred over and above the routine or background risk of contracting the disease that everyone faces. Risk is thus expressed in terms of cancers in excess of these background levels.
The background level of all cancers among all U.S. citizens who live to old age is about three out of every ten. Background levels of specific cancers among younger adults, especially cancers thought to result from low-dose chronic exposures to toxics, are much lower. A substance that poses a risk of one excess cancer per million over a background that itself may include only one cancer per 100,000 thus constitutes a pretty small risk.
The distinction between excess risk and absolute risk is often lost in reporting about toxic diseases such as cancer. Indeed, it sometimes isn't made by the people making the estimates. There is not enough data about the cancer-causing tendencies of many more exotic toxics at low doses to define a background level. (Much of the available toxicity data comes from occupational exposures which often are at acute levels.) In estimating risk from a specific site—say, a petrochemical plant—the background risk may be said to be zero since the population at large does not live next door to that particular plant; background risk and excess risk for the neighborhood are essentially the same.
Besides, explains IEPA toxicologist Koenk, background estimates also are prey to error. The accepted background level of risk from benzene may well be understated; as Koenk puts it, that level does not allow for the fact that "most of us get a pretty good shot of the stuff at the gas station while filling our cars up with gasoline."
The conscientious scientist will be content to express risk in terms of wide ranges, but regulators are often obliged to be more specific. IEPA director Richard Carlson cites the problem of deciding where on a given spectrum of risk it is appropriate to establish a standard. "In a way it makes no difference," he says, "because we don't know what difference it will make." Worst-case estimates are if anything less substantiable than real-world ones, but the adoption of sunnier ones in the absence of proof of their validity does risk exposing vulnerable populations to some vague but still disquieting risk. At the same time, setting control standards using a too cautious estimate of risk has its dangers, too, in higher consumer costs and possibly lost jobs—costs which may be as substantial and certainly are more immediately felt than those that the controls seek to avoid.
Compared to the risks encountered in other spheres of life, the absolute risk of disease from any one toxic pollutant is pretty slight with the necessary exception of certain acute occupational and accidental exposures. Ethylene oxide, 1,3-butadiene, and ethylene dichloride are three airborne toxics officially classed as probable human carcinogens but whose emissions are not yet regulated by the USEPA under the Clean Air Act. Agency estimates of the number of excess cancers nationwide caused by each per year are 58, 19 and 3, respectively.
Workplace exposures to toxics tend to be much higher. Asbestos is a known killer, accounting for anywhere from 3,300 to 12,000 deaths a year in the U.S. (Some asbestos-caused deaths are inaccurately attributed to smoking.) Yet only one asbestos worker in 15 will die of his exposure. Such results are catastrophic for the individual but hardly noticed in a nation that tolerates 50,000 deaths a year in auto accidents.
Regulators are among the few people (epidemiologists and actuarialists are others) who consider toxic risk in terms of aggregate populations. Toxics and the risks they pose tend to be concentrated locally. Toxic risk is properly visualized not as a fog settling evenly upon the U.S. but as a series of blips on a radar screen, blips which correspond roughly with the location of dump sites, petrochemical plants and other toxic "hot spots."
For example, one major national firm has reported that air emissions of 1,3-butadiene from its chemical plant in Ottawa, Ill., total 279,000 pounds a year. Such emissions may be a modest part of the national pollutant load, but locally they can be quite significant. So, apparently, can the resulting risks of disease. Context is crucial: If a stranger in the next county gets cancer, it's a problem; when a neighbor takes ill, that's an epidemic; when you get it, it's a crisis.
The perception of risk and thus the politics of risk reduction follow from individuals' answers to one central question: Is the proper aim of regulation to protect populations or to protect people?
The relative magnitude of such risks is poorly grasped by most people. Official quantitative expressions of risk are "totally undecipherable" in the opinion of IEPA director Carlson. Casual references by official spokespeople and the press don't help. Risk is typically computed assuming continuous exposure over a hypothetical 70-year lifetime, a fact often not explained. And people worried at news that the use of Product ABC will double their chances of contracting, say, liver cancer, might be less worried if they knew that their estimated risk had merely gone from one chance in 100,000 to two chances in 100,000.
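The doubling example above can be worked out directly. The figures are the ones in the paragraph; the point is that a dramatic relative increase can amount to a tiny absolute one.

```python
# Relative vs. absolute risk, using the figures above.
background = 1 / 100_000     # baseline lifetime chance of the cancer
relative_increase = 2.0      # "Product ABC doubles your risk"

new_risk = background * relative_increase
added_risk = new_risk - background

print(f"{new_risk:.6f}")     # 0.000020 -- two chances in 100,000
print(f"{added_risk:.6f}")   # 0.000010 -- one extra chance in 100,000
```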
The risks we choose to subject ourselves to, in fact, are often much graver than those we encounter at random in even a heavily industrialized environment. The Food and Drug Administration calculates that sport fishermen who regularly eat Lake Michigan fish contaminated by dioxin in concentrations many times the recommended "safe" level stand a one in 10,000 chance of cancer over a lifetime. In contrast, one driver in a hundred will die in a lifetime of driving automobiles, suggesting that motoring to and from Lake Michigan to go fishing may be one hundred times riskier than eating the tainted fish that were caught.
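The fisherman's comparison above reduces to a single division; both lifetime-risk figures are taken from the paragraph itself.

```python
# Lifetime risk of eating the tainted fish vs. driving to catch them.
fish_risk = 1 / 10_000    # FDA estimate for regular Lake Michigan fish eaters
driving_risk = 1 / 100    # lifetime chance of dying in a car crash

ratio = driving_risk / fish_risk
print(f"{ratio:.0f}")     # 100 -- driving is about 100 times riskier
```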
So provisional are the truths revealed by formal toxics risk assessments that many in the environmental community find them practically useless and politically suspect. At best they are, as toxicologist Clint Mudgett of the Illinois Department of Public Health concedes, "pretty much a matter of probabilities." Robert Ginsburg, head of research for the Chicago-based Citizens for a Better Environment (CBE), is more emphatic. "I don't support calculations of quantified risk assessment," he says. "That's black magic. There's too much subjectivity in that process." Much the same complaint, made from a much different perspective, comes from the present USEPA director, Lee Thomas, who has warned against over-reliance on what he called "mechanistic application" of risk formulas in making decisions about what to regulate and how much.
One problem in particular buttresses the argument against formal risk assessment: Assessors simply don't know enough yet. Science advisers to the USEPA, for instance, recently recommended to the agency that existing "safe" exposure levels for the radioactive gas radon were too high, and that estimates of the number of cancer deaths were based on occupational exposure data that neither reflect the intensity of radon exposure in contaminated homes nor allow for the relatively greater risk endured by children in such homes.
Risk can be overstated on the basis of incomplete or inappropriate data as well as understated, of course; questions have been raised as to whether risk to humans from dioxin may have been exaggerated from dioxin's demonstrated toxic effects on mice. But if errors are to be made, critics insist, they should be made on the side of caution. Rather than reducing exposures to mathematically "safe" levels, the goal should be reducing them to the ultimate lowest levels that can be achieved by available technologies.
Assessing the health effects of even a single toxic chemical requires levels of information that are, in the judgment of Gilbert Zemansky, science chief for the Illinois Pollution Control Board, "horrendous." Assessing the combined effect of dozens—some of which may have synergistic effects—is that much more daunting.
The danger of multiple exposures was the subject of a petition for regulatory action filed with the USEPA in 1984 by CBE and Irondalers Against the Chemical Threat, a neighborhood group. Partial monitoring by the IEPA and the Department of Public Health of the Lake Calumet area on Chicago's southeast side had revealed concentrations of 39 different pesticides, heavy metals and organic compounds in the air, water and soil. The petition asserted that existing regulatory approaches take insufficient account of the additive or synergistic effects of exposures to multiple toxics via multiple media, and that official estimates of risk to nearby residents were understated as a result. Dan D'Auban of the IEPA's air pollution control division, who worked on that study, insists that no such assessment could be made because of lack of data. He points to New York's Love Canal as an example. The dump site there contained some 400 organic chemicals when it became the center of controversy; at the time reliable cancer data existed for 15 of them, and no toxicological data existed for 200 of them.
Not even the people who use quantitative risk assessments deny the substance of these complaints about their shortcomings. They acknowledge the faults, but add that they often are obliged by statute to make a judgment. "One way or another you have to do an assessment," explains Zemansky. The choice is between doing formal ones and informal ones. "The controversy is partly a semantics problem," adds Zemansky. The result, either way, is confusion for the public.
Depending on differences in data or differences in statutes, risks of different magnitude may trigger regulatory response, which may add to the public's confusion. Some regulatory decisions may be conditioned by economic factors; others are not—such as product bans required under the very restrictive Delaney clause of the federal Food and Drug Act. Finally, a public not privy to the qualifications and conditions that are tied to risk assessments tends to give them too much weight. "I cringe when I hear people say that we can expect X number of cancers from this chemical or that one," says toxicologist Koenk.
One initiative of the Ruckelshaus administration was to make the USEPA's risk assessment procedures more explicit, so that the public might have a better understanding of the decisions being made in its behalf. The resultant airing of values, both scientific and social, that shape such assessments may be more admirable than efficient. IEPA staff have found that the process in selected cases literally requires sitting down at the kitchen table with people to explain risk assessment.
The problem, as identified by Roger Kasperson, a risk expert at Clark University, is that there is little reason to expect a consensus among individuals about what constitutes tolerable risk. "The current tendency is to set the standard at the level deemed appropriate by the expert," Kasperson writes in a 1983 paper, with an adjustment to reflect what has been called "the squawk factor," in the hope of thus satisfying both science and politics simultaneously. The process seldom satisfies either, concludes Kasperson, "leaving the public distrustful of the expert and the expert convinced of the public's irrationality."
Official standards for minimum acceptable risk, when set by relying on largely implicit social judgments, are more tenuous than the risk estimates that undergird them. They are based on available science, but they remain nevertheless essentially arbitrary. To err in the direction of an almost extravagant caution was the traditional approach. The maximum allowable "safe" concentrations of suspect carcinogens in drinking water were set by federal regulators so that the public is exposed to no more than one chance of an excess cancer in a million from each of the suspected carcinogens. These extremely cautious "safe" concentrations were subsequently incorporated as baseline standards for other programs.
Carried to these extremes, such prudence has proven to have enormous costs. The official commitment to protecting the public from environmental health risks was never absolute even when, occasionally, legal language has spelled out that the commitment was. Officially acceptable levels of protection have always been constrained by political, technological, and economic limits, but affordability has become more crucial in toxics regulation. The ambitious one-in-a-million-chance maximum risk for carcinogens remains the standard for certain broad-based toxics regulations, such as those for drinking water. But in other instances the Reagan administration has shifted the issue from what is achievable to what is affordable. In a typical recent case in which a population at risk (from exposure to radioactive dust) was very small and the costs of controls very high, an official USEPA estimate of excess cancer risk as high as one in 1,000 did not lead to the imposition of corrective controls.
Critics outside the agencies are especially uneasy about the standard-setting aspects of toxic risk assessment. They argue that the responsibility for setting standards of acceptable risk does not belong in the hands of regulators. The CBE's Ginsburg is a prominent critic of IEPA's toxics policies. "The idea that some government agency can establish acceptable risk," Ginsburg insists, "is not real world."
When so little is knowable, everybody can be right. Everybody can be wrong, too. Perino makes the point with a story. A mathematician and a philosopher, both men, are placed equal distances from an enticing woman. Each man is allowed to move toward the woman in steps no longer than one-half the distance remaining between him and her. Which man gets there first? "The mathematician never gets there," Perino explains, "and the philosopher says it doesn't make any difference." ●
Cancer: What cause? What risk?
"Cancer," explains a staffer at the Illinois Environmental Protection Agency (IEPA) in the deadpan jargon characteristic of the bureaucrat, "is a serious end point." Of the many assaults on the human body of which toxic substances are capable, none is as feared as cancer. It is our century's plague, dreaded in its effects and sinister in its course; even when it can be cured, its victims often are left scarred physically, emotionally and financially.
Each year roughly 23,000 Illinoisans die of malignant neoplasms; those deaths account for nearly a quarter of all deaths in the state. Cancer, a conveniently general term for many different diseases, is the No. 2 killer in Illinois, second only to heart disease. At the turn of the century cancer ranked at or near the bottom of the list of top 10 killers, well below such familiar scourges as whooping cough, diphtheria and smallpox, a fact which may be misleading. An irony in which the public health community may take scant comfort is that the rise in cancer deaths may be attributed in part to its success in eradicating "traditional" diseases, especially killers of infants and children. Improved sanitation, new vaccines and antibiotics alone have made it possible for tens of thousands of Illinoisans to live long enough to develop cancers which members of their great-grandparents' generation carried with them, undetected, to early graves.
There are few topics about which people are less willing to be reasonable than cancer. Richard Carlson, IEPA director, notes, "The most politically troublesome issue is the regulation of carcinogens." Ira Markwood, former manager of the agency's public water supplies division, suggests one of the reasons in a 1983 professional journal: "The whole question of carcinogenicity is wrapped with emotion, so that it is not possible to stand back and take a completely logical look."
It would be a foolish expert who would describe his own look at cancer as completely logical. Compounding the problem is that experts and an anxious public tend to talk about cancer in slightly different languages. Jacob Dumelle, chairman of the Illinois Pollution Control Board, has noted that cancer is simply cancer to the public but not to an epidemiologist. Some carcinogens are strong, others quite weak. Some act only in the presence of other substances as chemical co-conspirators, in effect. Others may be quite dangerous when breathed but much less so when swallowed.
Another example: When the epidemiologist describes cancer as a result of "environmental" factors, he generally includes smoking, high-fat diets and factory fumes on his environmental list. The public is tempted to construe "environmental" in terms of the ambient environment, which is certainly a source of carcinogenic mishaps but is not presently thought to be the major one. One result of this imprecision in definition is that people perceive tumors as something one catches, like the flu.
People who are skeptical or uncomprehending of science's explanation of cancer, in short, tend to evolve their own theories consistent with their understanding of the world. A respected expert such as Dr. Samuel S. Epstein of the University of Illinois Medical Center, author of The Politics of Cancer and other works on toxics, may assert that cancer is for all intents and purposes a preventable disease. Large numbers of the public, however, persist in seeing cancer as a matter of chance. Actually, both views may be correct. Cancers are thought to be triggered by environmental insults, but recent research suggests that individuals' vulnerability to those insults may depend on specific genetic predispositions; while such predisposition sometimes shows itself in broad terms among members of ethnic or other groups, it is still essentially unpredictable.
To concur with the cancer-is-preventable thesis is to acknowledge the numbers of cancers attributable to personal habits and to admit that one has a personal responsibility for cancer prevention. Much of the public, however, would rather leave the responsibility to the government. They thus achieve a comforting absolution of personal responsibility for cancer risk, but only at the cost of magnifying the dread with which they go about their business in the broader world.
Presently full-scale risk assessments for cancer have been performed on only a small number of the roughly 70,000 chemicals in commercial use in the United States. Assessments are usually undertaken on substances already assumed to have carcinogenic potential. Of those 70,000 chemicals, only about 95 (83 industrial chemicals and another 12 pesticides) in general use are officially classed by the U.S. Department of Health and Human Services as carcinogenic. Janice Perino, manager of IEPA's Office of Chemical Safety, says, "I am concerned when people throw up their hands and say, 'It's useless. Everything causes cancer.'"
That list of known cancer-causers is surely incomplete, but then so is everything else that is "known" about cancer. Consider one of the most basic unanswered questions, namely the relationship between dose and disease. Research suggests, but has not proven, that there is a direct and regular relationship between the amount of a carcinogen (the dose) a mammal is exposed to and the body's response (the initiation of a tumor). This linear dose-response relationship apparently has no lower limit. That means that a dose of any size, down to a hypothetical single molecule, can initiate a tumor; put another way, there is no "threshold" amount below which a dose may be considered harmless.
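The no-threshold assumption has a simple arithmetic consequence: estimated risk is a slope (a "potency" factor) times dose, so halving the dose halves the risk but never eliminates it. A minimal sketch; the potency value here is invented for illustration, not a measured figure:

```python
# Linear no-threshold extrapolation: risk scales directly with dose,
# and only a zero dose yields zero risk. The potency (slope) below is
# a hypothetical illustration, not a measured value for any substance.

POTENCY = 1e-3  # hypothetical excess lifetime risk per unit of dose

def lifetime_risk(dose: float) -> float:
    """Estimated excess lifetime risk under the linear no-threshold model."""
    return POTENCY * dose

# Halving the dose halves the risk, but no positive dose is risk-free:
risks = [lifetime_risk(d) for d in (1.0, 0.5, 0.25, 0.0)]
```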
The no-limit linear dose-response relationship is consistent with the so-called "one hit" model of cancer causation, in which one toxic molecule acting on a single molecule of DNA is enough to cause cellular havoc. The only safe exposure level under such assumptions would be no exposure at all. "We can't prove it and we can't disprove it," says IEPA toxicologist Scott Koenk, "so we assume that no exposure to a carcinogen can be considered safe." In the jargon of both state and federal EPAs, cancer is a "nonthreshold event."
Such conservatism may yet prove to be at odds with science. The one-hit model may be too simple. For one thing, the ubiquity of carcinogens in the environment would lead one to expect cancers to be as common as colds. They are not, which suggests that some preventive mechanism, perhaps the body's immune system, is at work. "Multi-hit" and "multi-stage" models for cancer causation are also in use; they take into account findings that suggest, for example, that some carcinogens are "potentiators" and some are "initiators" of tumors and that most cancers require the interaction of both to develop.
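The competing models can be written compactly. In the standard one-hit formulation, the probability of a tumor at dose d is 1 - e^(-b·d), which is approximately linear (and nonzero) at any positive dose; multistage formulations add higher-order dose terms for the several events required. A sketch with invented parameter values:

```python
import math

# One-hit model: a single effective molecular "hit" suffices, so
# P(d) = 1 - exp(-b*d). At low dose this is approximately b*d --
# linear, with no threshold. Parameter values here are illustrative.

def one_hit(dose: float, b: float = 0.01) -> float:
    return 1.0 - math.exp(-b * dose)

# Multistage model: several independent events are needed, giving
# P(d) = 1 - exp(-(q1*d + q2*d**2 + ...)). Again, hypothetical values.

def multistage(dose: float, q1: float = 0.005, q2: float = 0.002) -> float:
    return 1.0 - math.exp(-(q1 * dose + q2 * dose ** 2))

# Under either model, risk is zero only at zero dose -- in the agencies'
# jargon, cancer remains a "nonthreshold event."
```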
Until such mysteries are solved, the scientist's uncertainty will continue to feed the public's anxiety. Nervousness about cancer complicates policy. Indeed, it often shapes it. In its annual report for 1985, the Conservation Foundation noted that in many regulatory programs "the focus on carcinogens has dominated all other health and environmental concerns." Teratogenic effects of toxic exposure may be as devastating to the individual as carcinogenic effects; and mutagenic effects, because they are often inheritable, are longer lived. But a substance shown to have carcinogenic potential will draw—often must draw, because of statutory mandates—official attention. Most of the restrictions so far imposed on toxic substances derive from their presumed cancer-causing potential.
Gilbert Zemansky, head of the Pollution Control Board's science section, says, "Mutagenicity is a public health problem in its own right. But these days the mutagenicity of toxics is studied mainly because it is thought to be a screen for carcinogens." Zemansky, however, adds, "That's not as bad a problem as it could be. There are lots of mechanisms believed to be common to the processes of carcinogenicity, mutagenicity and teratogenicity."
So complex are cancer's causes, so varied its manifestations, and so tenuous the links between toxic cause and effect that it is natural for the uninitiated to conclude that the disease picks its own victims and does so haphazardly. The official view of cancer is less despairing. A significant number of the roughly 45,000 new cancers diagnosed each year in Illinois can reasonably be attributed to their victims' behavior (smoking principally, followed, somewhat more controversially, by drinking and poor diet) or to workplace exposure to certain suspect chemicals.
In spite of our bad habits, fewer than one of every four Illinois deaths is from cancer. That figure, unfortunately, may well rise as the population ages, and in the future cancer may be considered, in the phrase of Clint Mudgett of the state's Department of Public Health, a degenerative disease of old age. With a handful of conspicuous exceptions, cancer cure rates haven't improved much, even with generally earlier diagnosis.
Most of the new cancers diagnosed each year in the state are likely to remain stubbornly resistant to cure. The success of some cancer therapies is so high, however, that the doctors are confounding the epidemiologists. The cancer registry being set up by the Department of Public Health's new division of epidemiological studies will itemize each cancer, not simply each cancer death, along with information relevant to its possible cause, such as the victim's lifestyle, occupation and family history. "If you look only at mortality data for certain cancer sites such as leukemia," explains Dr. Robert Spengler, the division's chief, "you only see ten percent of the picture because leukemia has become such a curable disease. To see 100 percent of the picture, you have to study its incidence as well."
Given time, Illinois public health officials are confident that more vigilant prevention and more clever cures will mean that "cancer" will join "smallpox" and "diphtheria" among the words Illinoisans will no longer have to fear. ●