Radiation is ubiquitous, an inescapable part of life on Earth. Background radiation reaches us from outer space, from the rocks and soils we walk on, and from naturally radioactive potassium in our own bodies. Throughout Earth's entire history, its organisms have been bombarded by radiation, and this will continue for as long as the Earth exists. Today, the average person in the US is exposed to about 300 mrem each year from natural background radiation – about 1 mrem a day – and this level of radiation exposure seems to have no ill effects. Of the roughly 600 mutations that occur in each of our cells each year (about 900 in cells exposed to UV radiation), only about 5 are due to the effects of background radiation. In short, environmental radiation is a mutagen, but it is not a major source of DNA damage.
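As a quick check of that arithmetic (using only the figures quoted above), here is a short Python sketch:

```python
# Quick check of the background-dose figures quoted above.
annual_dose_mrem = 300
print(annual_dose_mrem / 365)  # about 0.8 mrem/day, i.e. "about 1 mrem a day"

# Roughly 5 of the ~600 mutations per cell per year come from background
# radiation, so radiation causes under 1% of them.
radiation_mutations, total_mutations = 5, 600
print(radiation_mutations / total_mutations)  # about 0.008, i.e. under 1%
```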
At higher levels, however, radiation can cause damage. Continual exposure to low levels of radiation may cause a mutation that can initiate cancer. Brief exposure to high levels of radiation can cause skin burns, radiation sickness, or a number of radiation-induced syndromes.
Radiation Damage to Cells
Radiation can damage cells by directly striking the DNA and causing damage such as single- or double-strand breaks or point mutations. It’s more likely, however, that the radiation will interact with molecules in the cell’s cytoplasm, splitting them apart and forming reactive molecules called free radicals. These free radicals then go on to damage the DNA. Free radicals come from more than just radiation – our mitochondria leak free radicals all the time, metabolizing our food can create free radicals, and even dissolved oxygen in our cells can cause DNA damage. With the exception of double-strand DNA breaks, damage from all of these sources is indistinguishable – we can’t “look” at a point mutation and tell whether it was caused by radiation or by mitochondrial free radicals.
When radiation passes through a cell the effects can range from non-existent to profound. There’s a chance, for example, that a gamma ray will pass right through a cell without interacting at all, or that the free radicals produced will simply recombine or be scavenged before they can reach the DNA. If radiation (or the free radicals it produces) does interact with the DNA, there are only two possibilities – either the DNA will be damaged or it won’t.
If the DNA is damaged, we have a few further possibilities – the damage may be beneficial (e.g., conferring an evolutionary advantage), harmful, or neutral (neutral damage is damage that has no effect on the cell – it may be in a non-coding part of the DNA, or in a gene that’s not expressed in that particular cell, for example). If the damage kills the cell, there’s no problem – in reality, the only way to cause problems is to have DNA damage that’s not fatal to the cell and that affects one of the handful of genes that can cause a cell to become cancerous.
However, the possibilities do not stop here, because our cells have DNA damage repair mechanisms – think of them as being like a spell checker; as long as they repair the damage properly, it’s as though the damage never occurred. Although these mechanisms are very effective, they are not perfect. This means that any bit of DNA damage may be repaired properly, repaired improperly, or not repaired at all. It is at this point that DNA damage may become a mutation – a mutation is what happens when damage to our DNA becomes “fixed” and can be passed on to the next generation of cells. As with DNA damage, mutations may be good, bad, or indifferent (neutral), and the detrimental mutations may be lethal or sublethal. And, as before, it is only the sublethal damage that’s of interest to us, and then only if it can cause the cell to become cancerous.
I’ve taken several paragraphs to describe the different possibilities when radiation interacts in a cell. Part of this is for the sake of completeness, but it’s also to help drive home an important point – radiation is a weak carcinogen. If we sum up all the possibilities above, I count over 20 different outcomes. Of these, only one (sublethal damage that is misrepaired or unrepaired and causes a cell to become cancerous) has a chance of causing cancer. Radiation is a carcinogen, but it’s not a very good one – not compared to many of the chemicals we work with.
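For those who like to see the bookkeeping laid out, here is a minimal sketch in Python of one way to enumerate the branches described above. The exact tally depends on how finely you split the branches (my bookkeeping below yields a slightly different count than the one in the text), so treat it as an illustration of the logic, not a definitive count:

```python
# One way to enumerate the fates of a cell that radiation passes through,
# following the branches described in the text. The split is my own
# bookkeeping; finer or coarser splits give different totals.

paths = []

# Branches where the DNA is never damaged:
paths += [
    ("radiation passes through without interacting",),
    ("free radicals recombine or are scavenged before reaching the DNA",),
    ("radiation or radicals reach the DNA but cause no damage",),
]

# Branches where the DNA is damaged:
for damage in ("beneficial", "neutral", "harmful"):
    # Proper repair: as though the damage never occurred.
    paths.append((damage, "repaired properly"))
    # Misrepair or no repair turns the damage into a mutation,
    # which may be lethal or sublethal to the cell.
    for repair in ("misrepaired", "unrepaired"):
        for fate in ("lethal mutation", "sublethal mutation"):
            paths.append((damage, repair, fate))

# Only harmful, sublethal, mis- or unrepaired damage can lead to cancer,
# and even then only if one of the handful of cancer genes is affected.
risky = [p for p in paths if "harmful" in p and "sublethal mutation" in p]

print(len(paths))  # 18 branches in this bookkeeping
print(len(risky))  # 2, and only a fraction of those would hit a cancer gene
```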
In the next few sections, I will talk a little more about the effects of both acute and chronic radiation exposure on the whole organism rather than on individual cells.
If we are exposed to high levels of radiation in a short period of time, we will suffer the effects of acute radiation exposure because the damage accumulates faster than it can be repaired. If the dose is to a limited part of the body, we may end up with skin burns, sometimes severe ones; there have been many instances requiring the amputation of fingers or even entire limbs. High levels of radiation exposure to the whole body can lead to radiation sickness or death.
The effects of high radiation dose to limited parts of the body may range from no observable effects (if the dose is low enough) to blistering, burns, or necrosis depending on the dose received. The effects of whole-body acute radiation exposure can be a bit more complex, and they are summarized in the following table.
| Acute whole-body dose (rads) | Effect |
|------------------------------|--------|
| 1-10 | Chromosomal changes (fragments, dicentric chromosomes, etc.) |
| 25-50 | Blood cell changes (depressed red and white cell counts) |
| 100 | Radiation sickness in about 10% of those exposed |
| ~400 | Lethal dose to 50% of the population without medical treatment |
| ~800 | Lethal dose to 50% of the population with medical treatment |
| 1000 | Lethal dose to 100% of the exposed population |
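If it helps to have the table in a form you can query, here is a small Python sketch with the thresholds taken straight from the table. The gaps between ranges and the "~" values make any such lookup approximate by nature:

```python
# Approximate lookup of acute whole-body dose effects from the table above.
# Thresholds are coarse and several ranges in the table have gaps, so this
# gives a rough characterization, not a clinical prediction.

ACUTE_EFFECTS = [
    (1000, "lethal dose to 100% of the exposed population"),
    (800,  "lethal dose to 50% of the population with medical treatment"),
    (400,  "lethal dose to 50% of the population without medical treatment"),
    (100,  "radiation sickness in about 10% of those exposed"),
    (25,   "blood cell changes (depressed red and white cell counts)"),
    (1,    "chromosomal changes (fragments, dicentric chromosomes, etc.)"),
]

def acute_effect(dose_rads: float) -> str:
    """Return the table's effect for the highest threshold the dose reaches."""
    for threshold, effect in ACUTE_EFFECTS:
        if dose_rads >= threshold:
            return effect
    return "below the lowest dose listed in the table"

print(acute_effect(350))  # radiation sickness in about 10% of those exposed
```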
The primary concern with chronic exposure to relatively low levels of radiation is that we will develop cancer. There are several competing hypotheses about how we respond to these low doses, and the question is still far from settled.
The linear, no-threshold (LNT) hypothesis suggests that all radiation exposure is potentially harmful (the “no-threshold” part), and that the risk of getting cancer from radiation is directly proportional to the dose received (the “linear” part). LNT is the most conservative radiation dose-response model in that it predicts the highest risk from a given amount of radiation exposure. This is one of the reasons that the LNT is the foundation of radiation regulations virtually everywhere in the world – since we really aren’t sure how we respond to low levels of radiation exposure, it makes sense to control dose (and risk) according to the most conservative model.
One problem with the LNT is that it can be used to predict cancer risks down to vanishingly small levels of exposure, and so it has been used to calculate expected cancer rates from exposure to radon, “dirty bombs,” and medical x-rays. For example, say that the risk of getting cancer from a given radiation exposure is 5 additional cancer deaths for every 10,000 person-rem. That means that exposing 10,000 people to 1 rem each should result in an extra 5 cancer deaths among those people. Or, exposing 1 million people to 10 mrem each should also lead to 5 added cancer deaths. It’s easy to see that we can use this model to predict added cancer deaths from any level of radiation exposure, no matter how trivial, if enough people are exposed. By analogy, we can also say that, since a 1000 kg rock will crush someone, throwing a million one-gram rocks at a million different people will crush someone.
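To make that arithmetic explicit, here is a minimal sketch in Python. The risk coefficient (5 excess cancer deaths per 10,000 person-rem) is the illustrative number from the example above, not a regulatory value:

```python
# Under LNT, predicted harm scales with collective dose (person-rem), so any
# two scenarios with the same collective dose predict the same number of
# excess cancer deaths, no matter how small each individual dose is.

RISK_PER_PERSON_REM = 5 / 10_000  # illustrative coefficient from the text

def expected_excess_deaths(people: int, dose_rem_each: float) -> float:
    """Excess cancer deaths LNT predicts for a uniformly exposed group."""
    collective_dose = people * dose_rem_each  # person-rem
    return RISK_PER_PERSON_REM * collective_dose

print(expected_excess_deaths(10_000, 1.0))      # 10,000 people at 1 rem: 5.0
print(expected_excess_deaths(1_000_000, 0.01))  # 1 million at 10 mrem: 5.0
```

Notice that nothing in the model itself can say "this dose is too small to matter."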
This doesn’t make much sense, and both the Health Physics Society and the International Commission on Radiological Protection (ICRP) have advised against this misuse of the LNT model. In fact, we just don’t know what happens at such low levels of exposure, and we can’t make any such predictions for very small exposures. According to the Health Physics Society, in two separate position papers (which can be found on the HPS web page at www.hps.org), we simply can’t calculate a numerical risk estimate from any exposure of less than 10 rem, so even the first calculation runs afoul of HPS recommendations. In a similar vein, the ICRP has suggested that, when looking at the risk from collective dose, if the most highly exposed individual receives a trivial dose, then everyone’s dose should be treated as trivial.
Virtually all harmful substances exhibit some level below which there are no apparent harmful effects. This is part of the idea behind the No Observed Adverse Effect Level (NOAEL) – below a threshold dose you simply don’t see any effects from exposure to a substance. There are those who feel that radiation probably behaves similarly – that there is a level of exposure below which there are no observable effects from radiation exposure.
There are also those who think that exposure to low levels of radiation may be beneficial. This is called hormesis and, although it sounds implausible at first blush, there are plenty of examples of hormesis in the world. Two examples are vitamin D and selenium. Both of these substances are vital nutrients, and both are acutely toxic in sufficiently high doses. Low doses of aspirin can help to stave off heart disease (not to mention the beneficial effects on fever, pain, and inflammation), yet high doses of aspirin can be fatal, and people can also die of excessive salt intake or even water intoxication. In short, the idea of hormesis is not outlandish; only the application of hormesis to radiation exposure seems unusual because we are all so steeped in the idea that radiation is uniformly bad.
The idea behind assuming a threshold in our response to radiation exposure is that, given the variations in Earth’s background radiation field, it makes sense that our cells should be able to adequately repair DNA damage from slightly elevated levels of radiation. And, let’s face it: radiation is not one of the major environmental mutagens (it accounts for about 1%-5% of background DNA damage). Our biochemistry contains very effective mechanisms for repairing DNA damage, and it is thought that these mechanisms can accommodate some level of added damage, such as would result from exposure to low levels of radiation.
The thinking behind positing hormesis effects is that low levels of radiation present a continuing challenge to our mutation repair and tumor suppression mechanisms, keeping them at peak operating efficiency. They are then better able to contend with the ordinary, garden-variety damage that is always cropping up in our genome and, as such, our DNA is better protected than if this radiation exposure were removed.
The best way to test these hypotheses, of course, is to perform epidemiological studies of exposed populations, and many such studies have been performed with equivocal results. Researchers have looked at radiation workers, residents of natural high-background areas, radon concentrations versus lung cancer rates, radiologists, and atomic bomb survivors, among others. Some studies show that risks are slightly higher, some show no effects at all, and some show fewer cancers than expected in the study populations. Part of the problem is that the effects are often smaller than the error bars, and this makes it very difficult to pick out what is actually happening. Unfortunately, there is not yet a “gold-plated” study that everyone can point to and agree that it was properly done, controlled for all confounding factors, and shows a significant result.
Given this degree of uncertainty, many health physicists and most governments feel it is best to control radiation exposure according to the highest-risk model, LNT. The thinking is that, if we maintain risks at a low and acceptable level under LNT, then whichever model is correct, we will be at no more risk than we have agreed we can accept. The only problem with this approach is that, if one of the other models better represents reality, we will have spent a lot of time, effort, and money controlling illusory risks, and these resources will have been taken away from more effective risk-reduction measures. So this question needs to be answered, and hopefully we will be able to do so before too much longer.