Kevin Drum from Mother Jones has a fascinating new
article detailing the hypothesis that exposure to lead, particularly tetraethyl lead (TEL), explains the rise and fall of violent crime rates from the 1960s through the 1990s, with the fall coming after the compound was phased out of gasoline worldwide. It's a good bit of journalism on issues of public health compared to much of what you see, but I'd like to provide a little epidemiology background to the article, because there are so many studies listed that it makes for a really good intro to the types of study designs you'll see in public health. It also illustrates the concept of confirmation bias, and why regulatory agencies seem to drag their feet when we read such compelling stories as this one.
Drum correctly notes that simply looking at the correlation shown in the graph to the right is insufficient to draw any conclusions about causality. The investigator, Rick Nevin, was simply looking at associations, and saw that the curves were strongly correlated, as you can quite clearly see. When you look at data involving large populations, such as violent crime rates, and compare them with an indirect measure of exposure to some environmental risk factor, such as levels of TEL in gasoline over the same period, the best you can say is that your alternative hypothesis of there being an association (the null hypothesis always being no association) deserves more investigation. This type of design is called a cross-sectional study, and it's well documented that associations seen at the population level do not always hold for the individuals within it. This is the ecological fallacy, and it's a serious limitation of these types of studies. Finding a causal link between an environmental risk factor and a complex behavior like violent crime, as opposed to something like a specific disease, is exceptionally difficult, and the burden of proof is very high. We need several additional tests of our hypothesis using different study designs to really turn this into a viable theory.
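To make the ecological fallacy concrete, here's a toy simulation in Python. The numbers are entirely invented (nothing here comes from lead or crime data); the point is just that a near-perfect correlation between group averages can coexist with the opposite relationship among the individuals inside those groups.

```python
# Toy simulation of the ecological fallacy (invented numbers, no real data).
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_per_group = 20, 500

# Hypothetical group-level exposure means, and outcome means that track them closely
group_exposure = np.linspace(0, 10, n_groups)
group_outcome = 2.0 * group_exposure + rng.normal(0, 1, n_groups)

# Within each group, individuals with higher exposure are assigned *lower* outcomes
x = np.repeat(group_exposure, n_per_group) + rng.normal(0, 4, n_groups * n_per_group)
within_dev = x - np.repeat(group_exposure, n_per_group)
y = np.repeat(group_outcome, n_per_group) - 1.5 * within_dev + rng.normal(0, 2, n_groups * n_per_group)

print("group-level r:     ", round(np.corrcoef(group_exposure, group_outcome)[0, 1], 2))
print("individual-level r:", round(np.corrcoef(x, y)[0, 1], 2))
# In this contrived setup the group-level correlation is near +1 while the
# individual-level correlation comes out negative -- the aggregate association
# tells you nothing reliable about individuals.
```

That's the trap a purely population-level comparison can't rule out on its own. As Drum notes: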
During the '70s and '80s, the introduction of the catalytic converter, combined with increasingly stringent Environmental Protection Agency rules, steadily reduced the amount of leaded gasoline used in America, but Reyes discovered that this reduction wasn't uniform. In fact, use of leaded gasoline varied widely among states, and this gave Reyes the opening she needed. If childhood lead exposure really did produce criminal behavior in adults, you'd expect that in states where consumption of leaded gasoline declined slowly, crime would decline slowly too. Conversely, in states where it declined quickly, crime would decline quickly. And that's exactly what she found.
Well that's interesting, so I looked a bit further at Reyes's study. In it, she estimates prenatal and early childhood exposure to TEL from population-wide figures, and accounts for migration from state to state, as well as other potential causes of violent crime, to get a stronger estimate of the effect of TEL alone. After all of this, she found that the fall in TEL levels by state accounts for a very significant 56% of the reduction in violent crime. Again, though, this is essentially still a measure of association built on population-level statistics, extrapolated down to individuals. It's well thought out and heavily controlled for other factors, but we still need more than this. To give a flavor of the design, here's a rough sketch.
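This is not Reyes's actual model (her specification involves instrumented exposure measures, a roughly 20-year lag between childhood exposure and adult crime, and many controls); it's just a minimal fixed-effects panel regression on made-up numbers, showing where the identifying variation comes from: states that phased lead out at different speeds.

```python
# Minimal sketch of a state-panel fixed-effects regression (made-up data).
# The real study lags childhood exposure ~20 years behind the crime measure;
# that lag is omitted here for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for s in [f"S{i:02d}" for i in range(10)]:
    state_effect = rng.normal(0, 1)     # stable, unobserved state differences
    decline = rng.uniform(0.1, 0.4)     # lead phase-out speed varies by state
    for yr in range(1985, 2003):
        lead = max(0.0, 5.0 - decline * (yr - 1985)) + rng.normal(0, 0.3)
        crime = 2.0 + 0.4 * lead + state_effect + 0.05 * np.sin(yr) + rng.normal(0, 0.2)
        rows.append({"state": s, "year": yr, "lead": lead, "crime": crime})

panel = pd.DataFrame(rows)

# State and year fixed effects absorb stable state differences and nationwide
# shocks, so the lead coefficient is identified from how quickly lead fell
# *within* each state relative to the others.
fit = smf.ols("crime ~ lead + C(state) + C(year)", data=panel).fit()
print(fit.params["lead"], fit.bse["lead"])   # should recover something near the true 0.4
```

Even then, the coefficient is still estimated from aggregate data, which is why the individual-level cohort evidence below matters so much.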
Drum goes on to describe significant associations found at the city level in New Orleans. This is pretty good stuff too, but we really need a new type of study: one that measures many individuals' exposure to lead and follows them over a long period of time to find out what happened to them. This type of design is called a prospective cohort study. Props again to Drum for directly addressing all of this.
The graph title pretty much says it all (Source)
The article continues by discussing a cohort study done by researchers at the University of Cincinnati, in which 376 children were recruited at birth between 1979 and 1984 to have their blood lead levels tested over time and their risk of being arrested measured, both in general and specifically for violent crime. Some of these babies were dropped from the study along the way, and 250 remained for the final results. The researchers found that for each increase of 5 micrograms of lead per deciliter of blood, there was a higher risk of being arrested for a violent crime, but a closer look at the numbers shows a more mixed picture than they let on. For prenatal blood lead, this effect was not significant. If these infants had no additional risk relative to the median exposure level among all prenatal infants, the ratio would be 1.0. They found that for their cohort, the risk ratio was 1.34. However, the sample size was small enough that the confidence interval for this ratio ran as low as 0.88 (paradoxically indicating that an additional 5 µg/dl during this period of development would actually be protective) and as high as 2.03. This is not very helpful data for the hypothesis. For early childhood exposure, the risk ratio is 1.30, but the sample size was larger, leading to a tighter confidence interval of 1.03-1.64. It's possible that the real effect is as little as a 3% increase in violent crime arrests, but this is still statistically significant. For 6-year-olds, it's a much more significant 1.48 (95% CI 1.15-1.89). It seems unusual to me that lead would have so much more profound an effect the older the child is at exposure, but I need to look into it further. For a quick review of the concept of a CI, see my previous post on it. It really matters.
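Since the argument here turns on reading those intervals, here's a quick bit of arithmetic using only the risk ratios and 95% CIs quoted above. Risk-ratio intervals are conventionally built on the log scale, CI = exp(ln RR ± 1.96 · SE), so you can back out the implied standard error from the interval width and see at a glance which estimates cross the null value of 1.0. (The back-calculated standard errors are my own rough arithmetic, not numbers reported in the paper.)

```python
# Reading the reported risk ratios: an interval that crosses 1.0 is not
# statistically significant at the 5% level. Standard errors are
# back-calculated from the published intervals (rough arithmetic).
import math

reported = {
    "prenatal":        (1.34, 0.88, 2.03),
    "early childhood": (1.30, 1.03, 1.64),
    "age 6":           (1.48, 1.15, 1.89),
}

for label, (rr, lo, hi) in reported.items():
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # implied SE of log(RR)
    print(f"{label:16s} RR={rr:.2f}  implied SE(log RR)={se:.2f}  "
          f"crosses 1.0: {lo <= 1.0 <= hi}")
```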
Obviously, we can't take this a step further into experimental data to strengthen the hypothesis. We can't deliberately expose some children to lead and not others to see the direct effects. This is the best we can do, and it's possibly quite meaningful, but perhaps not. There's no way to say with much authority one way or another at this point, and not just because of the smallish sample size and the mixed results on significance: despite being an improvement over cross-sectional designs, a cohort study is still measuring correlations, and we need more than one significant result. More cohort studies just like this, or perhaps quicker ones run on previously collected blood samples that look retrospectively at the connection, are absolutely necessary to draw any conclusion on causality. Right now, this all still amounts to a hypothesis without a clear mechanism of action, although it's a hypothesis that definitely deserves more investigation. There are a number of other studies mentioned in the article showing other negative cognitive and neurological effects that could certainly have an indirect impact on violent crime, such as ADHD, aggressiveness, and low IQ, but that's not going to cut it either. By all means, we should try to make a stronger case for government to minimize children's exposure to lead more actively than we currently do, but we really, really should avoid statements like this:
Needless to say, not every child exposed to lead is destined for a life of crime. Everyone over the age of 40 was probably exposed to too much lead during childhood, and most of us suffered nothing more than a few points of IQ loss. But there were plenty of kids already on the margin, and millions of those kids were pushed over the edge from being merely slow or disruptive to becoming part of a nationwide epidemic of violent crime. Once you understand that, it all becomes blindingly obvious (emphasis mine). Of course massive lead exposure among children of the postwar era led to larger numbers of violent criminals in the '60s and beyond. And of course when that lead was removed in the '70s and '80s, the children of that generation lost those artificially heightened violent tendencies.
Whoa. That's, um, a bit overconfident. Still, it's beyond debate that lead can have terrible effects on people, and although there is no real scientific basis for declaring this violent crime link closed with such strong language, it's a mostly benign case of confirmation bias, complete with placing the blame for inaction on powerful interest groups. His motive is clearly to argue that we can safely add violent crime reduction to the cost-benefit analysis of lead abatement programs paid for by the government. I'd love to, but we just can't do that yet.
The $60B figure seems pretty contrived, but it is a generally accepted way to quantify a benefit of removing neurotoxins in wonk world. The $150B is almost completely contrived, and its very inclusion on the infographic is suspect. I certainly believe that spending $10B on cleaning up lead would be well worth it regardless, and I even question the value of a cost-benefit analysis in situations like this, but that doesn't mean I'm willing to more or less pick numbers out of a hat. That's essentially what you're doing if you only have one study that aims to address the ecological fallacy.
The big criticism of appealing to evidence would obviously be that it moves at a snail's pace, and there's a possibility we could be hemming and hawing over, and delaying action on, what really is a dire public health threat. Even if that were the case, though, public policy often works at a snail's pace too. If you're going to go after it, you gotta have more than one cohort study and a bunch of cross-sectional associations. Hopefully this gives you a bit more insight into how regulatory agencies like the EPA look at these issues. If this were to go up in front of them right now, I can guarantee you they would not act on the solutions Drum presents based on this evidence, and instead of throwing your hands up, I figure it's better to have an understanding of why that would be the case. It's a bit more calming, at least.
Update: I reworded the discussion on the proper hypothesis of a cross-sectional study to make it more clear. Your initial hypothesis in any cross-sectional study should be that the exposure has no association to the outcome.
Update 2: An edited version of this blog now appears on
Discover's Crux blog. I'm amazed to see the response this entry got today, and I can't say enough about how refreshing it is to see Kevin Drum
respond and refine his position a little. In my mind, this went exactly how the relationship between journalism and science should work. Perhaps he should have some latitude to make overly strong conclusions if the goal is really to get scientists to seriously look at it.
Update 3: This just keeps going and going! Some good criticism from commenters at
Tyler Cowen's blog, as well as
Andrew Gelman regarding whether I'm fishing for insignificant values. You can find my responses to each in their comment sections. Perhaps I did focus on the lower bounds of the CIs inappropriately, but I think the context makes it clear I'm not claiming there's no evidence, just that I'd like to see replication. With that in mind, I think the emphasis is pretty fair.
Update 4!!! This thing is almost a year old now! I've thought a lot about everything since, and wanted to revisit.
Read if ya wanna.