Tuesday, December 24, 2013

I'll just be honest: I've never really cared much for writing. A good post covering the sort of issues I want to tackle on this blog takes a ton of time and effort if I don't want to leave low-hanging fruit around to damage my credibility. I simply value other things I can do with my free time more than that, so it's been months since I've written anything at all. I feel I have to revisit the post that took off, though, and write about what has and hasn't changed in my thinking since.
I started this blog as a place where friends could find evidence-based answers to questions that pop up all the time in the media, and hopefully they'd then direct others here. After 7 posts, that was pretty much out the window, which still sort of blows my mind. It's very easy (and often justified) to be cynical of any institution, even science. It's also easy to instinctively fear something and assume you know enough about it to justify that fear. Critical thinking is something you really have to develop over a long period of time, but done right, it's probably the pinnacle of human ingenuity. That or maybe whiskey.
On the other hand, sometimes it's easy to slide closer to nihilism. Knowing something about randomness and uncertainty can lead to being overly critical of sensible hypotheses. There's the famous BMJ editorial mocking the idea that there's "no evidence" for parachutes being a good thing to have in a free fall. You always have to keep that concept in the back of your mind, to remind you that some things really don't need to climb the hierarchy of good evidence. I've spent some of the last year worrying that maybe I didn't do enough to show that's not where my analysis of Kevin Drum's article came from.
I think most people saw that I was trying to illustrate the evidence linking lead and crime through the lens of evidence-based medicine. The response was resoundingly positive, and most of the criticism centered on my half-assed statistical analysis, which I always saw as tangential to the overall point. The best criticism forced me to rethink things and ultimately led to today.
Anyway, anecdotes and case reports are the lowest level of evidence through this lens. The ecological studies by Rick Nevin that Drum spends much space describing (which I mistakenly identified as cross-sectional) are not much higher on the list. That's just not debatable in any way, shape, or form. A good longitudinal study is a huge improvement, as I think I effectively articulated, but if you read through some other posts of mine (on BPA, or on the evidence that a vegetarian diet reduces ischemia), you'll start to sense the problems those may present as well. Nevin's ecological studies practically scream "LOOK OVER HERE!" If I could identify the biggest weakness of my post, it's that I only gave lip service to a Bayesian thought process suggesting that, given the circumstances, these studies might amount to more than just an association. I didn't talk about how simple reasoning and prior knowledge would suggest something stronger, and use that to illustrate the shortcomings of frequentist analysis. I just said that, in my own estimation, there's probably something to all of this. I don't know how convincing that was. I acknowledge that one of my stated purposes, showing how the present evidence would fail agency review, may have come off as concern-trolling.
On the other hand, if there is indeed something more to this, it seems reasonable to expect a much stronger association in the cohort study than was found. Going back to Hill's criteria, strength of association is the primary factor in determining the probability of an actual effect. When these study designs were first being developed to examine whether smoking caused lung cancer, the associations were dramatically stronger than what has been found for lead and violent crime. The lack of strength is not a result of small sample sizes or being underpowered; it's just a relatively small effect any way you look at it. It would have been crazy to use the same skeptical arguments I've made here against the smoking evidence, and history has judged harshly those who did.
Ultimately, I don't know how well the lens of evidence-based medicine fits the sort of question being asked here. Cancer epidemiology is notoriously difficult because of the length of time between exposure and onset of disease, and because of the sheer complexity of the disease. It still had a major success in identifying tobacco smoke as a carcinogen, but that was due to consistent, strong, and unmistakable longitudinal evidence from following specific groups of individuals. Here, we're talking about a complex behavior, which may be even more difficult to parse. My motivation was never to display prowess as a biostatistician, because I'm not one. It was never to say that I'm skeptical of the hypothesis, either. It was simply to take a step back from saying we have identified a "blindingly obvious" primary cause of violent crime and we're doing nothing about it.
I think the evidence tells us, along with informed reasoning, that we have a "reasonably convincing" contributing cause of violent crime identified, and we're doing nothing about it. That's not a subtle difference, and whether one's intentions are noble or not, if I think evidence is being overstated, I'm going to say something about it. Maybe even through this blog again some time.
Tuesday, May 28, 2013
Risk, Odds, Hazard...More on The Language
For every 100 g of processed meat people eat, they are 16% more likely to develop colorectal cancer during their lives. For asthma sufferers who took dupilumab in a recent trial, the odds of suffering an attack were reduced by 87% compared with placebo. What does all this mean, and how do we contextualize it? What is risk, and how does it differ from hazard? Truthfully, there are several ways to compare the effects of exposure to some drug or substance, and the only one that's entirely intuitive is the one you're least likely to encounter unless you read the results section of a study.
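To make the risk/odds distinction concrete, here's a minimal sketch. The raw counts from the dupilumab trial aren't quoted here, so the numbers below are invented purely to show how the two measures diverge when an outcome is common:

```python
# Toy comparison of risk vs. odds, using made-up numbers (the raw counts from
# the dupilumab trial aren't given above, so these are hypothetical).

def risk(events, total):
    """Risk (probability): events divided by everyone at risk."""
    return events / total

def odds(events, total):
    """Odds: events divided by non-events."""
    return events / (total - events)

# Hypothetical trial: 100 patients per arm, asthma attacks during follow-up.
placebo_attacks, treated_attacks, n = 45, 9, 100

risk_ratio = risk(treated_attacks, n) / risk(placebo_attacks, n)
odds_ratio = odds(treated_attacks, n) / odds(placebo_attacks, n)

print(f"Relative risk: {risk_ratio:.2f} ({1 - risk_ratio:.0%} risk reduction)")
print(f"Odds ratio:    {odds_ratio:.2f} ({1 - odds_ratio:.0%} odds reduction)")
# With an outcome this common (45% of the placebo arm), the odds ratio (0.12,
# an 88% reduction) sounds more dramatic than the risk ratio (0.20, an 80%
# reduction). For rare outcomes the two measures converge.
```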
When you see statistics like those above, and pretty much every story reporting the results of a public health study will have them, each way of comparing risk elicits a different kind of reaction in the reader. I'll go back to the prospective cohort study suggesting that vegetarians are one-third less likely to suffer from ischemic heart disease (IHD) than those who eat meat, because I think it's such a great example of how widely the interpretations can vary based upon which metric you use. According to this study, IHD was a pretty rare event; only 2.7% of over 44,500 individuals developed it at all. For the meat-eaters, 1.6% developed IHD vs. 1.1% of vegetarians. If you simply subtract 1.1% from 1.6%, you might intuitively sense that eating meat didn't really add that much risk. Another way of putting it is that out of every 1,000 people, 16 who eat meat will develop IHD vs. 11 vegetarians. This could be meaningful if you were able to extrapolate these results to an entire population of, say, 300 million people, where 1.5 million fewer cases of IHD would develop, but I think most epidemiologists would be very cautious about zooming out that far based upon one estimate from a single cohort study.

Yet another way of looking at the effect is the "number needed to treat" (NNT), which refers to how many people would need to be vegetarian for one person to benefit. In this case, the answer is 200. That means 199 of every 200 people who decide to cut meat out of their diet entirely wouldn't see any benefit in terms of developing IHD during their lifetime.
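For anyone who wants the arithmetic spelled out, here's a minimal sketch using the rounded percentages quoted above:

```python
# The vegetarian/IHD numbers quoted above, expressed three ways: absolute risk
# reduction, relative risk reduction, and number needed to treat.

meat_risk = 0.016   # ~1.6% of meat-eaters developed IHD
veg_risk = 0.011    # ~1.1% of vegetarians developed IHD

arr = meat_risk - veg_risk   # absolute risk reduction
rrr = arr / meat_risk        # relative risk reduction
nnt = 1 / arr                # number needed to "treat" (go vegetarian)

print(f"Absolute risk reduction: {arr:.1%}")   # 0.5%
print(f"Relative risk reduction: {rrr:.0%}")   # ~31%
print(f"Number needed to treat:  {nnt:.0f}")   # 200
# Same data, three very different-sounding headlines: "0.5% lower risk",
# "about a third less likely", and "200 people have to go vegetarian for one
# of them to avoid IHD".
```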
Wednesday, April 24, 2013
The EWG Dirty Dozen: Whoever Said Ignorance is Bliss Definitely Didn't Have Chemistry in Mind
Each year, the Environmental Working Group (EWG) compiles a "Dirty Dozen" list of the produce with the highest levels of pesticide residues. The 2013 version was just released this week, framed as a handy shopping guide that can help consumers reduce their exposure to pesticides. Although they do say front and center that it's not intended to steer consumers away from "conventional" produce if that's what they have access to, this strikes me as talking out of both sides of their mouth. How can you say that if you really believe the uncertainties are meaningful enough to create the list in the first place, and to do so with the context completely removed? I'm pretty certain the Dirty Dozen preaches to the choir and doesn't change many people's behavior, but its underlying message, however well-intentioned, does some genuine harm regardless. The "appeal to nature" fallacy and "chemophobia" overwhelm legitimate scientific debate, have the potential to polarize a nuanced issue, and tend to cause people stress and worry that's just not necessary. This is not going to be a defense of pesticides so much as a defense of evidence-based reasoning, and an examination of how evidence sometimes contradicts or complicates simplified narratives. You should eat any fruits and vegetables you have access to, period, no asterisk.
Almost 500 years ago, the Renaissance physician Paracelsus established that the mere presence of a chemical is basically meaningless when he wrote, to paraphrase, "the dose makes the poison." The question we should really be asking is "how much of the chemical is there?" Unfortunately, that crucial context is not available from the Dirty Dozen list, because context sort of undermines the reason for the list's existence. When we are absorbing information, it comes down to which sources we trust. I understand why people trust environmental groups more than regulatory agencies, believe me. However, one of the recurring themes on this blog is how evidence-based reasoning often doesn't give the answers the public is looking for, whether it's regarding the ability to reduce violent crime by cleaning up lead pollution, or banning BPA. I think a fair amount of the mistrust of agencies can be explained by this disconnect rather than by a chronic inability to do their job. However true it may be that agencies have failed to protect people in the past, it's not so much because they're failing to legitimately assess risk; it's for reasons such as not sounding an alarm and looking the other way when we know that some clinical trials are based on biased or missing data. Calling certain produce dirty without a risk assessment is sort of like putting me in a blindfold and yelling "FIRE!" without telling me whether it's in a fireplace or the next room is going up in flames. When two scientists at UC Davis looked into the methodology used by the EWG for the 2010 list, they determined that it lacked scientific credibility, and decided to create their own model based upon established methods. Here's what they found:
This latte should not be so happy. It's full of toxins. (Source)
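As for the general principle, a risk-based comparison boils down to arithmetic like the sketch below. Every number in it is invented for illustration, including the reference dose; the point is the shape of the calculation, not any particular pesticide:

```python
# "The dose makes the poison" in arithmetic form. Every number below is
# hypothetical; the point is the comparison a risk assessment actually makes,
# rather than the observation that a residue is merely present.

residue_mg_per_kg_produce = 0.02   # hypothetical measured residue on apples
daily_intake_kg = 0.2              # hypothetical: roughly one large apple a day
body_weight_kg = 70
reference_dose_mg_per_kg_bw = 0.1  # hypothetical chronic reference dose, i.e.,
                                   # the daily exposure judged safe over a lifetime

daily_dose = residue_mg_per_kg_produce * daily_intake_kg / body_weight_kg
fraction_of_reference = daily_dose / reference_dose_mg_per_kg_bw

print(f"Estimated dose: {daily_dose:.6f} mg per kg body weight per day")
print(f"That is {fraction_of_reference:.4%} of the reference dose")
# "Pesticide detected" and "pesticide at a level that matters" are answers to
# two different questions; only the second one involves a denominator.
```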
Tuesday, April 2, 2013
The Antibody for CD47: How a Promising Treatment Can Get to Patients
Perhaps (I hope) you've heard that imagining all forms of cancer as a single disease is pretty misleading. Some tumors are liquid, some are solid. Some develop on tissue lining one of the many cavities within or on the outside of the body, and some show up in bone or connective tissue. In virtually every instance, cancer researchers have found that the differences are far more pronounced than the similarities. For decades, efforts to find the one thing common to all of these tumors, in order to find a single, broad potential cure, have basically come up empty. This helps explain why progress on treating cancer has been so aggravatingly slow, and why drugs for breast cancer don't necessarily help a patient with lymphoma. On March 26, the Proceedings of the National Academy of Sciences (PNAS) published an article (some images from which are below) based on tissue cultures and experiments on mice that exploits one broad similarity between many types of tumors, which could potentially lead to a rather simple, single therapy that thus far shows no signs of unacceptable toxicity in mice. Sounds great, right? So how does this work, what happens next, and how unprecedented is this?
Just look at what happens when you target CD47
Ultimately, drugs are molecules, and when they work, it's because there's a target that fits the unique shape and characteristics of the drug. Some early chemotherapeutic drugs, such as vincristine or vinblastine, worked because they bound to the ends of molecules that form a tiny skeletal framework holding together virtually every cell in our bodies. Unable to support themselves, new cells across the board could not grow and divide, but since cancer cells grow faster than normal cells, they were the ones most affected. Hair and blood cells also grow rapidly, leading to the most familiar side effects of chemotherapy: hair loss and extreme fatigue. So while these drugs may have led to remission for some patients, it was never without excruciating side effects.

The therapy described in PNAS takes a very different approach. The target is a protein embedded in the surface of cells called CD47, which, when expressed, prevents macrophages in the immune system from engulfing and destroying the cell. It does this by binding to a different protein expressed on the surface of the immune cell that happens to fit it quite well. When CD47 is bound to the protein on the macrophage, a signal is sent not to eat whatever cell it's attached to. It's an amazingly elegant system, and fatefully, according to the study, cancer cells happen to express a lot more CD47 than normal cells do. The researchers used an antibody against CD47, yet another protein, which can bind CD47 in place of the protein on the macrophage's surface. This blocks the signal that says "don't eat me," and allows the immune cell to do its normal job of destroying something it doesn't recognize. Previous studies had established that this antibody helps shrink leukemia, lymphoma, and bladder cancer in mice, so the PNAS study expanded on this to look at ovarian, breast, and colon cancers, as well as glioblastoma. It effectively inhibited growth in each case, sometimes outright eliminating smaller tumors. Larger tumors, the authors note, would likely still need surgical removal prior to antibody therapy. There's no question now: this needs to be tested in actual human patients.
The next step will be to organize what's called a phase I trial, which enrolls some brave (or desperately poor) individuals, perhaps up to 100, to help determine whether the drug is even safe enough to find out whether it works, and what dose can be tolerated. Often, for simplicity's sake, phase I is combined with phase II trials involving ideally a couple hundred more individuals, which appears to be the intention with the antibody therapy. Phase II trials answer the question "can it work?", with the assumption going in that it doesn't. For a refresher on how the future trial data will be analyzed, see my previous post on basic statistics. Should this phase II trial pan out, meaning sufficient biological activity is observed without unacceptable risks, and there's obviously no guarantee that this will happen, a new, more robust trial will be designed. Phase III trials answer the question everyone wants to know: does it work? The ideal phase III trial involves several thousand patients, which probably wouldn't be too difficult to find when the drug could save their life. In this stage, the new therapy would be compared to the best current therapy rather than placebos, because a placebo isn't a treatment, and would be unbelievably unethical to give to a cancer patient. Take a look at this page from the National Cancer Institute for more information specific to how cancer trials operate.
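To give a rough sense of why phase III trials run into the thousands of patients, here's a back-of-the-envelope sample size sketch for comparing two response rates. The 30% and 35% figures are hypothetical placeholders, not numbers from any actual trial:

```python
# Rough sample-size arithmetic for a two-arm trial comparing proportions.
# The 30% vs. 35% response rates are hypothetical; the point is why "several
# thousand patients" is the norm when the expected improvement over the best
# current therapy is modest.
from statistics import NormalDist

def n_per_arm(p_control, p_new, alpha=0.05, power=0.80):
    """Approximate patients per arm for a two-sided test of two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    variance = p_control * (1 - p_control) + p_new * (1 - p_new)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_new) ** 2

n = n_per_arm(0.30, 0.35)
print(f"~{n:.0f} patients per arm, ~{2 * n:.0f} in total")  # ~1374 per arm, ~2747 total
# Halve the expected improvement and the required enrollment roughly quadruples,
# which is why trials chasing modest gains get very large.
```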
Oftentimes, unfortunately, the process isn't as smooth as I've outlined. Trials are increasingly outsourced to countries outside the US and Europe, where regulations and ethical frameworks are not nearly as strong, and of course, a few thousand patients in a randomized control trial can't catch every potential adverse effect. And then there's the question of who funds these trials and how they're managed, but I'm not going there. For every thousand poor critiques of Big Pharma you can find on the Internet, there's only one Ben Goldacre who does it right. I recommend Bad Pharma if you want to really know more about where this could all go completely off the rails.
That's a long, long road that takes up to a decade, and potentially billions of dollars spent, before this potential drug could ever reach the general cancer patient. Given this, it's not really too surprising how much pharmaceutical companies spend on advertising once they beat the odds and get a new drug approved by the FDA. And that's all the more hopeful part of the CD47 story. Thousands of chemicals have been shown to kill cancer cells in vitro, and just a cursory search of the national registry of clinical trials for RCTs involving antibody therapy for cancer alone brings up nearly 1,100 results in various stages, from withdrawn or suspended to currently recruiting patients. This is just a tiny sampling of all clinical trials on cancer therapies, all of which got to where they are because they were once so promising in test tubes and animal models. So when you read about this study in the media, it's natural to hope we've finally found a major breakthrough. Maybe we really have. The odds are certainly long, and hopefully this post helps you understand that there are perfectly legitimate reasons why. If it doesn't pan out, it's not gonna be because a trillion-dollar industry held it down; it's gonna be because of unacceptable toxicity, or because the effectiveness simply doesn't translate to us, or because it's not significantly better than current treatments. I can't possibly conceive of a worldview where drug companies wouldn't want to get this to people ASAP.
4/25/2013 - UPDATE: I had heard about the FDA's recent Breakthrough Designation, which is intended to expedite the long process of getting drugs to patients with serious conditions, but it didn't come to mind for this post. A melanoma drug received breakthrough designation yesterday, after very preliminary trials showed a marked response in patients. Stay tuned to see if the CD47 antibody therapy joins the ranks.
Thursday, March 14, 2013
Allow Me to Curate the State of BPA Research For You
After reading two interesting articles on bisphenol A (BPA) in the past few weeks, I decided to spend a little time the other day looking through reviews and analyses to get my own sense of where the research stands. There's a pretty staggering amount of research out there with a variety of designs, which makes the conclusion all that much more frustrating: We don't really know what the deal with BPA is. That being said, it's probably time to rethink the approach to what to do about it.
BPA is considered an endocrine disruptor, meaning that its structure is very similar to that of a hormone, in this case estradiol. It's so similar, in fact, that hormone receptors for estradiol can be tricked by BPA into responding as if estradiol were bound to them, potentially affecting a number of biological activities. Observational studies have linked BPA to a variety of negative health impacts, especially obesity, neurological damage, cancer, and, recently, asthma. Considering that virtually everyone is exposed to BPA due to its widespread use in plastic bottles, can linings, paper receipts, and epoxy resins, these associations naturally should be cause for some concern, particularly in how it may affect infants and children. There is considerable debate on precisely how much concern there should be, probably more than is reflected in most media accounts. The way scientists approach this question is quite at odds with the type of information the public needs, a tension put eloquently by Richard Sharpe, a researcher in the UK who takes a pretty skeptical stance on BPA's harmful effects:
What is never stressed enough is that scientists work at “the borders of ignorance” – what is in front of us is unknown and we try to find our way forward by making hypotheses based on what is known. This means that we are wrong most of the time, and because of this scientists have to be very cautious about interpretations that are based on our projected ideas about what is in front of us. What decision-makers, politicians and the public want is unequivocal guidance, not uncertainty. So this creates a dilemma for scientists.

So far this is beautiful. Absolutely crucial to keep in mind. Sharpe continues:
Those who are more prepared to throw caution to the winds and make unequivocal statements are more likely to be heard, whereas a more cautious scientist saying “we’re not sure” will not be taken notice of.

I honestly don't know whether this is the case with BPA or not. The uncertainties are many, and the value of the observational studies showing all these associations is controversial. There are a number of criteria that correlative studies must meet in order to determine whether that correlation actually equals causation, summarized from the link above.
In its response to a petition from the NRDC to ban BPA, the FDA concluded that after years of review these criteria had essentially not been fully met, and declined to ban the substance, specifically on the basis of criteria 5, 7, and 9. One of the criticisms of the FDA's response is that some evidence suggests even very low doses may have strong effects, and that a typical dose-response curve, one that rises steadily as the dose increases until it ultimately plateaus, does not reflect how BPA works. Rather, BPA may have something like an inverted U-shaped dose-response curve, referred to as hormesis, in which high levels have no effect at all, or perhaps even the opposite effect.
Dose-response curve of hormesis (Source)

Some studies used to support the petition relied on non-oral BPA exposure, which the FDA considered insufficient, since exposure through the skin bypasses some of the metabolic processes that quickly convert BPA into an inactive form called BPA-monoglucuronide. The exposure we really need to be concerned about is oral, since that's how we're predominantly exposed, and there's enough difference between how BPA acts orally vs. subcutaneously to doubt the significance of studies using the latter route.
Additional studies used by the NRDC were based upon experiments performed on isolated tissue samples, which bring up a similar concern, as well as being essentially limited to describing a potential mechanism for the chemical's effects and what sort of tissue it would ultimately affect. Another study showing an association with cardiovascular problems was cross-sectional, which takes a single measurement of exposure at one point in time and looks at whether higher levels of exposure are associated with a disease. As I've mentioned before, this study design is limited to generating hypotheses, and is definitely not considered suitable for determining causation.
So we have a number of epidemiological associations, experimental data on tissue samples, plus some experimental data on primates and rodents, all pointing to some negative health effects, sometimes even at small doses. Couldn't these all add up to more than the sum of their parts? Sure, and there are really two major ways to validate that claim. One way, which the FDA apparently thinks highly of, is to use the data from other mammals and tissue samples to develop a mathematical model that can be used to predict the effects found in humans. Another would be to approach the problem along the lines of, "given the data showing such and such effect at this level in animals and tissue, we can assume that the probability of this translating to humans at real-world exposure is X." Nobody seems to have tried this yet, and the level of subjectivity involved in determining that X makes some researchers uncomfortable. Recently, a researcher named Justin Teeguarden developed a model to predict the levels that should typically be found in humans, and presented his findings (yet to go through peer review) at the annual meeting of the American Association for the Advancement of Science. His research determined that the levels causing effects in animals and tissues are not plausibly found in humans.
Biologists and epidemiologists who have worked on the studies showing harmful effects question the validity of the assumptions that went into his model, as well as its inability to predict what the levels he suggests exist in humans actually do at either acute or chronic exposure. Tom Philpott at Mother Jones suggests that Teeguarden's past ties to the plastics industry make his research suspect, a sentiment I don't entirely share, though it's not completely irrelevant.
So what do we make of all of this? I think this is a perfect scenario for an ideological fight where two sides dig in immediately and reach a stalemate. Studies with inherent limitations get disseminated to the public as being probably more suggestive than they really are, feeding premature alarm, while industry unjustifiably dismisses the risk. If you read the FDA's response to NRDC, it appears to me that you're just not going to get far calling for an outright ban on a substance like BPA unless you have a good amount of longitudinal data plus experimental data on mammals using the same type of exposure as would be expected in humans. Is that the best way to go? If not, how can it be improved upon?
The NRDC's petition might have been more effective if it had been honest about the limitations of the studies supporting its argument and the uncertainties that exist (like the dose-response curve of BPA). In addition to calling for a ban on BPA under the precautionary principle, there should also be a focus on safer alternatives. In other words, don't just point to a problem, especially when it's not totally cut and dried; demonstrate a workable solution, and pursue it with the same energy that's been used up trying to prove something that may not be provable, at least any time soon.
If there's a lesson that comes up time and time again in these sorts of things, it's that little in the world is purely black and white. I love exploring the shades of gray, but I can't expect everyone to. I would, however, at least love for people to respect that they're out there, and that this is where reality tends to dwell.
Tuesday, February 26, 2013
Let's Talk About Gluten. Please. This Has Gotten A Little Out of Control
You certainly don't have to look very hard to find articles and blog posts on gluten and its purported association with a variety of health issues such as obesity, heart disease, arthritis, and non-celiac gluten sensitivity. While I don't really doubt that some people without celiac disease might legitimately be affected by gluten, I think the discussion around gluten and its non-celiac ill effects has now crossed the line into fad. Wheat Belly, a diet book by cardiologist William Davis that is currently the #2 best-selling health and fitness book on Amazon, advocates eliminating wheat from our diets entirely, whole grain or not, largely on the premise that modern wheat is nothing like what our grandparents used to eat, and so it must be connected to these growing problems, not to mention the increased prevalence of celiac disease. Big Ag, essentially, has manipulated the genes of this staple beyond recognition, and the unintended consequences are vast and dire, amounting to "the most powerful disruptive factor in the health of modern humans than any other". While I haven't read the book, I'm familiar with a lot of the arguments it makes and how they are perceived by the general public, especially the contention that Davis is referring to GMO wheat, which does not exist on the market. Unfortunately, it's pretty difficult to find a good, genuine science-based take on it. To paraphrase Keith Kloor, the majority of what you'll find only has the veneer of science.
When I was in high school, and as an undergrad in the humanities, writing a research paper meant I started with a thesis statement and found evidence to support whatever it was I wanted to advocate for. In science-based medicine, you test a hypothesis by conducting a randomized control trial if possible, and ultimately by finding all available published reports and presenting the entire story (systematic review), or by combining the statistical analysis of multiple studies into a single large study (meta-analysis). This is not a subtle difference. Wheat Belly is a prime example of the former, which is not necessarily a bad thing, per se. People can make a compelling argument without a systematic review, but it is not acceptable as a last word in medicine, health, or nutrition, period. While there may be evidence to support the idea, it's easy to minimize or even completely overlook evidence to the contrary, especially since you're not really making a point to look for it. It seems pretty obvious that the discussion around wheat could use a little objectivity.
Jesus F'ing Christ (Source)
Take a look at this pretty balanced article recently published by the New York Times on the increasing diagnoses of celiac disease.
BLAME for the increase of celiac disease sometimes falls on gluten-rich, modern wheat varietals; increased consumption of wheat, and the ubiquity of gluten in processed foods.
Yet the epidemiology of celiac disease doesn’t always support this idea. One comparative study involving some 5,500 subjects yielded a prevalence of roughly one in 100 among Finnish children, but using the same diagnostic methods, just one in 500 among their Russian counterparts.
Differing wheat consumption patterns can’t explain this disparity. If anything, Russians consume more wheat than Finns, and of similar varieties.
Neither can genetics. Although now bisected by the Finno-Russian border, Karelia, as the study region is known, was historically a single province. The two study populations are culturally, linguistically and genetically related. The predisposing gene variants are similarly prevalent in both groups.

The article goes on to suggest that exposure to different microbial environments is the biggest factor, but it's rather apparent that we can't just point to a simple answer. The world is a complex place, our bodies are complex, nutrition and health are complex. This is pretty much what you'd expect, right?
Now take a look at some of the massive coverage of the recent randomized control trial showing significant cardiovascular benefits from the Mediterranean diet. Here's a good analysis from the Harvard School of Public Health. Participants in the Mediterranean diet arm of the study were encouraged to liberally use olive oil and to eat seafood, nuts, vegetables, and whole grains, including a specific recommendation that pasta could be dressed with sofrito (garlic, tomato, onions, and aromatic herbs). The control diet it ended up being favorably compared to was quite similar, but specifically geared toward being low-fat. Both groups were discouraged from eating red meat, high-fat dairy products like cream and butter, commercially produced bakery goods, and carbonated beverages.
The largest differences between the two diets were that the control group was discouraged from using vegetable oils, including olive oil, and encouraged to eat three or more servings of pasta or other starchy dishes per day. To me, this suggests that Wheat Belly lives in the sweet spot for widespread dissemination: it's easily actionable and has just enough evidence behind it to generate good anecdotes and positive results, but it's vastly oversimplified and not suitable or necessary for everyone. Remember, the Wheat Belly diet implicates even organic whole grains as irredeemably manipulated. It's a completely wheat-free diet, because modern wheat is supposedly the greatest negative factor in human health. Based on an actual experiment involving almost 7,500 people, we have strong evidence that it's the amount of wheat people eat that is the problem. You can eat some whole grains daily and still vastly decrease your risk of heart disease and obesity, as long as you don't eat them three or more times a day.
The appendix to the NEJM study indicates that some of the patients in the control diet complained about bloating and fullness, but nothing similar from the Mediterranean diet group. The implications seem fairly obvious: there is little basis to make a draconian decision to completely eliminate something with proven health benefits such as whole grains from your diet unless you genuinely suffer from celiac disease. If you're interested in losing weight, think maybe you have gluten sensitivity, or just want to eat healthier, try something like this diet first, and definitely don't put your gluten free diet pamphlets in my child's take-home folder at school. That wasn't cool.
Update: Take a look at this critical post about the RCT of the Mediterranean diet. There's some perspective on the magnitude of the effect they found, and some compliance issues with the recommended diets that I'm not convinced are as damning as he suggests. I also find this sentence a bit odd:
So while you might be less likely to have a heart attack or stroke, you're no less likely to die. This is why I'm so confused they ended the study early.

I don't know. I'd rather just not have a heart attack or stroke. Nevertheless, it's a thoughtful and overall very thorough take on the study.
Sunday, February 17, 2013
Moderate Drinking Isn't Totally Risk-Free? Crap
Let's be honest: it's pretty ridiculous that it's been three decades since anyone bothered to look at an association between alcohol consumption and risk of cancer mortality in the U.S. Surely, even though there was basically no research to point to, few people would be totally surprised to be reminded that there may be some other potentially fatal conditions caused by drinking besides loss of liver function. And certainly, it's foolish to just assume that, as a moderate drinker, you'd get only the purported benefits without any of the potential consequences. Putting this discussion back on the table is a good thing, but it's important to do so responsibly and in the proper context.
Earlier this week, researchers published a study aiming to do exactly that. An in-depth article on the research was featured here in the San Francisco Chronicle. The basic idea is that we know alcohol puts people at risk of developing certain types of cancer, including oral, esophageal, liver, and colon cancer, so the study used meta-analyses published since 2000 to calculate the effect alcohol has on developing these types of cancer, controlling for confounding variables. The researchers then used data from health surveys and alcohol sales to estimate adult alcohol consumption, and combined that with mortality data from 2009 to estimate how many deaths might specifically be attributed to drinking, using formulas established in other countries for similar purposes. The estimate came out to between 18,000 and 21,000 people, or about 3.5% of all cancer deaths. This is actually higher than the number of deaths from melanoma, and considering how aware people are of the risks of extended sun exposure without sunscreen, the risks of drinking alcohol may be unjustifiably underrated. The next step is to establish a dose-response curve showing how drinking more affects this risk.
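For the curious, the basic machinery behind an estimate like that is the population attributable fraction. The sketch below shows the structure of the calculation only; the consumption categories, prevalences, and relative risks are invented, not the study's:

```python
# Sketch of the population attributable fraction (PAF) arithmetic behind a
# headline like "X deaths attributable to drinking." Every number below is
# invented for illustration; only the structure mirrors what the study describes.

# (label, share of adults, relative risk of alcohol-associated cancer death)
exposure_groups = [
    ("non-drinker",      0.35, 1.0),
    ("light (<1.5/day)", 0.40, 1.1),
    ("moderate",         0.18, 1.4),
    ("heavy",            0.07, 2.2),
]

excess = sum(p * (rr - 1.0) for _, p, rr in exposure_groups)
paf = excess / (1.0 + excess)

total_cancer_deaths_at_these_sites = 100_000   # hypothetical
attributable = paf * total_cancer_deaths_at_these_sites

print(f"PAF = {paf:.1%}, attributable deaths ~ {attributable:,.0f}")
# The output is only as good as the inputs: the relative risks come from
# meta-analyses of observational studies, and the prevalences come from surveys
# and sales data, which is exactly where the uncertainty lives.
```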
Many of the stories on the article focus particularly on the quote that "there is no safe threshold" for alcohol consumption, and that roughly a third of these deaths represented individuals who consumed less than 1.5 drinks per day. Essentially, as many as 7,000 people in the U.S. who drank that amount die each year from cancers they developed because of that consumption. I'm not really interested in poring through the data to question the validity of this number. It's fair to be very skeptical of how granular you can be in determining the risk for each individual based on an average obtained from surveys known to be quite limited, plus ecological data like sales. Ultimately, without longitudinal follow-up of drinkers or a case-control study, this represents a fairly low level of evidence in the grand scheme of things. That's not to say you should take the general conclusion with a grain of salt. Quite the contrary, actually. It's just that we can't safely interpret exactly how strong (or weak) this effect really is at this point, and we need more robust study designs to get there.
Science-minded people like to blame the media for hyping up the conclusions of studies, but here you see the investigators explicitly saying that there is no safe amount of drinking. The abstract itself declares alcohol to be a "major contributor to cancer mortality." What message is a journalist supposed to take from that? The headlines are right there, laid out on a platter. I don't think the investigators egregiously overstated the conclusions, but the paper wasn't exactly brimming with context. Also, I don't expect moderate drinkers to really alter their behavior based on this, but it sort of goes without saying that the proper conclusion wouldn't grab as much attention, and you never know how things will be absorbed. So I'll try to lay one out myself:
Based upon this study, it appears that the risk of death due to drinking has been underestimated. Even moderate drinking, which has some potential health benefits, may contribute to mortality from one of seven types of cancer largely understood to be associated with alcohol consumption. This is the first look at such an association in the United States in over 30 years, and as such, it represents a building block from which to generate research ideas that more effectively establish this association and how different consumption patterns alter its effect.
Now if you'll excuse me, I'm going to the liquor store to buy some rye and a shaker. For real.
I can buy that Malort gives you cancer at least (Source)
Monday, February 4, 2013
New Proof That Being a Vegetarian is Healthier?
According to this blog post from ABC News, the recently published (behind paywall) results of a large prospective cohort study performed in the UK offers "further proof that eating meat can be hazardous to health." That strikes me as sort of an odd way of framing it, and I hope after reading this you'll understand why I think so. The proper conclusion really should be that we have more evidence that being a vegetarian is associated with a reduced risk of heart disease, but it's still difficult to tell how much other lifestyle choices play into this. In other words, as long as you live a healthy lifestyle in general and eat meat a few times a week or less, it's still difficult to assume you'd see a significant benefit by cutting that meat out of your diet.
The study began in 1993 and followed 44,561 men and women in England and Scotland, 34% of whom were vegetarians when the study began. After an average follow-up time of nearly 12 years, the incidence of developing ischemic heart disease (IHD) was compared between the vegetarian group and the meat-eating group. A subset of the study population was also used to measure risk factors related to heart disease such as body-mass index (BMI), non-HDL-cholesterol levels, and systolic blood pressure, and again the vegetarian group came out ahead. The investigators controlled for some obvious potential confounding variables such as age (the vegetarian group was 10 years younger on average), sex, and exercise habits. Overall, I agree with the quoted physician from the ABC post that it's a very good study, and does a pretty good job of trying to estimate the singular effect of whether a person eats meat or not.
This is essentially the type of design I was talking about being needed in the lead/crime post, and while it represents a pretty high level of evidence, there's still some very important limitations to be aware of. That's not to justify being skeptical of the entire claim that being a vegetarian is healthier for the heart than not, but it's not as simple as just saying "meat is hazardous to your health." The two groups being compared are not randomized, so it could very well be that the vegetarians had an overall healthier lifestyle, of course including exercise, but also dietary factors beyond just not eating meat. How would the results compare if you took a vegetarian that ate a lot of whole grains and vegetables vs. an omnivore that ate meat maybe once or twice a week but also ate a lot of whole grains and vegetables? My guess would be that the risks of IHD for the latter individual wouldn't be much higher, if at all. The central issue here is called selection bias, and it refers to the possibility that the two populations are different enough that drawing a firm cause and effect relationship to one specific difference between the groups is suspect.
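Here's a deliberately rigged toy example, with invented counts, of how a confounder like age can manufacture a "diet effect" out of thin air, which is exactly why the investigators adjusted for it:

```python
# A rigged toy example of confounding: the two diet groups have identical IHD
# risk within each age band, but vegetarians skew younger, so the crude
# comparison still shows a gap. The counts are invented, not the study's.

#            (n, IHD cases) by age stratum
cohort = {
    "vegetarian": {"younger": (7000, 35), "older": (3000, 75)},
    "meat-eater": {"younger": (3000, 15), "older": (7000, 175)},
}

def crude_risk(group):
    n = sum(count for count, _ in cohort[group].values())
    cases = sum(events for _, events in cohort[group].values())
    return cases / n

for group in cohort:
    print(f"{group:>10} crude risk: {crude_risk(group):.1%}")
    for stratum, (n, events) in cohort[group].items():
        print(f"             {stratum}: {events / n:.1%}")
# Crude risks come out to 1.1% vs. 1.9%, yet within each age band both diets sit
# at 0.5% (younger) and 2.5% (older). Here the entire apparent "diet effect" is
# the age difference between the groups, which is why cohort studies have to
# adjust for confounders and why randomization is the cleaner fix.
```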
The perils of reading too much into a cohort study are illustrated by the story of hormone-replacement therapy (HRT) for post-menopausal women. In 1991, a large cohort study based on a population of nurses came to the conclusion that HRT protected women from cardiovascular disease. This was one of a series of cohort studies of HRT that found a variety of purported benefits. Immediately, concerns about selection bias were raised, but they were largely dismissed. HRT became very common throughout the rest of the decade based on these very well-designed cohort studies that seemed to provide really good evidence. Finally, in 2002, the first double-blinded randomized control trial of HRT vs. placebo was performed. Both the estrogen-plus-progestin and estrogen-only arms of the study were halted early because the risks of heart disease, stroke, and pulmonary embolism were higher than in the placebo groups. The exact opposite of the cohort studies' results was found once the comparison groups were randomized. Again, this isn't to say I dispute that completely eliminating meat is a healthy choice, or that if an RCT were performed, the meat-eaters would turn out to be healthier. I'm just illustrating why you should always be aware of the possibility of selection bias in non-randomized studies.
Another issue to look out for is how reduced risks are presented in studies. There are essentially two ways to do it: by subtracting one group's risk from the other's, or by presenting a risk ratio. Looking at the numbers, only 2.7% of the entire study population developed IHD, about 1.6% of meat-eaters vs. 1.1% of vegetarians. Another way of putting this is, "a vegetarian diet is associated with a 0.5% lower risk of developing IHD." That doesn't sound quite as impressive, but it's completely accurate. It's a lot more eye-opening to divide the 1.1% by the 1.6% and get your 32% reduced risk. I'm not saying anybody did anything inappropriate, but the raw numbers have a way of putting things like this into the proper perspective.
Ultimately, what does all this amount to? If I were at risk for IHD, I'd adopt a healthy lifestyle of a well-rounded diet with moderate to no meat, as well as to exercise regularly and stay active throughout the day. Hardly an earth shattering piece of advice, but that's what we've got. To put it bluntly, just eat like a person is supposed to eat, goddamnit. You know what I mean.
Wednesday, January 23, 2013
Wait, There's How Many Independently Funded Studies on GMOs?
I just came across this list of 126 independently-funded peer-reviewed articles on GMOs this morning, and I'm really surprised I hadn't seen it a long time ago. Clicking through some of the studies, they run the gamut of genomic analysis between conventional and genetically modified crops (particularly on unintended alterations of untargeted genes), the potential for transferring antibiotic resistance, the risk of allergenicity, analysis of tissue and metabolites in rats, amount of pesticide use, and the effect of Bt corn and GM soya on mouse testes. In all but a handful of studies, the investigators found no evidence that GM poses an additional risk over conventional farming. Again, these are the independently-funded studies so many critics of the technology have asked for for years. There's simply no excuse for ignoring it.
I've written before about systematic reviews and meta-analyses, and how, if you don't look at the full picture, it's extremely easy to cherry-pick and find the results you're looking for. Without looking too hard, you'll find some studies that contradict the consensus this list represents, as you would expect in just about any field. For instance, one study may say eating eggs carries nearly the same cardiovascular risk as smoking, while a meta-analysis of all the prospective cohort studies on eggs and cardiovascular health (including that one), representing data from over 260,000 people, shows no additional risk. Which conclusion do you think holds more weight? I can't stress enough that science isn't a push and pull of individual studies floating in a vacuum; it's a systematic way of looking at an entire pool of evidence. It takes work to train yourself to do this. It's just not how we're wired to think, and even people who have been exposed to it still struggle with it, as I see in my everyday experience in evidence-based medicine. People naturally have their preferences, but if there's a way to minimize this effect, it's inconceivable to me not to use it to guide our decisions, from adopting new technologies, to abandoning existing ones that don't work as well as we hoped, to the way we determine our public policies.
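If you're curious what "looking at the entire pool of evidence" means mechanically, here's a minimal sketch of fixed-effect, inverse-variance pooling, the workhorse of many meta-analyses. The relative risks and confidence intervals below are made up for illustration (they are not the actual egg studies); the point is that each study is weighted by its precision, so one noisy, alarming-looking result can't move the summary much.

import math

# Made-up (relative_risk, ci_lower, ci_upper) tuples -- not the real egg data.
studies = [(1.02, 0.90, 1.16),
           (0.97, 0.85, 1.10),
           (1.65, 1.05, 2.59),   # the scary outlier, with a wide interval
           (1.00, 0.93, 1.08)]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out the standard error from the 95% CI
    w = 1 / se ** 2                                  # precise studies get large weights
    weights.append(w)
    weighted_logs.append(w * math.log(rr))

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled RR {math.exp(pooled_log):.2f}, "
      f"95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f}"
      f"-{math.exp(pooled_log + 1.96 * pooled_se):.2f}")

Real meta-analyses add things like random-effects models and checks for publication bias, but the basic logic, weighting every eligible study rather than spotlighting one, is the part worth internalizing.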
In my many, many conversations on GMOs, I've found that confirmation bias isn't the only barrier; there's also a significant amount of conflation between what's specific to GMOs and what is just poor agricultural practice. For example, it's quite clear that herbicide-resistant weeds (aka superweeds) are popping up on farms around the U.S. Certainly, growing Roundup Ready corn facilitates the overuse of a single herbicide (glyphosate) on a single area, but resistance will occur anywhere there's over-reliance on a single chemical, and it's up to the particular farmer to rotate crops and weed-control methods to avoid this. Just because many have failed to do it doesn't mean that GMOs are the primary problem. On the spectrum of possibilities for genetic modification, herbicide tolerance is in my view securely at the bottom, but let me put it this way: if we simply banned them, totally removed herbicide-tolerant crops from all farms in the U.S., would we solve the problem of superweeds for all time? Obviously not. For every potential risk you've heard of about GMOs, ask yourself the same question. You'll almost always find that the problem comes down to general issues of large-scale agriculture.
I didn't see any studies about cassette tape genes. I'd just steer clear for now. (Source)
Thursday, January 17, 2013
Colony Collapse Disorder, Neonics, and The Precautionary Principle
Over the past decade, the strange mystery of declining bee populations and colony collapse disorder (CCD) has justifiably received a ton of media attention. There are plenty of resources out there if you need more background on what researchers are thinking and why it matters. I'd suggest starting with this excellent post by Hannah Nordhaus for Boing Boing, written shortly after a batch of studies were published identifying a specific group of insecticides called neonicotinoids (aka neonics) as a potential primary cause of CCD. I don't really have much to add to the discussion beyond that article, but I'm particularly drawn to this issue because the precautionary principle is now front and center, and the debate has thus far largely avoided the type of hyperbole and fear-mongering that only distract from evidence-based policy. This post is less about trying to be informative than about me being hopeful that a discussion pops up in the comments.
Earlier this week, the European Food Safety Authority (EFSA) concluded that the evidence suggests the use of neonics constitutes an unacceptable risk to honeybees, essentially laying the groundwork for an EU-wide ban. As outlined in the Nordhaus post, the study that most clearly linked neonics with CCD has been harshly criticized, certainly by Bayer, the largest manufacturer of neonics, but also by some independent scientists particularly troubled by what they see as unrealistically high doses given to the bees. Glancing at the study, the doses didn't appear to be completely without merit, but it's pretty apparent that the EFSA is operating on the premise that the potential risks of using neonics are worse than any possible benefit to farmers in the EU, as opposed to acting on documented risks supported by a large, systematically reviewed body of evidence.
Normally, I don't find the precautionary principle very compelling, and clearly the U.S. government doesn't either. I think potential risks are often over-hyped, while real benefits are not fully considered or are outright dismissed. It could be used to halt or delay literally any technological advance, and yeah...slippery slope. However, in this case I find myself more sympathetic to it than I usually am. I really don't know what to think about it. Pointing the finger at synthetic chemicals designed to kill insects obviously seems intuitive, but it's equally obvious that our intuition sometimes leads us astray by oversimplifying a complex phenomenon. The UK's Department for Environment, Food, and Rural Affairs recently took the skeptical view reflected by Nordhaus: it commissioned another long-term study of the direct sub-lethal effects on bees, asked researchers in the UK to prioritize this issue, and will take another look at some point this year. It's not like they said there's nothing to look at here. I'm tempted to think that this is perfectly acceptable.
Banning neonics isn't going to force farmers in the EU to just abandon insecticides. The alternative chemicals do in fact have much stronger evidence of genuine, tangible, and imminent environmental risks than neonics do. What do you think? How much uncertainty should we tolerate? On what issues do you find the precautionary principle to be appropriate?
Friday, January 11, 2013
Alltrials.net
Hi everyone, I'm really awed at the overwhelmingly positive response I received from my last blog post. Never in my wildest dreams did I figure that 7 posts into this thing I'd actually kinda maybe impact the discussion around an issue I wrote about. Hopefully many of you who visited recently will check back in occasionally. I do this in my spare time, so I can't be hugely prolific, but I'll try to keep it interesting and engaging.
As someone who works in evidence-based medicine, Ben Goldacre's Bad Pharma has been on my mind a lot recently. I couldn't possibly give you a sense of the many issues in the book with the eloquence and expertise that he does, so I encourage you to take a look and see if it interests you. The U.S. version is due on February 5.
Briefly, he covers the many reasons, across all the various stakeholders in medicine, why the entire system is seriously flawed. That's hardly an overstatement. Trials go missing, data are manipulated, regulators don't mount up, etc., and it paints a pretty horrifying picture of doctors making treatment decisions on biased and incomplete evidence, exposing patients to unnecessary and entirely preventable harm. In this day and age of open-access journals and cheap, easy access to information, there really is no excuse for this state of affairs. I don't imagine one moderately well-read blog post gives me any right to think I have real influence now, but I didn't want to just read this book and do absolutely nothing.
One step in the right direction is alltrials.net, an initiative to register all trials worldwide along with information on the methods used and results. Take a look at the site, get a sense of why it exists if you don't already, and hopefully you'll sign the petition. We deserve real evidence-based medicine, and I don't think this is your average petition. There's some good names behind it, and a charismatic and likable advocate who I really believe has a chance to get somewhere with this.
I'll get back to blogging about my usual topics soon. In the meantime, I'll always be open to suggestions. A couple of weeks ago I tried to set it up so my latest tweets would show up somewhere on my blog, but it didn't work with this design. Sort of a missed opportunity to suddenly have tens of thousands of page views without my username anywhere. Find me at @scottfirestone, and send me a link or say hi if you like.
Thursday, January 3, 2013
The Link Between Leaded Gasoline and Crime
Kevin Drum from Mother Jones has a fascinating new article detailing the hypothesis that exposure to lead, particularly tetraethyl lead (TEL), explains the rise and fall of violent crime rates from the 1960s through the 1990s, after the compound was phased out of gasoline worldwide. It's a good bit of journalism on issues of public health compared to much of what you see, but I'd like to provide a little epidemiology background to the article, because there are so many studies listed that it's a really good intro to the types of study designs you'll see in public health. It also illustrates the concept of confirmation bias, and why regulatory agencies seem to drag their feet when we read compelling stories like this one.
Drum correctly notes that simply looking at the correlation shown in the graph to the right is insufficient to draw any conclusions regarding causality. The investigator, Rick Nevin, was simply looking at associations, and saw that the curves were heavily correlated, as you can quite clearly see. When you look at data involving large populations, such as violent crime rates, and compare them with an indirect measure of exposure to some environmental risk factor, such as levels of TEL in gasoline during that same time, the best you can say is that your alternative hypothesis of there being an association (the null hypothesis always being no association) deserves more investigation. This type of design is called a cross-sectional study, and it's been documented that associations seen in a population do not always hold for the individuals within it. This is the ecological fallacy, and it's a serious limitation in these types of studies. Finding a causal link between an environmental risk factor and a complex behavior like violent crime, as opposed to something like a specific disease, is exceptionally difficult, and the burden of proof is very high. We need several additional tests of our hypothesis using different study designs to really turn this into a viable theory. As Drum notes:
During the '70s and '80s, the introduction of the catalytic converter, combined with increasingly stringent Environmental Protection Agency rules, steadily reduced the amount of leaded gasoline used in America, but Reyes discovered that this reduction wasn't uniform. In fact, use of leaded gasoline varied widely among states, and this gave Reyes the opening she needed. If childhood lead exposure really did produce criminal behavior in adults, you'd expect that in states where consumption of leaded gasoline declined slowly, crime would decline slowly too. Conversely, in states where it declined quickly, crime would decline quickly. And that's exactly what she found.
Well that's interesting, so I looked a bit further at Reyes's study. In the study, she estimates prenatal and early childhood exposure to TEL based on population-wide figures, and accounts for potential migration from state to state, as well as other potential causes of violent crime, to get a stronger estimate of the effect of TEL alone. After all of this, she found that the fall in TEL levels by state accounted for a very significant 56% of the reduction in violent crime. Again, though, this is essentially a measure of association on population-level statistics, estimated at the individual level. It's well thought out and heavily controlled for other factors, but we still need more than this. Drum goes on to describe significant associations found at the city level in New Orleans. This is pretty good stuff too, but we really need a new type of study, specifically one that measures many individuals' exposure to lead and follows them over a long period of time to find out what happened to them. This type of design is called a prospective cohort study. Props again to Drum for directly addressing all of this.
The graph title pretty much says it all (Source)
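Since the ecological fallacy keeps coming up, here's a toy simulation with entirely made-up numbers showing how a near-perfect correlation between group averages can coexist with almost no relationship at the individual level, which is why population-level curves can generate a hypothesis but can't settle it:

import math
import random

random.seed(1)

# Entirely made-up setup: within each of 20 "states", a person's exposure and
# outcome are independent of each other, but the state AVERAGES of both track a
# third factor (call it urbanization), so the averages line up almost perfectly.
state_pairs, person_pairs = [], []
for urbanization in range(20):
    people = [(urbanization + random.gauss(0, 20),   # individual exposure
               urbanization + random.gauss(0, 20))   # individual outcome, unrelated to that person's exposure
              for _ in range(500)]
    person_pairs.extend(people)
    state_pairs.append((sum(x for x, _ in people) / 500,
                        sum(y for _, y in people) / 500))

def correlation(pairs):
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x, _ in pairs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for _, y in pairs))
    return cov / (sd_x * sd_y)

print("correlation of state averages: ", round(correlation(state_pairs), 2))   # close to 1
print("correlation across individuals:", round(correlation(person_pairs), 2))  # close to 0

Structurally, Nevin's curves are the first number; the cohort study below is an attempt to get at the second.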
The article continues by discussing a cohort study done by researchers at the University of Cincinnati, in which 376 children were recruited at birth between 1979 and 1984 to have the lead levels in their blood tested over time and their risk of being arrested measured, both in general and specifically for violent crime. Ultimately, some of these children were dropped from the study along the way, and 250 were included in the results. The researchers found that for each increase of 5 micrograms of lead per deciliter of blood, there was a higher risk of being arrested for a violent crime, but a further look at the numbers shows a more mixed picture than they let on. For prenatal blood lead, the effect was not significant. If an additional 5 µg/dl conferred no extra risk over the median exposure level, the ratio would be 1.0. They found that for their cohort, the risk ratio was 1.34. However, the sample size was small enough that the confidence interval ran from as low as 0.88 (which would paradoxically indicate that an additional 5 µg/dl during this period of development is actually protective) to as high as 2.03. This is not very helpful data for the hypothesis. For early childhood exposure, the risk ratio is 1.30, and the larger sample size gives a tighter confidence interval of 1.03-1.64. It's possible that the real effect is as little as a 3% increase in violent crime arrests, but it is still statistically significant. For 6-year-olds, it's a much more significant 1.48 (95% CI 1.15-1.89). It seems unusual to me that lead would have a more profound effect the older the child gets, but I need to look into it further. For a quick review of the concept of a CI, see my previous post on it. It really matters.
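To get a feel for why the prenatal interval is so wide, here's a sketch of how a 95% confidence interval for a risk ratio falls out of raw counts. The counts are invented purely to show the mechanics; the paper itself reports adjusted rate ratios from a regression model, not a simple two-by-two table.

import math

def risk_ratio_ci(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    # Standard large-sample (Katz) 95% confidence interval for a risk ratio.
    rr = (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)
    se_log = math.sqrt(1 / cases_exposed - 1 / n_exposed
                       + 1 / cases_unexposed - 1 / n_unexposed)
    lower = math.exp(math.log(rr) - 1.96 * se_log)
    upper = math.exp(math.log(rr) + 1.96 * se_log)
    return round(rr, 2), round(lower, 2), round(upper, 2)

# Invented counts: the same underlying ratio, measured on two very different sample sizes.
print(risk_ratio_ci(20, 100, 15, 100))      # small groups: wide interval straddling 1.0
print(risk_ratio_ci(200, 1000, 150, 1000))  # ten times the data: same ratio, much tighter interval

Same point estimate, very different conclusions about statistical significance; that's roughly the story of the prenatal estimate compared with the later-childhood ones.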
Obviously, we can't take this a step further into experimental data to enhance the hypothesis. We can't deliberately expose some children to lead and not others to see the direct effects. This is the best we can do, and it's possibly quite meaningful, but perhaps not. There's no way to say with much authority one way or another at this point, and not just because of the smallish sample size and the mixed results on significance. Despite being an improvement over cross-sectional designs, a cohort study is still measuring correlations, and we need more than one significant result. More cohort studies just like this, or perhaps quicker ones using previously collected blood samples to look at the connection retrospectively, are absolutely necessary to draw any conclusion on causality. Right now, this all still amounts to a hypothesis without a clear mechanism of action, although it's a hypothesis that definitely deserves more investigation. There are a number of other studies mentioned in the article showing other negative cognitive and neurological effects that could certainly have an indirect impact on violent crime, such as ADHD, aggressiveness, and low IQ, but that's not going to cut it either. By all means, we should try to make a stronger case for government to actively minimize children's exposure to lead more than we currently do, but we really, really should avoid statements like this:
Needless to say, not every child exposed to lead is destined for a life of crime. Everyone over the age of 40 was probably exposed to too much lead during childhood, and most of us suffered nothing more than a few points of IQ loss. But there were plenty of kids already on the margin, and millions of those kids were pushed over the edge from being merely slow or disruptive to becoming part of a nationwide epidemic of violent crime. Once you understand that, it all becomes blindingly obvious (emphasis mine). Of course massive lead exposure among children of the postwar era led to larger numbers of violent criminals in the '60s and beyond. And of course when that lead was removed in the '70s and '80s, the children of that generation lost those artificially heightened violent tendencies.
Woah. That's, um, a bit overconfident. Still, it's beyond debate that lead can have terrible effects on people, and although there's no real scientific basis for declaring this violent crime link closed in such strong language, it's a mostly benign case of confirmation bias, complete with blaming inaction on powerful interest groups. His motive is clearly to argue that we can safely add violent crime reduction to the cost-benefit analysis of lead abatement programs paid for by the government. I'd love to, but we just can't do that yet.
The $60B figure seems pretty contrived, but it's a generally accepted way to quantify the benefit of removing neurotoxins in wonk world. The $150B is almost completely contrived, and its very inclusion on the infographic is suspect. I certainly believe that spending $10B on cleaning up lead would be well worth it regardless, and I even question the value of a cost-benefit analysis in situations like this, but that doesn't mean I'm willing to more or less pick numbers out of a hat. That's essentially what you're doing if you only have one study that aims to address the ecological fallacy.
The big criticism of appealing to evidence would obviously be that it moves at a snail's pace, and there's a possibility we could be hemming and hawing over, and delaying action on, what really is a dire public health threat. Even if that were the case, though, public policy often works at a snail's pace too. If you're going to go after it, you gotta have more than one cohort study and a bunch of cross-sectional associations. Hopefully this gives you a bit more insight into how regulatory agencies like the EPA look at these issues. If this were to go up in front of them right now, I can guarantee you they would not act on the solutions Drum presents based on this evidence, and instead of throwing your hands up, I figure it's better to have an understanding of why that would be the case. It's a bit more calming, at least.
Update: I reworded the discussion on the proper hypothesis of a cross-sectional study to make it more clear. Your initial hypothesis in any cross-sectional study should be that the exposure has no association to the outcome.
Update 2: An edited version of this blog now appears on Discover's Crux blog. I'm amazed to see the response this entry got today, and I can't say enough about how refreshing it is to see Kevin Drum respond and refine his position a little. In my mind, this went exactly how the relationship between journalism and science should work. Perhaps he should have some latitude to make overly strong conclusions if the goal is really to get scientists to seriously look at it.
Update 3: This just keeps going and going! Some good criticism from commenters at Tyler Cowen's blog, as well as Andrew Gelman regarding whether I'm fishing for insignificant values. You can find my responses to each in their comment sections. Perhaps I did focus on the lower bounds of CIs inappropriately, but I think the context makes it clear I'm not claiming there's no evidence, just that I'd like to see replication. In that case, I think it's arguably pretty fair.
Update 4!!! This thing is almost a year old now! I've thought a lot about everything since, and wanted to revisit. Read if ya wanna.