Tuesday, December 24, 2013

Revisiting Lead and Violent Crime: A Year of Marinating On It

I'll just be honest: I've never really cared much for writing. A good post covering the sort of issues I want to tackle on this blog takes a ton of time and effort if I don't want to leave low-hanging fruit that damages my credibility. I simply value other things I can do with my free time more, so it's been months since I've written anything at all. I feel I have to revisit the post that took off, though, and write about what has and hasn't changed in my thinking since.

I started this blog as a place where friends could find evidence-based answers to questions that pop up all the time in the media, and hopefully then they'd direct others here. After 7 posts, that was pretty much out the window, which still sort of blows my mind. It's very easy (and often justified) to be cynical of any institution, even science. It's also easy to instinctively fear something and assume you know enough about it to justify that fear. Critical thinking is something you really have to develop over a long period of time, but done right, it's probably the pinnacle of human ingenuity. That or maybe whiskey.

On the other hand, sometimes it's easy to slide closer to nihilism. Knowing something about randomness and uncertainty can lead to being overly critical of sensible hypotheses. There's the famous BMJ editorial mocking the idea that there's "no evidence" for parachutes being a good thing to have in a free fall. You always have to keep that concept in the back of your mind, to remind you that some things really don't need to climb the hierarchy of good evidence. I've spent some of the last year wondering whether I did enough to show that's not where my analysis of Kevin Drum's article came from.

I think most people saw that I was trying to examine the evidence linking lead and crime through the lens of evidence-based medicine. The response was resoundingly positive, and most of the criticism centered on my half-assed statistical analysis, which I always saw as extremely tangential to the overall point. The best criticism forced me to rethink things and ultimately led to today.

Anyway, anecdotes and case reports are the lowest level of evidence in this lens. The ecological studies by Rick Nevin that Drum spends so much space describing (which I mistakenly identified as cross-sectional) are not much higher on the list. That's just not debatable in any way, shape, or form. A good longitudinal study is a huge improvement, as I think I effectively articulated, but if you read through some other posts of mine (on BPA, or on the evidence that a vegetarian diet reduces ischemia), you'll start to sense the problems those may present as well. Nevin's ecological studies practically scream "LOOK OVER HERE!" If I could identify the biggest weakness of my post, it's that I only gave lip service to a Bayesian thought process suggesting that, given the circumstances, these studies might amount to more than just an association. I didn't talk about how simple reasoning and prior knowledge would suggest something stronger, and use this to illustrate the shortcomings of frequentist analysis. I just said that in my own estimation, there's probably something to all of this. I don't know how convincing that was. I also acknowledge that one of my stated purposes, showing how the present evidence would fail agency review, may have come off as concern-trolling.

On the other hand, if there is indeed something more to this, it seems reasonable to expect a much stronger association in the cohort study than was found. Going back to Hill's criteria, strength of the association is the primary factor in determining the probability of an actual effect. When these study designs were first being developed to examine whether smoking caused lung cancer, the associations were an order of magnitude or more stronger than what was found for lead and violent crime. The lack of strength is not a result of small sample sizes or being underpowered; it's just a relatively small effect any way you look at it. It would have been crazy to use the same skeptical arguments I made in that instance, and history has not been kind to those who did.

Ultimately, I don't know how well the lens of evidence-based medicine fits the sort of question being asked here. Cancer epidemiology is notoriously difficult because of the length of time between exposure and onset of disease, and the sheer complexity of the disease. It still had a major success in identifying tobacco smoke as a carcinogen, but this was due to consistent, strong, and unmistakable longitudinal evidence in a specific group of individuals. Here, we're talking about a complex behavior, which may be even more difficult to parse. My motivation was never to display prowess as a biostatistician, because I'm not one. It was never to say that I'm skeptical of the hypothesis, either. It was simply to take a step back from saying we have identified a "blindingly obvious" primary cause of violent crime and are doing nothing about it.

I think the evidence tells us, along with informed reasoning, that we have a "reasonably convincing" contributing cause of violent crime identified, and we're doing nothing about it. That's not a subtle difference, and whether one's intentions are noble or not, if I think evidence is being overstated, I'm going to say something about it. Maybe even through this blog again some time.

Tuesday, May 28, 2013

Risk, Odds, Hazard...More on The Language

For every 100 g of processed meat people eat, they are 16% more likely to develop colorectal cancer during their lives. For asthma sufferers who took dupilumab in a recent trial, the odds of suffering an attack were reduced 87% relative to placebo. What does all this mean, and how do we contextualize it? What is risk, and how does it differ from hazard? Truthfully, there are several ways to compare the effects of exposure to some drug or substance, and the only one that's entirely intuitive is the one you're least likely to encounter unless you read the results section of a study.
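Risk and odds are easy to conflate, but they're different quantities, and the gap between them widens as an outcome becomes more common. Here's a quick sketch in Python, using made-up counts rather than figures from any study mentioned here, of how a risk ratio and an odds ratio can tell very different stories about the same data:

```python
def risk_ratio(a_events, a_total, b_events, b_total):
    """Relative risk: ratio of event probabilities between two groups."""
    return (a_events / a_total) / (b_events / b_total)

def odds_ratio(a_events, a_total, b_events, b_total):
    """Odds ratio: ratio of odds (events / non-events) between two groups."""
    odds_a = a_events / (a_total - a_events)
    odds_b = b_events / (b_total - b_events)
    return odds_a / odds_b

# Rare outcome: 2 vs. 1 events per 1,000 -- RR and OR nearly agree.
print(risk_ratio(2, 1000, 1, 1000))   # 2.0
print(odds_ratio(2, 1000, 1, 1000))   # ~2.002

# Common outcome: 500 vs. 250 events per 1,000 -- they diverge sharply.
print(risk_ratio(500, 1000, 250, 1000))  # 2.0
print(odds_ratio(500, 1000, 250, 1000))  # 3.0
```

This is why an odds ratio from a trial (like the dupilumab figure) shouldn't be casually read as "X% more/less likely" unless the outcome is rare.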

When you see statistics like those above, and pretty much every story reporting the results of a public health study will have them, each way of comparing risk elicits a different kind of reaction in a reader. I'll go back to the prospective cohort study suggesting that vegetarians are 1/3rd less likely to suffer from ischemic heart disease (IHD) than those who eat meat, because I think it's such a great example of how widely the interpretations can vary based upon which metric you use. According to this study, IHD was a pretty rare event; only 2.7% of over 44,500 individuals developed it at all. Among the meat-eaters, 1.6% developed IHD vs. 1.1% of vegetarians. If you simply subtract 1.1% from 1.6%, you might intuitively sense that eating meat didn't really add that much risk. Another way of putting it: out of every 1,000 people, 16 who eat meat will develop IHD vs. 11 vegetarians. This could be meaningful if you were able to extrapolate these results to an entire population of, say, 300 million people, where 1.5 million fewer cases of IHD would develop, but I think most epidemiologists would be very cautious about zooming out that far based upon one estimate from a single cohort study. Yet another way of looking at the effect is the "number needed to treat" (NNT), which refers to how many people would need to be vegetarian for one person to benefit. In this case, the answer is 200. That means 199 people who decide to change their diet to cut out meat entirely wouldn't even benefit in terms of developing IHD during their lifetime.
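The arithmetic behind those comparisons is simple enough to check yourself. A minimal sketch using the approximate incidence figures quoted above (1.6% for meat-eaters vs. 1.1% for vegetarians):

```python
# Approximate figures from the cohort study discussed above:
# 1.6% of meat-eaters vs. 1.1% of vegetarians developed IHD.
risk_meat = 0.016
risk_veg = 0.011

# Absolute risk reduction: the plain difference in probabilities.
arr = risk_meat - risk_veg           # ~0.005, i.e. 5 per 1,000

# Relative risk: how many times more likely the outcome is for meat-eaters.
rr = risk_meat / risk_veg            # ~1.45

# Number needed to treat: reciprocal of the absolute risk reduction.
nnt = 1 / arr                        # ~200

print(f"ARR: {arr:.3f}")
print(f"RR:  {rr:.2f}")
print(f"NNT: {nnt:.0f}")
```

Note how the same data yield a modest-sounding "5 fewer cases per 1,000" and a dramatic-sounding "45% higher relative risk"; the NNT of 200 is what drives the "199 people wouldn't benefit" framing.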


Wednesday, April 24, 2013

The EWG Dirty Dozen: Whoever Said Ignorance is Bliss Definitely Didn't Have Chemistry in Mind

Each year, the Environmental Working Group (EWG) compiles a "Dirty Dozen" list of the produce with the highest levels of pesticide residues. The 2013 version was just released this week, framed as a handy shopping guide that can help consumers reduce their exposure to pesticides. Although they do say front and center that it's not intended to steer consumers away from "conventional" produce if that's what they have access to, this strikes me as talking out of both sides of their mouth. How can you say that if you really believe the uncertainties are meaningful enough to justify creating the list, and creating it with the context completely removed? I'm pretty certain the Dirty Dozen preaches to the choir and doesn't change many people's behavior, but its underlying message, however well-intentioned, does some genuine harm regardless. The "appeal to nature" fallacy and "chemophobia" overwhelm legitimate scientific debate, have the potential to polarize a nuanced issue, and tend to cause people stress and worry that's just not necessary. This is not going to be a defense of pesticides, but a defense of evidence-based reasoning, and an examination of how sometimes evidence contradicts or complicates simplified narratives. You should eat any fruits and vegetables you have access to, period, no asterisk.

This latte should not be so happy. It's full of toxins. (Source)
Almost 500 years ago, the Renaissance physician Paracelsus established that the mere presence of a chemical is basically meaningless when he wrote, to paraphrase, "the dose makes the poison." The question we should really be asking is "how much of the chemical is there?" Unfortunately, this crucial context is not available from the Dirty Dozen list, because context sort of undermines the reason for the list's existence.

When we are absorbing information, it comes down to which sources we trust. I understand why people trust environmental groups more than regulatory agencies, believe me. However, one of the recurring themes on this blog is how evidence-based reasoning often doesn't give the answers the public is looking for, whether it's regarding the ability to reduce violent crime by cleaning up lead pollution, or banning BPA. I think a fair amount of the mistrust of agencies can be explained by this disconnect rather than by a chronic inability to do their job. However true it may be that agencies have failed to protect people in the past, it's not so much because they're failing to legitimately assess risk; it's for reasons such as not sounding an alarm and looking the other way when we know that some clinical trials are based on biased or missing data.

Calling certain produce dirty without risk assessment is sort of like putting me in a blindfold and yelling "FIRE!" without telling me whether it's in a fireplace or the next room is going up in flames. When two scientists at UC Davis looked into the methodology used by the EWG for the 2010 list, they determined that it lacked scientific credibility, and decided to create their own model based upon established methods. Here's what they found:

Tuesday, April 2, 2013

The Antibody for CD47: How a Promising Treatment Can Get to Patients

Perhaps (I hope) you've heard that imagining all forms of cancer as a single disease is pretty misleading. Some tumors are liquid, some are solid. Some develop on tissue lining one of the many cavities within or on the outside of the body, and some show up on bone or connective tissue. In virtually every instance, cancer researchers have found that the differences are far more pronounced than the similarities. For decades, efforts to find the one thing common to all of these tumors, in order to find a single, broad potential cure, have basically come up empty. This helps explain why progress on treating cancer has been so aggravatingly slow, and why drugs for breast cancer don't necessarily help a patient with lymphoma. On March 26, the Proceedings of the National Academy of Sciences (PNAS) published an article (some images from which are below) based on tissue cultures and experiments on mice that exploits one broad similarity between many types of tumors, which could potentially lead to a rather simple, single therapy that thus far shows no signs of unacceptable toxicity in the mice. Sounds great, right? So how does this work, what happens next, and how unprecedented is this?

Just look at what happens when you target CD47
Ultimately, drugs are molecules, and when they work, it's because there's a target that fits the unique shape and characteristics of the drug. Some early chemotherapeutic drugs, such as vincristine or vinblastine, worked because they bound to the ends of molecules that form the components of a tiny skeletal framework holding together virtually every cell in our bodies. Unable to support themselves, new cells across the board did not grow and divide, but since cancer cells grow faster than normal cells, they were the ones more affected. Hair and blood cells also grow rapidly, leading to the most familiar side effects of chemotherapy: hair loss and extreme fatigue. So while these drugs may have led to remission for some patients, it was rarely without excruciating side effects. The therapy described in PNAS takes a very different approach. The target is a protein embedded on the surface of cells called CD47, which, when expressed, prevents macrophages in the immune system from engulfing and destroying those cells. It does this by binding to a different protein expressed on the surface of the immune cell that happens to fit it quite well. When CD47 is bound to the protein on the macrophage, a signal is sent not to eat whatever cell it's attached to. It's an amazingly elegant system, and fatefully, according to the study, cancer cells happen to express a lot more CD47 than normal cells. The researchers used an antibody for CD47, yet another protein, which could bind it in place of the protein on the surface of the macrophage. This blocks the "don't eat me" signal and allows the immune cell to do its normal job of destroying something it doesn't recognize. Previous studies had established that this antibody helps shrink leukemia, lymphoma, and bladder cancer in mice, so the PNAS study expanded upon this to look at ovarian, breast, and colon cancers, as well as glioblastoma.
The antibody effectively inhibited growth in each case, sometimes outright eliminating smaller tumors. Larger tumors, the authors note, would likely still need surgical removal prior to antibody therapy. There's no question now: this needs to be tested in actual human patients.

The next step will be to organize what's called a phase I trial, which enrolls some brave (or desperately poor) individuals, perhaps up to 100, to help determine whether the drug is even safe enough to find out whether it works, and what dose can be tolerated. Often, for simplicity's sake, phase I is combined with phase II trials involving ideally a couple hundred more individuals, which appears to be the intention with the antibody therapy. Phase II trials answer the question "can it work?", with the assumption going in that it doesn't. For a refresher on how the future trial data will be analyzed, see my previous post on basic statistics. Should this phase II trial pan out, meaning sufficient biological activity is observed without unacceptable risks, and there's obviously no guarantee that this will happen, a new, more robust trial will be designed. Phase III trials answer the question everyone wants to know: does it work? The ideal phase III trial involves several thousand patients, which probably wouldn't be too difficult to find when the drug could save their life. In this stage, the new therapy would be compared to the best current therapy rather than placebos, because a placebo isn't a treatment, and would be unbelievably unethical to give to a cancer patient. Take a look at this page from the National Cancer Institute for more information specific to how cancer trials operate.

Oftentimes, unfortunately, the process isn't as smooth as I've outlined. Trials are increasingly outsourced to countries outside the US and Europe, where regulations and ethical frameworks are not nearly as strong, and of course, a few thousand patients in a randomized controlled trial can't catch every potential adverse effect. And then there's the question of who funds these trials and how they manage them, but I'm not going there. For every thousand poor critiques of Big Pharma you can find on the Internet, there's only one Ben Goldacre who does it right. I recommend Bad Pharma if you really want to know more about where this could all go completely off the rails.

Tumors go bye-bye
That's a long, long road that takes up to a decade, and potentially billions of dollars spent, before this potential drug could ever reach the general cancer patient. Given this, it's not really too surprising how much pharmaceutical companies spend on advertising once they beat the odds and get a new drug approved by the FDA. And that brings me to the sobering part of the CD47 story. Thousands of chemicals have been shown to kill cancer cells in vitro, and just a cursory search of the national registry of clinical trials for RCTs involving antibody therapy for cancer alone brings up nearly 1,100 results in various stages: withdrawn, suspended, or currently recruiting patients. This is just a tiny sampling of all clinical trials on cancer therapies, all of which got to where they are because they were once so promising in test tubes and animal models. So when you read about this study in the media, it's natural to hope we've finally found a major breakthrough. Maybe we really have. The odds are certainly long, though, and hopefully this post helps you understand that there are perfectly legitimate reasons why. If it doesn't pan out, it's not gonna be because a trillion-dollar industry held it down; it's gonna be because of unacceptable toxicity, or because the effectiveness simply doesn't translate to us, or because it's not significantly better than current treatments. I can't possibly conceive of a worldview where drug companies wouldn't want to get this to people ASAP.

4/25/2013 - UPDATE: I had heard about the FDA's recent Breakthrough Designation, which is intended to expedite the long process of getting drugs to patients with serious conditions, but it didn't come to mind for this post. A melanoma drug received breakthrough designation yesterday, after very preliminary trials showed a marked response in patients. Stay tuned to see if the CD47 antibody therapy joins the ranks.