His Science Is Too Tight!<br />
<br />
On the Process (2015-02-03)<br />
<br />
Here we are in 2015, debating the safety and efficacy of vaccines. I've always shied away from writing about vaccines because there are so many people out there who can write about them so much better, and with a much bigger audience. But here we are, in 2015, and I guess every little bit can help.<br />
<br />
What non-scientists tend not to grasp about science is that it's not just a collection of facts you read in a book. It's the only process of gaining information about the universe that is willing, and in fact actively desires, to prove itself wrong. You all know that the process starts with a hypothesis that needs to be tested, and that the results of the test may then lead to peer review, but where I think the understanding diverges is that this peer-review process absolutely does not stop at publication.<br />
<br />
Authors have to defend their studies and their data to a collective body of tens of thousands of people around the world who have spent millions of cumulative hours studying the same field, subjecting themselves to some of the harshest criticism in any human endeavor. Sometimes they do this for decades, and if you're only casually paying attention, it seems like it goes back and forth and nobody knows anything, but that's almost never really the case. Sometimes conflicts of interest and fraudulent data are exposed that immediately discredit a study. Sometimes good ideas and compelling data really are suppressed, but the key is that this cannot be done by an entire field. Fifty years ago, scientists were certain that lead in gasoline was an environmental and human health disaster unfolding in slow motion. They knew that tobacco dramatically increased the risk of lung cancer. The "debate" was not scientific; it was entirely political. If a researcher is claiming persecution by some industry or another, see how that researcher's field as a whole sees the claim before you assume anything. The power dynamic alone tells you little, and if you're like me, that's a very difficult thing to accept, but it's true.<br />
<br />
So yes, the process is messy, but not so messy as to make cynicism and nihilism the appropriate response. If this process eventually results in a consensus among all of these people that is completely overwhelming, then if you "disagree", the problem is your understanding of the topic. It's not Big Pharma. It's not Big Ag. It's not Big Government. It's you. You don't get to bypass this process because you can make a fancy webpage about "toxins". You don't get to bypass this process with superficial catchphrases like "treat the cause, not the symptoms", or by lobbing around terms like "reductionist" or "scientism" that are often just catch-all terms for dismissing empirical evidence out of hand. You don't need to know how a vaccine works on a molecular level to know that it works. A randomized controlled trial doesn't depend on a detailed understanding of biochemistry; it depends on simply comparing the final outcomes of two or more options. That's not reductionism, that's not arrogance, that's just the way we can tell whether something works. If a new measles prevention therapy continually and undeniably works better than the MMR vaccine, and it's determined at some point that the reason has to do with chi or chakras or praying to Odin, then I guess it's time to unlearn everything I know.<br />
<br />
If you watch something like "Cosmos", which is meant to communicate how various fields coalesce into an overall understanding of how the universe works, virtually every single fact mentioned in the show has gone through some version of this process, whether in physics, chemistry, biology, or any other field. Whenever a finding revolutionized a field, it was the result of years of build-up, and years of sometimes bitter debate. People have always tried to bypass this process, but they are always forgotten over time, because the process wins every time.<br />
<br />
You'll find a crank in every corner of every field: someone with the credentials to adequately assess this same evidence, who for one reason or another decides to go against the grain. You might even find a Rand Paul or a Chris Christie to elevate these cranks and force dumbass pundits on cable news to debate things that have been settled for decades. But you'll know better, because now you understand the real meaning behind the phrase "extraordinary claims require extraordinary evidence."<br />
<br />
<br />
Revisiting Lead and Violent Crime: A Year of Marinating On It (2013-12-24)<br />
<br />
I'll just be honest: I've never really cared much for writing. A good post covering the sort of issues I want to tackle on this blog takes a ton of time and effort if I don't want to leave low-hanging fruit to damage my credibility, and there are other things I'd rather do with my free time, so it's been months since I've written anything at all. I feel I have to revisit <a href="http://hisscienceistootight.blogspot.com/2013/01/the-link-between-leaded-gasoline-and.html" target="_blank">the post that took off</a>, though, and write about what has and hasn't changed in my thinking since.<br />
<br />
I started this blog as a place where friends could find evidence-based answers to questions that pop up all the time in the media, hoping they'd then direct others here. After 7 posts, that was pretty much out the window, which still sort of blows my mind. It's very easy (and often justified) to be cynical of any institution, even science. It's also easy to instinctively fear something and assume you know enough about it to justify that fear. Critical thinking is something you really have to develop over a long period of time, but done right, it's probably the pinnacle of human ingenuity in history. That or maybe whiskey.<br />
<br />
On the other hand, sometimes it's easy to slide closer to nihilism. Knowing something about randomness and uncertainty can lead to being overly critical of sensible hypotheses. There's the famous <a href="http://www.bmj.com/content/327/7429/1459" target="_blank">BMJ editorial</a> mocking the idea that there's "no evidence" for parachutes being a good thing to have in a free fall. You always have to keep that concept in the back of your mind, to remind you that some things really don't need to climb the hierarchy of good evidence. I've spent some of the last year worrying that maybe I didn't do enough to show that's not where my analysis of Kevin Drum's article came from.<br />
<br />
I think most people saw that I was trying to illustrate the evidence linking lead and crime through the lens of evidence-based medicine. The response was resoundingly positive, and most criticism centered on my half-assed statistical analysis, which I always saw as extremely tangential to the overall point. The best criticism forced me to rethink things and ultimately led to this post.<br />
<br />
Anyway, anecdotes and case reports are the lowest level of evidence in this lens. The ecological studies by Rick Nevin that Drum spends much space describing (which I mistakenly identified as cross-sectional) are not much higher on the list. That's just not debatable in any way, shape, or form. A good longitudinal study is a huge improvement, as I think I effectively articulated, but if you read through some other posts of mine (<a href="http://hisscienceistootight.blogspot.com/2013/03/allow-me-to-curate-state-of-bpa.html" target="_blank">on BPA</a>, <a href="http://hisscienceistootight.blogspot.com/2013/02/new-proof-that-being-vegetarian-is.html" target="_blank">or on the evidence that a vegetarian diet reduces ischemia</a>), you'll start to sense the problems those may present as well. Nevin's ecological studies practically scream "LOOK OVER HERE!" If I could identify the biggest weakness of my post, it's that I only gave lip service to a Bayesian thought process suggesting that, given the circumstances, these studies might amount to more than just an association. I didn't talk about how simple reasoning and prior knowledge would suggest something stronger, and use this to illustrate the shortcomings of frequentist analysis. I just said that in my own estimation, there's probably something to all of this. I don't know how convincing that was. I also acknowledge that one of my stated purposes, showing how the present evidence would fail agency review, may have come off as concern-trolling.<br />
<br />
On the other hand, if there is indeed something more to this, it seems reasonable to expect a much stronger association in the cohort study than was found. Going back to <a href="http://en.wikipedia.org/wiki/Bradford_Hill_criteria" target="_blank">Hill's criteria</a>, strength of the association is the primary factor in determining the probability of an actual effect. When these study designs were first being developed to examine whether smoking caused lung cancer, the associations were vastly stronger than what was found for lead and violent crime. The lack of strength here is not a result of small sample sizes or being underpowered; it's just a relatively small effect any way you look at it. It would have been crazy to deploy the same skeptical arguments I made against those smoking studies, and history has properly judged those who did.<br />
<br />
Ultimately, I don't know how well the lens of evidence-based medicine fits the sort of question being asked here. Cancer epidemiology is notoriously difficult because of the length of time between exposure and onset of disease, and the sheer complexity of the disease. It still had a major success with identifying tobacco smoke as a carcinogen, but this was due to consistent, strong, and unmistakable longitudinal evidence of a specific group of individuals. Here, we're talking about a complex behavior, which may be <i>even more</i> difficult to parse. My motivation was never to display prowess as a biostatistician, because I'm not. It was never to say that I'm skeptical of the hypothesis, either. It was simply to take a step back from saying we have identified a "blindingly obvious" primary cause of violent crime and we're doing nothing about it.<br />
<br />
I think the evidence, along with informed reasoning, tells us that we have a "reasonably convincing" <i>contributing</i> cause of violent crime identified, and we're doing nothing about it. That's not a subtle difference, and whether one's intentions are noble or not, if I think evidence is being overstated, I'm going to say something about it. Maybe even through this blog again some time.<br />
<br />
Risk, Odds, Hazard...More on The Language (2013-05-28)<br />
<br />
For every 100 g of processed meat people eat, they are <a href="http://www.wcrf.org/PDFs/Colorectal-cancer-CUP-report-2010.pdf" target="_blank">16% more likely</a> to develop colorectal cancer during their lives. For asthma sufferers who took dupilumab in a <a href="http://www.forbes.com/sites/matthewherper/2013/05/21/new-asthma-drug-generates-excitement-for-patients-and-its-maker/" target="_blank">recent trial</a>, the odds of suffering an attack were reduced 87% relative to a placebo. What does all this mean, and how do we contextualize it? What is risk, and how does it differ from hazard? Truthfully, there are several ways to compare the effects of an exposure to some drug or substance, and the only one that's entirely intuitive is the one you're least likely to encounter unless you read the results section of a study.<br />
<br />
When you see statistics like those above, and pretty much every story reporting the results of a public health study will have them, each way of comparing risk elicits a different kind of reaction in a reader. I'll go back to the prospective cohort study suggesting that <a href="http://hisscienceistootight.blogspot.com/2013/02/new-proof-that-being-vegetarian-is.html" target="_blank">vegetarians are 1/3rd less likely</a> to suffer from ischemic heart disease (IHD) than those who eat meat, because I think it's such a great example of how widely the interpretations can vary based upon which metric you use. According to this study, IHD was a pretty rare event; only 2.7% of over 44,500 individuals developed it at all. Among meat-eaters, 1.6% developed IHD vs. 1.1% of vegetarians. If you simply subtract 1.1% from 1.6%, you might intuitively sense that eating meat didn't really add <i>that</i> much risk. Another way of putting it: out of every 1,000 people, 16 who eat meat will develop IHD vs. 11 vegetarians. This could be meaningful if you were able to extrapolate these results to an entire population of, say, 300 million people, where 1.5 million fewer cases of IHD would develop, but I think most epidemiologists would be very cautious about zooming out that far based upon one estimate from a single cohort study. Yet another way of looking at the effect is the "number needed to treat" (NNT), which refers to how many people would need to be vegetarian for one person to benefit. In this case, the answer is <strike>20</strike> 200 (oops!). That means, on average, 199 of every 200 people who change their diet to cut out meat entirely wouldn't even benefit in terms of developing IHD during their lifetime.<br />
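As a quick sketch of the arithmetic above (a hypothetical Python snippet using the study's rounded figures, not anything from the study itself):

```python
# Hypothetical sketch using the rounded figures from the cohort study above.
meat_risk = 0.016  # 1.6% of meat-eaters developed IHD
veg_risk = 0.011   # 1.1% of vegetarians developed IHD

# Absolute risk reduction: the plain difference between the two groups.
arr = meat_risk - veg_risk  # 0.005, i.e. 0.5 percentage points

# Number needed to treat: how many people would need to switch diets
# for one of them, on average, to avoid IHD.
nnt = round(1 / arr)  # 200

print(f"ARR = {arr:.3f}, NNT = {nnt}")
```

The NNT is just the reciprocal of the absolute risk reduction, which is why a small absolute difference translates into a large number of people treated per person who benefits.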
<br />
<br />
<a name='more'></a><br />
<br />
When subtracting the two totals, the 0.5% difference is termed the "absolute risk reduction" of being a vegetarian based upon this study. However easy it is to comprehend, researchers generally opt for other measures of comparing risk: either relative risk or the odds ratio, depending on the type of study design used.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcTmUW4DqmZjpFLil9_sxTOlijblTDLy4RPKDhZzszorNwXxWAXBpQ" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="194" src="https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcTmUW4DqmZjpFLil9_sxTOlijblTDLy4RPKDhZzszorNwXxWAXBpQ" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">RR1 = A/(A+B); RR2 = C/(C+D); The "RR" reported in the media = RR1/RR2. (<a href="http://www.wikihow.com/Calculate-Relative-Risk" target="_blank">Good Source for background on the maths!</a>)</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div style="text-align: left;">
Relative risk (RR) is still pretty intuitive, and is a valuable comparison in prospective cohort studies when you have follow-up data on the vast majority of your enrollees. Using our study on vegetarians, the RR of being a vegetarian is 0.69. It's a bit awkward to interpret this as meaning that vegetarians have 69% of the risk of developing IHD that meat eaters do, so the RR is subtracted from 1 to reflect the difference in probability between the two groups. This rephrases the result to say that vegetarians are 31% less likely to develop IHD. This is what almost every headline ran with, but compare this value to the NNT. Which one do you think helps you visualize the expected impact more? They're two ways of interpreting the exact same data, but if you were running a drug company, and a drug you spent $300M developing just went through a clinical trial with the exact same results as the vegetarian study, which one would you use in marketing your drug? I don't think you'd want to highlight that 200 people could start using this treatment before any one of them would show the intended benefit. You get a pretty different response by saying their risk is cut by 1/3rd.</div>
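The same two percentages can be recast as relative risk, the framing the headlines used (a hypothetical Python sketch, not the study's own analysis):

```python
# Same hypothetical rounded figures as the cohort study discussed above.
meat_risk = 0.016  # 1.6% of meat-eaters developed IHD
veg_risk = 0.011   # 1.1% of vegetarians developed IHD

rr = veg_risk / meat_risk  # relative risk, about 0.69
rrr = 1 - rr               # relative risk reduction, about 0.31

# The headline framing: "vegetarians are 31% less likely to develop IHD"
print(f"RR = {rr:.2f}, relative reduction = {rrr:.0%}")
```

Note that nothing about the data changed between this sketch and the NNT calculation; only the framing did.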
<br />
One other way to compare the effect of a treatment or exposure is the odds ratio (OR), which does a good job of approximating the RR when the outcome you're measuring is rare. The RR is a ratio of two probabilities, each of which falls somewhere between 0 and 1. The OR is a ratio of two odds, and odds, unlike probabilities, can run anywhere from 0 to ∞; that difference becomes pronounced when the outcome is common. Case-control studies use the OR because you don't have follow-up data on specific individuals. For our study on vegetarians, using a case-control design would mean I started with data on actual cases of IHD, and went back through previous survey records to compare which of those people were vegetarians vs. meat eaters. Because IHD is relatively rare, the OR found would likely be quite similar to the RR found in the prospective cohort design. But the two aren't interchangeable: if I ran this case-control study and found an OR of 0.69, I'm not saying that being a vegetarian reduces your risk of IHD by 31%, like I found in the cohort study; I'm saying your <i>odds</i> of having IHD are 31% less if you are a vegetarian. It seems like the distinction should barely matter, but evidence pretty strongly suggests that as the OR increases, it <a href="http://journals.lww.com/greenjournal/fulltext/2001/10000/an_odd_measure_of_risk__use_and_misuse_of_the_odds.28.aspx" target="_blank">diverges more and more</a> from the RR, and is <a href="http://www.bmj.com/content/316/7136/989" target="_blank">difficult to interpret</a>. You even see it confused for the RR, not just in the media but in academic journals.<br />
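To see the rare-outcome approximation concretely, here's a small sketch (with hypothetical numbers) comparing the two measures:

```python
def rr_and_or(p_exposed: float, p_control: float) -> tuple[float, float]:
    """Return (relative risk, odds ratio) for two event probabilities."""
    rr = p_exposed / p_control
    odds_exposed = p_exposed / (1 - p_exposed)
    odds_control = p_control / (1 - p_control)
    return rr, odds_exposed / odds_control

# Rare outcome (like IHD above): the OR closely tracks the RR.
print(rr_and_or(0.011, 0.016))  # RR ~0.69, OR ~0.68

# Common outcome: the two diverge sharply.
print(rr_and_or(0.40, 0.60))    # RR ~0.67, OR ~0.44
```

When the probabilities are small, 1 minus each probability is close to 1, so the odds barely differ from the probabilities; that's the whole rare-outcome approximation in one line of algebra.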
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://www.bmj.com/highwire/filestream/412908/field_highwire_fragment_image_m/0/F3.medium.gif" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://www.bmj.com/highwire/filestream/412908/field_highwire_fragment_image_m/0/F3.medium.gif" height="165" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The higher the OR, the less easily you can compare it to RR (BMJ)</td></tr>
</tbody></table>
I don't really expect everyone to understand all this perfectly. The main takeaway is that there are multiple ways to compare people in medical studies, and even when they report more or less the same thing, each can have a very different impact on how you react to what you're reading. There's one other key point I want to make: risk and odds depend on exposure, while hazard does not. And let's face it, we do a terrible job of teaching people how to assess risk. Generally speaking, if I were to stop a random person on the street and ask them what risk means, I'm certain the vast majority would give me the definition of hazard.<br />
<br />
To illustrate the difference, let's switch gears from our vegetarian study to my <a href="http://hisscienceistootight.blogspot.com/2013/04/the-ewg-dirty-dozen-whoever-said.html" target="_blank">most recent post</a> about the EWG's Dirty Dozen list. This is the <i>perfect</i> case of conflating risk and hazard. Pesticides are hazardous to people's health. Everyone acknowledges that. That is to say, if I worked on a farm and mixed the pesticide solutions I spray on crops, I would be doing something potentially dangerous to my health. How dangerous, though? Again, there are hazards to every pesticide we use, but the <i>probability</i> that glyphosate would negatively affect my health is much, much lower than the probability that dieldrin would. Likewise, if I were to eat an apple with a trace amount of imidacloprid, the probability of that hazard actually affecting me is close to 0. That is risk, and it is absolutely essential to understanding this topic. Technically speaking, any time I leave my house, there is a hazard of getting mugged (I do live in Chicago, after all). However, my risk changes depending on the time of day, the neighborhood, and whether I'm by myself or not. It's exactly the same when looking at the chemicals in your diet or your surroundings.<br />
<br />
If you can remember this not-so-subtle difference, and remember that relative risk without other ways of looking at the same data can mislead, I think you have all the tools you need to better navigate the confusing landscape of healthcare-related media. It's pretty much impossible to take an analysis seriously without this background, and learning it really did change the way I look at things for the better. I hope it does the same for you.<br />
<br />
The EWG Dirty Dozen: Whoever Said Ignorance is Bliss Definitely Didn't Have Chemistry in Mind (2013-04-24)<br />
<br />
Each year, the Environmental Working Group (EWG) compiles a "Dirty Dozen" list of the produce with the highest levels of pesticide residues. The <a href="http://www.ewg.org/foodnews/summary.php" target="_blank">2013 version</a> was just released this week, framed as a handy shopping guide that can help consumers reduce their exposure to pesticides. Although they do say front and center that it's not intended to steer consumers away from "conventional" produce if that's what they have access to, this strikes me as talking out of both sides of their mouth. How can you say that if you really believe the uncertainties are meaningful enough to justify creating the list, and creating it with the context completely removed? I'm pretty certain the Dirty Dozen preaches to the choir and doesn't change many people's behavior, but its underlying message, however well-intentioned, does some genuine harm regardless. The "appeal to nature" fallacy and "<a href="http://en.wikipedia.org/wiki/Chemophobia" target="_blank">chemophobia</a>" overwhelm legitimate scientific debate, have the potential to polarize a nuanced issue, and tend to cause people stress and worry that's just not necessary. This is not going to be a defense of pesticides, but a defense of evidence-based reasoning, and an examination of how evidence sometimes contradicts or complicates simplified narratives. You should eat any fruits and vegetables you have access to, <i>period</i>, no asterisk.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="http://25.media.tumblr.com/tumblr_m2yb9v3p4T1rpehnco1_500.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://25.media.tumblr.com/tumblr_m2yb9v3p4T1rpehnco1_500.jpg" height="216" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">This latte should not be so happy. It's full of toxins. (<a href="http://25.media.tumblr.com/tumblr_m2yb9v3p4T1rpehnco1_500.jpg" target="_blank">Source</a>)</td></tr>
</tbody></table>
Almost 500 years ago, the Renaissance physician Paracelsus established that the mere presence of a chemical is basically meaningless when he wrote, to paraphrase, "the dose makes the poison." The question we should really be asking is "how much of the chemical is there?" Unfortunately, this crucial context is not accessible from the Dirty Dozen list, because context sort of undermines the reason for the list's existence. When we are absorbing information, it comes down to which sources we trust. I understand why people trust environmental groups more than regulatory agencies, believe me. However, one of the recurring themes on this blog is how evidence-based reasoning often doesn't give the answers the public is looking for, whether it's regarding the ability to <a href="http://hisscienceistootight.blogspot.com/2013/01/the-link-between-leaded-gasoline-and.html" target="_blank">reduce violent crime</a> by cleaning up lead pollution, or <a href="http://hisscienceistootight.blogspot.com/2013/03/allow-me-to-curate-state-of-bpa.html" target="_blank">banning BPA</a>. I think a fair amount of the mistrust of agencies can be explained by this disconnect rather than by a chronic inability to do their job. However true it may be that agencies have failed to protect people in the past, it's not so much because they're failing to legitimately assess risk; it's for reasons like staying quiet and looking the other way when we know that some clinical trials are based on biased or missing data. Calling certain produce dirty without risk assessment is sort of like putting me in a blindfold and yelling "FIRE!" without telling me whether it's in a fireplace or whether the next room is going up in flames.<br />
<br />
When two scientists at UC Davis looked into the methodology used by the EWG for the 2010 list, they determined that it lacked scientific credibility, and decided to create <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3135239/" target="_blank">their own model</a> based upon established methods. Here's what they found:<br />
<br />
<a name='more'></a><br />
Out of the 120 items tested, only <i>one</i> even reached 1% of the FDA's imposed limit, and it was only at 2%. Only seven (5.8%) exceeded 0.1% of the limit; literally 94% of the "dirty dozen" samples were below 1/1000th of the FDA limit. Over 75% of the samples contained less than 0.01% (which corresponds to roughly 1,000,000 times below the chronic No Observable Adverse Effect Levels from animal toxicology studies), and 40.8% had exposure estimates below 0.001% of the FDA limit. If you're concerned that perhaps the FDA limits are arbitrary or too lenient, please understand that every pesticide, by regulation under FIFRA, requires extensive acute and chronic toxicity testing, and the residue limit is set at least 100 times below the established "no effect" level. The EPA periodically reviews new evidence to see if its toxicity classification is still justified, and can pull a pesticide off the market if there's quality experimental evidence that supports that decision. So yeah, although we discover new things all the time about how chemicals interact with our bodies, when exposures are this negligible, it's sort of a parallel issue. It's just not worth worrying about.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhljt_6zng74ldCmK4qLMQq2b6UCGegdPixH6J1YQe50cWOmwJd0KeHLMrPCO2ANt-YZudL5XdFQqDzV7UMxxXTdK4PRhBmi7QmLXmii1ccDO0ssAHDEaK25EifIU7WsFw7kjES23pMKs4/s1600/Capture.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhljt_6zng74ldCmK4qLMQq2b6UCGegdPixH6J1YQe50cWOmwJd0KeHLMrPCO2ANt-YZudL5XdFQqDzV7UMxxXTdK4PRhBmi7QmLXmii1ccDO0ssAHDEaK25EifIU7WsFw7kjES23pMKs4/s1600/Capture.PNG" height="307" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Pesticide residues found on the 2011 "most dirty" produce: celery. Reference dose is the FDA limit. Notice how exceedingly small the exposure is. We're talking nanograms per kilogram of your body weight. (Source: UC Davis study)</td></tr>
</tbody></table>
<br />
On its face, it seems logical that mere presence could matter. After all, these chemicals are designed to kill things. However, as with pretty much everything in life, the reality is a bit more complex. Regular table salt kills slugs, but we know that up to a point it enhances flavor without additional health risk. Drinking too much water can friggin <a href="http://www.scientificamerican.com/article.cfm?id=strange-but-true-drinking-too-much-water-can-kill" target="_blank">kill people</a>. It takes 10 times less <a href="http://npic.orst.edu/ingred/cuso4.html" target="_blank">copper sulfate</a>, a chemical sometimes used as pest control in organic farming, to kill 50% of a test population of rats than it does the synthetic pesticide <a href="http://iaspub.epa.gov/apex/pesticides/f?p=CHEMICALSEARCH:3:0::NO:1,3,31,7,12,25:P3_XCHEMICAL_ID:2477" target="_blank">glyphosate</a>, so it's possible that using the Dirty Dozen list to buy organic actually points you in the wrong direction from a health perspective. This measurement is called the <a href="http://en.wikipedia.org/wiki/LD50" target="_blank">LD<span style="font-size: xx-small;">50</span></a>, the standard measure of acute toxicity. If you want to understand risk better, take a look at the table of various chemicals, natural and synthetic, from the LD<span style="font-size: xx-small;">50</span> link above to get a sense of how we determine how toxic something is. Read about the limitations of the test and make your own informed decision. Using the "appeal to nature" line of reasoning, in which synthetic is inherently bad at any level, is sort of the mirror image of climate skeptics scoffing at the risk of increased carbon dioxide in the atmosphere by proclaiming it "plant food". Context is everything. If you're interested in convincing fence-sitters about the dangers of applying pesticides, preying on chemophobia has the opposite of the effect you're looking for.<br />
<br />
If you're really interested in improving the food system or making a positive impact on public health, it's of vital importance to stick to evidence and be up-front about what we do and don't know. People don't respond to alarm very well even when the science totally justifies it, so it stands to reason that they won't respond to alarm when science doesn't really justify it.<br />
<br />
There are, of course, a wide variety of pesticides on the market. Some are used because they're specifically good at killing insects, some because they kill weeds, and some to prevent fungal infections. Furthermore, each type of pesticide has several distinct classes within it. For instance, within insecticides there are organochlorides, organophosphates, and neonicotinoids, which all have different biochemical mechanisms for their lethal effects on their targets. So, to heed my own advice about being up-front about the uncertainties that do exist, I should acknowledge that while the FDA sets limits for individual pesticide residues, it doesn't limit <i>how many</i> pesticides may be used. Often, farmers will apply more than one pesticide to their fields, and we really don't know whether small simultaneous exposures to each combine to create more of a health issue than each one does individually. <a href="http://www.ncbi.nlm.nih.gov/pubmed/23165155" target="_blank">Here's</a> a study that suggests a synergistic effect between DDT and dieldrin, greater than what would have been predicted by just adding the two individually. Both chemicals have been banned in most developed countries for decades, so I wonder how rigorous this study really is in claiming they were part of the French diet less than 10 years ago, but the implication, that individually assessing exposures might be insufficient to accurately determine the health risks of real-world exposures, is worth noting. This doesn't mean that .0000843 micrograms/kilogram of imidacloprid and 0.000809 of malathion would necessarily combine to create a health hazard, but scientists have not historically looked much into testing it. Prior plausibility would suggest that it's exceedingly unlikely, but it's a legitimate uncertainty.<br />
<br />
On the other hand, glyphosate is the most widely used herbicide in the US, and is considered by the EPA to be a toxicity class III pesticide, "slightly toxic" on a scale where class IV is the least harmful. It's only really harmful if you drink a large amount of it. It doesn't bioaccumulate, and it is less harmful to birds, amphibians, and fish than older pesticides, particularly in formulations without a surfactant called <a href="http://en.wikipedia.org/wiki/Polyethoxylated_tallow_amine" target="_blank">POEA</a>. It kills a wide variety of plants by disrupting an amino acid synthesis pathway (the shikimate pathway) that plants have and mammals lack. Chronic effects have not been observed in animal studies at levels nearly 300 times the FDA limit. Animal studies thus far suggest that it's not carcinogenic, and the EPA classifies it accordingly. It is absorbed poorly through the skin, and roughly 30-36% of an ingested dose is absorbed. Two people poisoned by glyphosate (presumably from attempting suicide) had undetectable levels in their bodies after 12 hours. Small exposures do not add up, and large exposures literally go away in a relatively short time frame. If you're familiar with toxicology and/or science-minded, <a href="http://npic.orst.edu/factsheets/glyphotech.pdf" target="_blank">here's</a> everything you'd ever want to know about it, produced by the National Pesticide Information Center at Oregon State University. The FDA doesn't even monitor glyphosate residue on produce, because its level of toxicity doesn't warrant it, but if a test were to show my apple has 0.01% of the FDA level of glyphosate on it, I'm confident that it's going to be just fine. I'd rather have a food system where we don't use it at all, but there are plenty of more pressing health risks we ignore than those presented by glyphosate.<br />
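To make that "0.01% of the FDA level" intuition concrete, here's a back-of-the-envelope sketch in Python. Every number below is an illustrative assumption (a hypothetical residue tolerance, apple mass, body weight, and reference dose), not an actual FDA or EPA value; the point is the arithmetic, not the specific figures:

```python
# Hypothetical numbers for illustration only -- not actual regulatory values.
TOLERANCE_PPM = 0.2           # assumed residue tolerance for a crop, ppm (mg/kg food)
MEASURED_PPM = TOLERANCE_PPM * 0.0001  # a residue at 0.01% of that tolerance
APPLE_MASS_KG = 0.2           # one medium apple
BODY_MASS_KG = 70.0           # an average adult
REFERENCE_DOSE = 1.75         # assumed chronic reference dose, mg/kg bw/day

residue_mg = MEASURED_PPM * APPLE_MASS_KG   # mg of residue on the whole apple
dose = residue_mg / BODY_MASS_KG            # resulting dose, mg per kg body weight
fraction_of_rfd = dose / REFERENCE_DOSE     # how close that comes to the reference dose

print(f"dose from one apple: {dose:.2e} mg/kg bw")
print(f"fraction of reference dose: {fraction_of_rfd:.2e}")
```

Run with these assumed inputs, the dose lands many orders of magnitude below the reference dose, which is why a trace residue on one apple isn't a meaningful exposure.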
<br />
Pesticides are an artifact of a larger, unsustainable system, and a combination of better crop management and alternative pest control methods could, and hopefully will, reduce our reliance on them, for the environment's sake as well as for the health of farmers, their workers, and their neighbors. Some accumulate in the environment and can cause genuine ecological harm. At high doses, some are <a href="http://psep.cce.cornell.edu/Tutorials/core-tutorial/module04/index.aspx" target="_blank">very toxic</a>, and can cause permanent conditions in farm workers who mix and apply them. If you're an expecting mother or have an infant, I think it's perfectly reasonable for physicians to recommend avoiding conventional produce as much as possible just to be safe. Infants don't have fully developed systems for detoxifying compounds like adults do, and it's sound advice to be more aware of pesticide residues.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSAFvVAAFHobgmf7ck1c7PWK7B-y42ON2DW36dkn9vzWjxuqPh-Pw" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSAFvVAAFHobgmf7ck1c7PWK7B-y42ON2DW36dkn9vzWjxuqPh-Pw" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"> (<a href="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSAFvVAAFHobgmf7ck1c7PWK7B-y42ON2DW36dkn9vzWjxuqPh-Pw" target="_blank">Source</a>)</td></tr>
</tbody></table>
I understand the compelling contrast in imagery: workers in hazmat suits applying poisons to a massive monoculture field vs. free-range animals on a diverse field tended by an idyllic farmer. It's really difficult to look at images like the one to the left and accept that, for the consumer, the evidence clearly and definitively points to negligible health risks from eating the end product. I'd much prefer a smarter, ecologically friendly food system that more efficiently gets nutritious and safe food to more people, and I'm encouraged that people are increasingly interested in heading there. I go to my neighborhood farmer's market every week during the summer and fall to get organic produce from local farmers for almost entirely the same reasons anyone else does. Once winter rolls around and I want some cheap summer squash, I'm as aware as anyone of all the inputs that went into getting it to the store. Where I differ from most is in my assessment of risk, informed as it is by graduate-level toxicology and biochemistry. I'm hardly an expert, but I know a fair amount, and I think it's of vital importance to share it with people. We don't do a very good job of educating people on these subjects, much less science in general, so I almost feel compelled to highlight where our intuition leads us to unwarranted conclusions.<br />
<br />
I'm convinced that people respect and respond to intellectual honesty more than they do fear, because fear so often comes from a not-so-rational place, with crucial pieces of the puzzle missing. This opens the door to an avalanche of conflicting information, which just confuses people and drives them to dig into their preferred version of reality. People aren't robots, but the tools to avoid all of this are out there. It'd be totally boss if more people were aware of them, and if we could confront the misconceptions that arise because of how poorly we educate people in chemistry. Now, if we could just do something about all that <a href="http://en.wikipedia.org/wiki/Vitamin_C" target="_blank">2-oxo-L-threo-hexono-1,4-lactone-2,3-enediol</a><span style="background-color: white; color: #333333; line-height: 17px;"><span style="font-family: inherit;"> in citrus fruits.</span></span><br />
<br />
<b>UPDATE 4/26/13</b>: So, this <a href="http://www.mdpi.com/1099-4300/15/4/1416" target="_blank">thing</a> is going around a lot lately. It's a paper that claims glyphosate might be behind a whole host of common diseases like inflammatory bowel disease, obesity, depression, ADHD, autism, Alzheimer's disease, Parkinson's disease, ALS, MS, cancer, etc., I guess because it's a "textbook example of exogenous semiotic entropy" (WTF?!). This should raise red flags anyway, but you should really download it just for fun to see if you can make any sense of it. I guarantee you, it won't come easy. It's possibly the most bizarre academic paper I've ever read. Unfortunately, <a href="http://mobile.reuters.com/article/idUSL2N0DC22F20130425?irpc=932" target="_blank">Reuters</a> uncritically covered it, and even called it a "study", which it most definitely is not. It's a review full of pseudoscientific assertions, and although the authors say they did a "systematic review", they clearly don't quite grasp what that means. Which, I suppose, is probably why a paper involving issues of toxicology and biochemistry ends up in a fairly obscure physics and information science journal.<br />
<br />
In a word, "no". Just "no".<br />
<br />
<b>The Antibody for CD47: How a Promising Treatment Can Get to Patients</b> (April 2, 2013)<br />
Perhaps (I hope) you've heard that imagining all forms of cancer as a single disease is pretty misleading. Some tumors are liquid, some are solid. Some develop on tissue lining one of the many cavities within or on the outside of the body, and some show up on bone or connective tissue. In virtually every instance, cancer researchers have found that the differences are far more pronounced than the similarities. For decades, efforts to find the one thing common to all of these tumors, in hopes of finding a single, broad potential cure, have basically come up empty. This helps explain why progress on treating cancer has been so aggravatingly slow, and why drugs for breast cancer don't necessarily help a patient with lymphoma. On March 26, the Proceedings of the National Academy of Sciences (PNAS) published an <a href="http://www.pnas.org/content/109/17/6662.full">article</a> (some images from which are below), based on tissue cultures and experiments on mice, that exploits one broad similarity between many types of tumors, which could potentially lead to a rather simple, single therapy that thus far shows no signs of unacceptable toxicity in the mice. Sounds great, right? So how does this work, what happens next, and how unprecedented is this?<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGpMtU81KRBzxR-RDdAWFmjChAJ6Y9cdPsFJv6folvaV7bZz_FbqnrZAOdCiINt7X_xZIvDWNpyhWKdqtck02XvrXIzRXdTZZMdVY2qxvzAwFWtZ0q8AQUZpZXhKtAsBevmep-5FFups0/s1600/CD47.PNG" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGpMtU81KRBzxR-RDdAWFmjChAJ6Y9cdPsFJv6folvaV7bZz_FbqnrZAOdCiINt7X_xZIvDWNpyhWKdqtck02XvrXIzRXdTZZMdVY2qxvzAwFWtZ0q8AQUZpZXhKtAsBevmep-5FFups0/s1600/CD47.PNG" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Just look at what happens when you target CD47</td></tr>
</tbody></table>
<div style="text-align: justify;">
Ultimately, drugs are molecules, and when they work, it's because there's a target that fits the unique shape and characteristics of the drug. Some early chemotherapeutic drugs, such as vincristine and vinblastine, worked because they bound to the building blocks (tubulin) of a tiny skeletal framework that holds together virtually every cell in our bodies. Unable to support themselves, new cells across the board could not grow and divide, but since cancer cells grow faster than normal cells, they were the ones more affected. Hair and blood cells also grow rapidly, leading to the most familiar side effects of chemotherapy: hair loss and extreme fatigue. So while these drugs may have led to remission for some patients, it was never without excruciating side effects. The therapy described in PNAS takes a very different approach. Its target is a protein embedded in the surface of cells called <a href="http://en.wikipedia.org/wiki/CD47">CD47</a>, which, when expressed, prevents macrophages in the immune system from engulfing and destroying the cells. It does this by binding to a different protein expressed on the surface of the immune cell that happens to fit it quite well. When CD47 is bound to the protein on the macrophage, a signal is sent not to eat whatever cell it's attached to. It's an amazingly elegant system, and crucially, according to the study, cancer cells happen to express a lot more CD47 than normal cells. The researchers used an antibody for CD47, yet another protein, which binds CD47 in place of the protein on the surface of the macrophage. This blocks the signal that says "don't eat me," and allows the immune cell to do its normal job of destroying something it doesn't recognize. Previous studies had established that this antibody helps shrink leukemia, lymphoma, and bladder cancer in mice, so the PNAS study expanded upon this to look at ovarian, breast, and colon cancers, as well as glioblastoma. 
It effectively inhibited growth for each, sometimes outright eliminating smaller tumors. Larger tumors, the authors note, would likely still need surgical removal prior to antibody therapy. There's no question now, this needs to be tested in actual human patients.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
The next step will be to organize what's called a <a href="http://clinicaltrials.gov/ct2/help/glossary/phase">phase I trial</a>, which enrolls some brave (or desperately poor) individuals, perhaps up to 100, to help determine whether the drug is even safe enough to find out whether it works, and what dose can be tolerated. Often, for simplicity's sake, phase I is combined with a phase II trial involving ideally a couple hundred more individuals, which appears to be the intention with the antibody therapy. Phase II trials answer the question "can it work?", with the assumption going in that it doesn't. For a refresher on how the future trial data will be analyzed, see my previous post on <a href="http://hisscienceistootight.blogspot.com/2012/12/the-language.html">basic statistics</a>. Should this phase II trial pan out, meaning sufficient biological activity is observed without unacceptable risks, and there's obviously no guarantee that this will happen, a new, more robust trial will be designed. Phase III trials answer the question everyone wants to know: does it work? The ideal phase III trial involves several thousand patients, who probably wouldn't be too difficult to find when the drug could save their lives. In this stage, the new therapy would be compared to the best current therapy rather than a placebo, because a placebo isn't a treatment, and it would be unbelievably unethical to give one to a cancer patient. Take a look at <a href="http://www.cancer.gov/cancertopics/factsheet/Information/clinical-trials">this page</a> from the National Cancer Institute for more information specific to how cancer trials operate.</div>
<div style="text-align: justify;">
<br /></div>
<div style="text-align: justify;">
Oftentimes, unfortunately, the process isn't as smooth as I've outlined. Trials are increasingly outsourced to countries outside the US and Europe, where regulations and ethical frameworks are not nearly as strong, and of course, a few thousand patients in a randomized controlled trial can't catch every potential adverse effect. And then there's the question of who funds these trials and how those funders manage them, but I'm not going there. For every thousand poor critiques of Big Pharma you can find on the Internet, there's only one Ben Goldacre who does it right. I recommend <a href="http://www.amazon.com/Bad-Pharma-Companies-Mislead-Patients/dp/0865478007/ref=sr_1_1?s=books&ie=UTF8&qid=1364921206&sr=1-1&keywords=bad+pharma">Bad Pharma</a> if you want to really know more about where this could all go completely off the rails.</div>
<div style="text-align: justify;">
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJEX3HVlLJLdkJBACYus50lwV6LlvAW7DzeNrsxK8-81YcLEmXaQQqw0d0-UJC17N6xaUS0bFxUUS6vSUR7cG9Gzf5n3zjYeVPhyMcSh4bzPix4ydDUwBiVYJfdbpdCmfLE1svdS0cpSE/s1600/CD47+-+2.PNG" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJEX3HVlLJLdkJBACYus50lwV6LlvAW7DzeNrsxK8-81YcLEmXaQQqw0d0-UJC17N6xaUS0bFxUUS6vSUR7cG9Gzf5n3zjYeVPhyMcSh4bzPix4ydDUwBiVYJfdbpdCmfLE1svdS0cpSE/s1600/CD47+-+2.PNG" height="275" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Tumors go bye-bye</td></tr>
</tbody></table>
</div>
<div style="text-align: justify;">
That's a long, long road: up to a decade, and potentially billions of dollars spent, before this potential drug could ever reach the general cancer patient. Given this, it's not really too surprising how much pharmaceutical companies spend on advertising once they beat the odds and get a new drug approved by the FDA. And that's the more hopeful part of the CD47 story. Thousands of chemicals have been shown to kill cancer cells in vitro, and just a cursory search of the national registry of clinical trials for RCTs involving antibody therapy for cancer alone brings up nearly <a href="http://clinicaltrials.gov/ct2/results?term=cancer+&recr=&rslt=&type=Intr&cond=&intr=antibody+therapy&titles=&outc=&spons=&lead=&id=&state1=&cntry1=&state2=&cntry2=&state3=&cntry3=&locn=&gndr=&rcv_s=&rcv_e=&lup_s=&lup_e=">1100 results</a> in various stages, from withdrawn and suspended to currently recruiting patients. This is just a tiny sampling of all clinical trials on cancer therapies, all of which got where they are because they once looked just as promising in test tubes and animal models. So when you read about this study in the <a href="http://www.huffingtonpost.com/2013/03/28/cancer-drug-shrinks-tumors_n_2972708.html">media</a>, it's natural to hope we've finally found a major breakthrough. Maybe we really have. The odds are certainly long, though, and hopefully this post helps you understand that there are perfectly legitimate reasons why. If it doesn't pan out, it's not gonna be because a trillion-dollar industry held it down; it's gonna be because of unacceptable toxicity, or because the effectiveness simply doesn't translate to us, or because it's not significantly better than current treatments. I can't possibly conceive of a worldview where drug companies <i>wouldn't</i> want to get this to people ASAP.<br />
<br />
<b>4/25/2013 - UPDATE:</b> I had heard about the FDA's recent <a href="http://www.fda.gov/RegulatoryInformation/Legislation/FederalFoodDrugandCosmeticActFDCAct/SignificantAmendmentstotheFDCAct/FDASIA/ucm341027.htm" target="_blank">Breakthrough Designation</a>, which is intended to expedite the long process of getting drugs to patients with serious conditions, but it didn't come to mind for this post. <a href="http://blogs.nature.com/spoonful/2013/04/melanoma-drug-joins-breakthrough-club.html" target="_blank">A melanoma drug</a> received breakthrough designation yesterday, after very preliminary trials showed a marked response in patients. Stay tuned to see if the CD47 antibody therapy joins the ranks.</div>
<b>Allow Me to Curate the State of BPA Research For You</b> (March 14, 2013)<br />
After reading two <a href="http://www.forbes.com/sites/trevorbutterworth/2013/02/26/anti-bpa-crusade-discrediting-science-and-environmental-health-says-leading-independent-expert/">interesting</a> <a href="http://www.scientificamerican.com/article.cfm?id=do-low-doses-of-bpa-harm">articles</a> on bisphenol A (BPA) in the past few weeks, I decided to spend a little time the other day looking through reviews and analyses to get my own sense of where the research stands. There's a pretty staggering amount of research out there, with a variety of designs, which makes the conclusion all the more frustrating: we don't really know what the deal with BPA is. That said, it's probably time to rethink the approach to what to do about it.<br />
<br />
BPA is considered an endocrine disruptor, meaning that its structure is very similar to that of a hormone, in this case estradiol. It's so similar, in fact, that hormone receptors for estradiol can be tricked by BPA into responding as if estradiol were bound to them, potentially affecting a number of biological processes. Observational studies have linked BPA to a variety of negative health impacts, especially obesity, neurological damage, cancer, and recently asthma. Considering that virtually everyone is exposed to BPA due to its widespread use in making plastic bottles, can linings, paper receipts, and epoxy resins, these associations naturally should be cause for some concern, particularly in how it may affect infants and children. There is considerable debate on precisely how much concern there should be, probably more than is reflected in most media accounts. The way scientists approach this is quite at odds with the type of information the public needs, a dilemma put eloquently by Richard Sharpe, a researcher in the UK who takes a pretty skeptical stance on BPA's harmful effects.<br />
<blockquote class="tr_bq">
<span style="line-height: 24px;"><span style="font-family: inherit;">What is never stressed enough is that scientists work at “the borders of ignorance” – what is in front of us is unknown and we try to find our way forward by making hypotheses based on what is known. This means that we are wrong most of the time, and because of this scientists have to be very cautious about interpretations that are based on our projected ideas about what is in front of us. What decision-makers, politicians and the public want is unequivocal guidance, not uncertainty. So this creates a dilemma for scientists. </span></span></blockquote>
So far this is beautiful. Absolutely crucial to keep in mind. Sharpe continues:<br />
<blockquote class="tr_bq">
<span style="line-height: 24px;">Those who are more prepared to throw caution to the winds and make unequivocal statements are more likely to be heard, whereas a more cautious scientist saying “we’re not sure” will not be taken notice of.</span> </blockquote>
I honestly don't know whether this is the case with BPA or not. The uncertainties are many, and the value of the observational studies showing all these associations is controversial. There are a number of <a href="http://www.sciencebasedmedicine.org/index.php/causation-and-hills-criteria/">criteria</a> that correlative studies must meet in order to determine whether that correlation actually equals causation, summarized from the link above.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgmEzLJEj2L4CegV5vO676aInsM2xGeLuqnxk8JKPawmFwH1AnRFWqx4qtMdUXrhGRJXjkl-9vm3rkkIpBfF4_wA9PwJLGK5ALIUyhAE7SpmppBfG4i1kvj-qnuAD_XGadxsGkroDyJeI/s1600/Hill's+Criteria.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgmEzLJEj2L4CegV5vO676aInsM2xGeLuqnxk8JKPawmFwH1AnRFWqx4qtMdUXrhGRJXjkl-9vm3rkkIpBfF4_wA9PwJLGK5ALIUyhAE7SpmppBfG4i1kvj-qnuAD_XGadxsGkroDyJeI/s1600/Hill's+Criteria.JPG" height="457" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Hill's Criteria applied to a chiropractic term "subluxation", which refers to an abnormality in the spine that supposedly causes a number of diseases. Subluxation is undetectable by X-ray, but nevertheless is considered treatable by some chiropractors through realignment.</td></tr>
</tbody></table>
In 2008, the Natural Resources Defense Council petitioned the FDA to ban BPA largely on the basis of all these correlative studies. The FDA <a href="http://docs.nrdc.org/health/files/hea_12033001a.pdf">responded</a> that after years of review, these criteria had essentially not been fully met, and did not ban the substance, citing specifically criteria 5, 7, and 9. One of the criticisms of the FDA response is that some evidence suggests even very low doses may have strong effects, and that a typical dose-response curve, which rises steadily as the dose increases until it ultimately plateaus, does not reflect how BPA works. Rather, BPA may <a href="http://blogs.plos.org/bodypolitic/2010/10/19/if-bpa-exposure-is-so-low-why-should-we-be-worried/">have</a> a non-monotonic, inverted U-shaped dose-response curve, sometimes referred to as hormesis, in which high doses may have little effect at all, or perhaps even the opposite effect of low doses.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/Hormesis_dose_response_graph.svg/500px-Hormesis_dose_response_graph.svg.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/Hormesis_dose_response_graph.svg/500px-Hormesis_dose_response_graph.svg.png" height="131" width="200" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Dose-response curve of hormesis (<a href="http://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/Hormesis_dose_response_graph.svg/500px-Hormesis_dose_response_graph.svg.png">Source</a>)</td></tr>
</tbody></table>
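To see why a non-monotonic curve complicates standard risk assessment, here's a toy Python sketch contrasting a classic saturating dose-response with an inverted-U one. Both functions and all their parameters are made up purely for illustration; nothing here is fitted to BPA data:

```python
def monotonic_response(dose, ec50=1.0):
    """Classic sigmoidal dose-response: rises with dose, then plateaus."""
    return dose / (ec50 + dose)

def inverted_u_response(dose, ec50_up=0.01, ec50_down=1.0):
    """Toy hormetic curve: a stimulatory term that saturates at low dose,
    scaled down by an inhibitory term that kicks in at higher doses."""
    up = dose / (ec50_up + dose)
    down = dose / (ec50_down + dose)
    return up * (1 - down)

# Sweep doses across several orders of magnitude.
for d in [0.001, 0.01, 0.1, 1.0, 10.0]:
    print(f"dose={d:>6}: monotonic={monotonic_response(d):.3f}  "
          f"inverted-U={inverted_u_response(d):.3f}")
```

Under the monotonic model, testing at high doses and extrapolating downward is conservative; under the inverted-U model, a high-dose test can miss a peak effect that sits at an intermediate dose, which is the crux of the low-dose BPA debate.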
Some studies used to support the petition relied on non-oral BPA exposure, which the FDA considered insufficient, as exposure through the skin misses some of the metabolic processes that quickly convert BPA into an inactive form called BPA-monoglucuronide. The only exposure route we really need to be concerned about is oral, since that's how we're predominantly exposed, and there's enough difference between how BPA behaves orally vs. subcutaneously to doubt the significance of studies using the latter method.<br />
<br />
Additional studies used by the NRDC were based upon experiments performed on isolated tissue samples, which raise a similar concern, and are essentially limited to describing a potential mechanism for the chemical's effects and what sort of tissue it would ultimately affect. Another study, showing an association with cardiovascular problems, was cross-sectional: it takes a single measurement of exposure at one point in time and looks at whether higher levels of exposure are associated with a disease. As I've mentioned before, this study design is limited to generating hypotheses, and is definitely not considered suitable for determining causation.<br />
<br />
So we have a number of epidemiological associations, experimental data on tissue samples, plus some experimental data on primates and rodents, all pointing to some negative health effects, sometimes even at small doses. Couldn't these all add up to more than the sum of their parts? Sure; there are really two major ways to validate that claim. One way, which the FDA apparently thinks highly of, is to use the data from other mammals and tissue samples to develop a mathematical model that predicts the effects found in humans. Another would be to approach the problem along the lines of, "given the data that shows such and such effect at this level in animals and tissue, we can assume that the probability of this translating to humans at real-world exposures is X." Nobody seems to have tried this yet, and the level of subjectivity involved in determining that X makes some researchers uncomfortable. Recently, a researcher named Justin Teeguarden developed a model to predict the levels that should typically be found in humans, and <a href="http://aaas.confex.com/aaas/2013/webprogram/Paper8720.html">presented</a> his findings (yet to go through peer review) at the annual meeting of the American Association for the Advancement of Science. His research determined that the levels causing effects in animals and tissues are not plausibly found in humans.<br />
<br />
Biologists and epidemiologists who have worked on the studies showing harmful effects question the validity of the assumptions that went into his model, as well as its inability to predict what the levels he suggests exist in humans actually do at either acute or chronic exposure. Tom Philpott at Mother Jones <a href="http://www.motherjones.com/tom-philpott/2013/03/bpa-harmless-not-so-fast">suggests</a> that Teeguarden's past ties to the plastics industry make his research suspect, a sentiment I don't entirely share, though it's not completely irrelevant.<br />
<br />
So what do we make of all of this? I think this is a perfect scenario for an ideological fight where two sides dig in immediately and reach a stalemate. Studies with inherent limitations get disseminated to the public as being probably more suggestive than they really are, feeding premature alarm, while industry unjustifiably dismisses the risk. If you read the FDA's response to NRDC, it appears to me that you're just not going to get far calling for an outright ban on a substance like BPA unless you have a good amount of longitudinal data plus experimental data on mammals using the same type of exposure as would be expected in humans. Is that the best way to go? If not, how can it be improved upon?<br />
<br />
Perhaps the NRDC's petition might have been more effective if it were honest about the limitations of the studies supporting their argument and the uncertainties that exist (like the dose-response curve of BPA). In addition to calling for a ban of BPA using the precautionary principle, there should also be a focus on safer <a href="http://www.oeconline.org/our-work/healthier-lives/tinyfootprints/toxic-prevention/safer-alternatives-to-bisphenol-a-bpa">alternatives</a>. In other words, don't just point to a problem, especially when it's not totally cut and dried, but demonstrate a workable solution, and pursue it with the same energy that's been used up trying to prove something that may not be provable, at least any time soon.<br />
<br />
If there's a lesson time and time again from these sorts of things, it's that there's little in the world that's purely black and white. I love exploring the shades of gray, but I can't expect everyone to. I'd at least love for people to respect that they're out there, and that this is where reality tends to dwell.<br />
<br />
<b>Let's Talk About Gluten. Please. This Has Gotten A Little Out of Control</b> (February 26, 2013)<br />
You certainly don't have to look very hard to find articles and blog posts on gluten and its purported association with a variety of health issues such as obesity, heart disease, arthritis, and <a href="http://well.blogs.nytimes.com/2013/02/04/gluten-free-whether-you-need-it-or-not/">non-celiac gluten sensitivity</a>. While I don't really doubt that some people without celiac disease might legitimately be affected by gluten, I think the discussion around gluten and its non-celiac ill effects has now crossed the line into fad. <i><a href="http://www.amazon.com/Wheat-Belly-Lose-Weight-Health/dp/1609611543/ref=sr_1_1?ie=UTF8&qid=1361895142&sr=8-1&keywords=wheat+belly">Wheat Belly</a>,</i> a diet book by cardiologist William Davis that is currently the #2 best-selling health and fitness book on Amazon, advocates eliminating wheat entirely from our diets, whole grain or not, largely on the premise that modern wheat is nothing like what our grandparents used to eat, and so it must be connected to these growing problems, not to mention the increased prevalence of celiac disease. 
Big Ag, essentially, has manipulated the genes of this staple beyond recognition, and the unintended consequences are vast and dire, amounting to "<a href="http://www.wheatbellyblog.com/2013/02/chef-pete-evans-goes-wheat-free/">the most powerful disruptive factor in the health of modern humans than any other</a>". While I haven't read the book, I'm familiar with many of the arguments it makes, and with how they are perceived by the general public, especially the contention that Davis is referring to GMO wheat, which does not exist on the market. Unfortunately, it's pretty difficult to find a good, genuine science-based take on it. To paraphrase <a href="http://blogs.discovermagazine.com/collideascape/2012/06/20/look-beyond-the-scientific-veneer-of-a-gmo-report/#.USzg-jBazfJ">Keith Kloor</a>, the majority of what you'll find has only the veneer of science.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://graphics8.nytimes.com/images/2013/02/05/health/05well_gluten/05well_gluten-tmagArticle.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://graphics8.nytimes.com/images/2013/02/05/health/05well_gluten/05well_gluten-tmagArticle.jpg" height="247" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Jesus F'ing Christ (<a href="http://well.blogs.nytimes.com/2013/02/04/gluten-free-whether-you-need-it-or-not/">Source</a>)</td></tr>
</tbody></table>
When I was in high school, and as an undergrad in the humanities, writing a research paper meant starting with a thesis statement and finding evidence to support whatever it was I wanted to advocate. In science-based medicine, you test a hypothesis by conducting a randomized controlled trial if possible, and ultimately by finding all available published reports and presenting the entire story (a systematic review), or by combining the statistical analyses of multiple studies into a single larger analysis (a meta-analysis). This is not a subtle difference. <i>Wheat Belly</i> is a prime example of the former approach, which is not necessarily a bad thing per se. People can make a compelling argument without a systematic review, but it is not acceptable as the last word in medicine, health, or nutrition, <i>period</i>. While there may be evidence to support an idea, it's easy to minimize or even completely overlook evidence to the contrary, especially if you're not making a point of looking for it. It seems pretty obvious that the discussion around wheat could use a little objectivity.<br />
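To give a sense of what a meta-analysis actually does mechanically, here's a minimal sketch of fixed-effect inverse-variance pooling: each study's effect estimate is weighted by the inverse of its variance, so large, precise studies dominate the combined result. The three "studies" below are invented numbers purely for illustration, not data from any trial discussed here.

```python
import math

def pooled_log_rr(log_rrs, variances):
    """Fixed-effect inverse-variance pooling of log relative risks.
    Returns the pooled log RR and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, log_rrs)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Three hypothetical studies (invented): effect as log relative risk,
# plus the variance of each estimate. The third study is the largest
# (smallest variance), so it pulls the pooled estimate toward itself.
log_rrs = [math.log(0.80), math.log(1.10), math.log(0.95)]
variances = [0.04, 0.09, 0.01]

pooled, se = pooled_log_rr(log_rrs, variances)
pooled_rr = math.exp(pooled)                              # combined estimate
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
```

A thesis-first argument cherry-picks one of the three inputs; the pooled estimate is forced to account for all of them at once, which is the whole point.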
<br />
Take a look at <a href="http://www.nytimes.com/2013/02/24/opinion/sunday/what-really-causes-celiac-disease.html?pagewanted=all">this</a> pretty balanced article recently published by the New York Times on the increasing diagnoses of celiac disease.<br />
<blockquote class="tr_bq">
BLAME for the increase of celiac disease sometimes falls on gluten-rich, modern wheat varietals; increased consumption of wheat, and the ubiquity of gluten in processed foods.<br />
<br />
Yet the epidemiology of celiac disease doesn’t always support this idea. One <a href="http://www.ncbi.nlm.nih.gov/pubmed/18382888">comparative study</a> involving some 5,500 subjects yielded a prevalence of roughly one in 100 among Finnish children, but using the same diagnostic methods, just one in 500 among their Russian counterparts.<br />
<br />
Differing wheat consumption patterns can’t explain this disparity. If anything, Russians consume more wheat than Finns, and of similar varieties.<br />
<br />
Neither can <a href="http://health.nytimes.com/health/guides/specialtopic/genetics/overview.html?inline=nyt-classifier">genetics</a>. Although now bisected by the Finno-Russian border, Karelia, as the study region is known, was historically a single province. The two study populations are culturally, linguistically and genetically related. The predisposing gene variants are similarly prevalent in both groups.</blockquote>
The article goes on to suggest that exposure to different microbial environments is the biggest factor, but it's rather apparent that we can't just point to a simple answer. The world is a complex place, our bodies are complex, and nutrition and health are complex. This is pretty much what you'd expect, right?<br />
<br />
Now take a look at some of the <a href="http://health.usnews.com/health-news/articles/2013/02/26/mediterranean-diet-helps-those-at-risk-of-heart-disease">massive</a> <a href="http://www.usatoday.com/story/news/nation/2013/02/25/mediterranean-diet-cuts-risk-of-heart-attack-stroke-death/1943305/">coverage</a> of the recent <a href="http://www.nejm.org/doi/full/10.1056/NEJMoa1200303">randomized controlled trial</a> showing significant cardiovascular benefits from the Mediterranean diet. <a href="http://www.health.harvard.edu/blog/study-supports-heart-benefits-from-mediterranean-style-diets-201302255930?utm_source=twitter&utm_medium=socialmedia&utm_campaign=022513-pjs1_tw">Here's</a> a good analysis from the Harvard School of Public Health. The Mediterranean diet arm of the study was encouraged to use olive oil liberally and to eat seafood, nuts, vegetables, and <i>whole grains</i>, including a specific recommendation that pasta could be dressed with sofrito (garlic, tomato, onions, and aromatic herbs). The control diet it compared favorably against was quite similar, but specifically geared toward being low-fat. Both groups were discouraged from eating red meat, high-fat dairy products like cream and butter, commercially produced baked goods, and carbonated beverages.<br />
<br />
The largest differences between the two diets centered on discouraging vegetable oils, including olive oil, and encouraging three or more pasta or starchy dishes per day in the control group. To me, this suggests that <i>Wheat Belly</i> sits in that sweet spot for widespread dissemination: easily actionable, with enough supporting evidence to generate good anecdotes and positive results, but vastly oversimplified and neither suitable nor necessary for everyone. Remember, the <i>Wheat Belly </i>diet implicates even organic whole grains as irredeemably manipulated. It's a completely wheat-free diet, because modern wheat is supposedly the greatest negative factor in human health. Based on an actual experiment involving almost 7,500 people, we have strong evidence that it's the <i>amount</i> of wheat people eat that is problematic. You can eat some whole grains daily and still vastly decrease your risk of heart disease and obesity, as long as you don't eat them three or more times a day.<br />
<br />
The appendix to the NEJM study indicates that some of the patients in the control diet complained about bloating and fullness, but nothing similar from the Mediterranean diet group. The implications seem fairly obvious: there is little basis to make a draconian decision to completely eliminate something with proven health benefits such as whole grains from your diet unless you genuinely suffer from celiac disease. If you're interested in losing weight, think maybe you have gluten sensitivity, or just want to eat healthier, try something like this diet first, and definitely don't put your gluten free diet pamphlets in my child's take-home folder at school. That wasn't cool.<br />
<br />
<b>Update:</b> Take a look at <a href="http://scienceblogs.com/denialism/2013/03/01/dont-switch-to-the-mediterranean-diet-just-yet/#comment-24413">this</a> critical post about the RCT of the Mediterranean diet. There's some perspective on the magnitude of the effect they found, and some compliance issues with the recommended diets that I'm not convinced are as damning as he suggests. I also find this sentence a bit odd:<br />
<blockquote class="tr_bq">
So while you might be less likely to have a heart attack or stroke, you're no less likely to die. This is why I'm so confused they ended the study early.</blockquote>
I don't know. I'd rather just not have a heart attack or stroke. Nevertheless, it's a thoughtful and overall very thorough take on the study.<br />
<br />
<b>Moderate Drinking Isn't Totally Risk-Free? Crap</b> (2013-02-17)

Let's be honest, it's pretty ridiculous that it's been three decades since anyone bothered to look at an association between alcohol consumption and risk of cancer mortality in the U.S. Even though there was basically no research to point to, few people would be totally surprised to be reminded that there are other potentially fatal conditions caused by drinking apart from loss of liver function. And certainly, it's foolish to just assume that, as a moderate drinker, you'd enjoy only the purported benefits without any of the potential consequences. Bringing this discussion back to the table is a good thing, but it's important to do so responsibly and in the proper context.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://www.drinkspirits.com/wp-content/uploads/2012/11/IMG_8105.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://www.drinkspirits.com/wp-content/uploads/2012/11/IMG_8105.jpg" height="213" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">I can buy that Malort gives you cancer at least (<a href="http://www.drinkspirits.com/wp-content/uploads/2012/11/IMG_8105.jpg">Source</a>)</td></tr>
</tbody></table>
Earlier this week, researchers published a <a href="http://ajph.aphapublications.org/doi/abs/10.2105/AJPH.2012.301199">study</a> aiming to do exactly that. An in-depth article on the research was featured <a href="http://www.sfgate.com/health/article/Alcohol-said-to-have-big-role-in-cancer-4280659.php">here</a> in the San Francisco Chronicle. The basic idea is that we know alcohol puts people at risk of developing certain types of cancer, including oral, esophageal, liver, and colon cancer. The study used meta-analyses published since 2000 to calculate the effect alcohol has on developing these cancers, controlling for confounding variables. The investigators then used data from health surveys and alcohol sales to estimate adult alcohol consumption, and analyzed that against mortality data from 2009 to estimate how many deaths might specifically be attributed to drinking, using formulas established in other countries for similar purposes. The estimate came out to between 18,000 and 21,000 people, or about 3.5% of all cancer deaths. That is actually higher than the number of deaths from melanoma, and considering how aware people are of the risks of extended sun exposure without sunscreen, the risks of drinking alcohol may be unjustifiably underrated. The next step is to establish a dose-response curve showing how drinking more might affect this relationship.<br />
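The "formulas established in other countries" for this kind of estimate generally boil down to a population attributable fraction: the share of deaths that would not occur absent the exposure. A minimal sketch using Levin's formula, with inputs I made up for illustration (they are not the study's actual parameters):

```python
def attributable_fraction(p_exposed: float, rr: float) -> float:
    """Levin's population attributable fraction: the proportion of
    cases attributable to an exposure, given the prevalence of the
    exposure (p_exposed) and the relative risk (rr) it carries."""
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)

# Invented inputs: suppose 50% of adults drink, and drinkers carry a
# relative risk of 1.07 for cancer mortality. Applied to a rough annual
# U.S. cancer-death total, the attributable count lands near the
# study's 18,000-21,000 range -- by construction, not by coincidence.
paf = attributable_fraction(0.5, 1.07)
deaths_from_cancer = 575_000            # rough annual U.S. figure
attributable = paf * deaths_from_cancer  # ~19,400 with these inputs
```

The fragility the post worries about is visible in the arithmetic: small errors in the prevalence (from limited surveys) or the relative risk (from pooled meta-analyses) move the death count by thousands.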
<br />
Many of the stories on the article focus particularly on the quote that "there is no safe threshold" for alcohol consumption, and that roughly a third of these deaths represented individuals who consumed less than 1.5 drinks per day. Essentially, as many as 7,000 people in the U.S. who drank that amount per day die each year from cancer they developed because of that consumption. I'm not really interested in poring through the data to question the validity of this number. It's fair to be very skeptical of how granular you can be in determining risks for each individual based on an average obtained from surveys known to be quite limited, and from ecological data like sales. Ultimately, without longitudinal follow-up of drinkers or a case-control study, this represents a fairly low level of evidence in the grand scheme of things. That's not to say you should dismiss the general conclusion. Quite the contrary, actually. It's just that we can't safely interpret exactly how strong (or weak) this effect really is at this point, and we need more robust study designs to get there.<br />
<br />
Science-minded people like to blame the media for hyping up conclusions of studies, but here you see the investigators explicitly saying that there is no safe amount of drinking. The abstract itself declares alcohol to be a "major contributor to cancer mortality." What message is a journalist supposed to take from that? The headlines are right there, laid out on a platter for them. I don't think that the investigators necessarily <i>egregiously </i>overstated the conclusions, but it wasn't exactly brimming with context. Also, I don't really expect moderate drinkers to really alter their behavior based on this, but it sort of goes without saying that the proper conclusion wouldn't really grab as much attention, and you never know how things will be absorbed. So I'll try and lay one out myself:<br />
<br />
Based upon this study, it appears that the risk of death due to drinking has been underestimated. Even moderate drinking, which has some potential health benefits, may contribute to mortality from one of seven types of cancer largely understood to be associated with alcohol consumption. This is the first look at such an association in the United States in over 30 years, and as such, it represents a building block from which to generate research ideas that more effectively establish this association and how different consumption patterns alter its effect.<br />
<br />
Now if you'll excuse me, I'm going to the liquor store to buy some rye and a shaker. For real.

<b>New Proof That Being a Vegetarian is Healthier?</b> (2013-02-04)

According to this <a href="http://abcnews.go.com/blogs/health/2013/02/04/vegetarians-have-lower-heart-disease-risk-study-finds/">blog</a> post from ABC News, the <a href="http://ajcn.nutrition.org/content/early/2013/01/30/ajcn.112.044073">recently published</a> (behind paywall) results of a large prospective cohort study performed in the UK offer "further proof that eating meat can be hazardous to health." That strikes me as an odd way of framing it, and I hope after reading this you'll understand why. The proper conclusion should really be that we have more evidence that being a vegetarian is associated with a reduced risk of heart disease, but that it's still difficult to tell how much other lifestyle choices play into this. In other words, if you live a healthy lifestyle in general and eat meat a few times a week or less, it's difficult to assume you'd see a significant benefit by cutting that meat out of your diet.<br />
<div>
<br /></div>
<div>
The study began in 1993 and followed 44,561 men and women in England and Scotland, 34% of whom were vegetarians when the study began. After an average follow-up time of nearly 12 years, the incidence of developing ischemic heart disease (IHD) was compared between the vegetarian group and the meat-eating group. A subset of the study population was also used to measure risk factors related to heart disease such as body-mass index (BMI), non-HDL-cholesterol levels, and systolic blood pressure, and again the vegetarian group came out ahead. The investigators controlled for some obvious potential confounding variables such as age (the vegetarian group was 10 years younger on average), sex, and exercise habits. Overall, I agree with the quoted physician from the ABC post that it's a very good study, and does a pretty good job of trying to estimate the singular effect of whether a person eats meat or not.</div>
<div>
<br /></div>
<div>
This is essentially the type of design I was talking about being needed in the lead/crime post, and while it represents a pretty high level of evidence, there's still some very important limitations to be aware of. That's not to justify being skeptical of the entire claim that being a vegetarian is healthier for the heart than not, but it's not as simple as just saying "meat is hazardous to your health." The two groups being compared are not randomized, so it could very well be that the vegetarians had an overall healthier lifestyle, of course including exercise, but also dietary factors beyond just not eating meat. How would the results compare if you took a vegetarian that ate a lot of whole grains and vegetables vs. an omnivore that ate meat maybe once or twice a week but also ate a lot of whole grains and vegetables? My guess would be that the risks of IHD for the latter individual wouldn't be much higher, if at all. The central issue here is called selection bias, and it refers to the possibility that the two populations are different enough that drawing a firm cause and effect relationship to one specific difference between the groups is suspect.</div>
<div>
<br /></div>
<div>
The perils of reading too much into a cohort study are illustrated by the story of <a href="http://en.wikipedia.org/wiki/Hormone_replacement_therapy_(menopause)">hormone-replacement therapy</a> (HRT) for post-menopausal women. In 1991, a large <a href="http://www.ncbi.nlm.nih.gov/pubmed/1870648">cohort study</a> based on a population of nurses came to the conclusion that HRT protected women from cardiovascular diseases. This was one of a series of cohort studies of HRT that found a variety of purported benefits. Concerns about selection bias were raised immediately, but were largely dismissed. HRT became very common throughout the rest of the decade based on these well-designed cohort studies that seemed to provide really good evidence. Finally, in 2002, the first double-blinded randomized controlled trial of HRT vs. placebo was performed. Both the estrogen plus progestin and estrogen-only arms of the study were halted early because the risks of heart disease, stroke, and pulmonary embolism were higher than in the placebo groups. When the comparison groups were randomized, the exact opposite of the cohort studies' results was found. Again, this isn't to say I dispute that completely eliminating meat is a healthy choice, or that if an RCT were performed, the meat-eaters would turn out to be healthier. I'm just illustrating why you should always be aware of the possibility of selection bias in non-randomized studies.</div>
<div>
<br /></div>
<div>
Another issue to look out for is how reduced risks are presented in studies. There are essentially two ways to do it: by subtracting the difference in risk between one group and the other, or by presenting a risk ratio. Looking at the numbers, only 2.7% of the entire study population developed IHD: about 1.6% of meat-eaters vs. 1.1% of vegetarians. Another way of putting this is, "a vegetarian diet is associated with a 0.5 percentage-point lower risk of developing IHD." That doesn't sound quite as impressive, but it's completely accurate. It's a lot more eye-opening to divide the 1.1% by the 1.6% and get your 32% reduced risk. I'm not saying anybody did anything inappropriate, but the raw numbers have a way of putting things like this into the proper perspective.</div>
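The two ways of reporting the same comparison can be sketched in a few lines, using the raw figures quoted above. The number needed to treat (NNT) is my addition, not something reported by the study, but it's the same arithmetic turned inside out:

```python
def risk_summary(risk_exposed: float, risk_control: float):
    """Report the same risk comparison three ways: absolute risk
    difference, relative risk reduction, and number needed to treat."""
    ard = risk_control - risk_exposed        # absolute risk difference
    rrr = 1.0 - risk_exposed / risk_control  # relative risk reduction
    nnt = 1.0 / ard                          # number needed to "treat"
    return ard, rrr, nnt

# Raw figures from the post: 1.1% of vegetarians vs 1.6% of meat-eaters
# developed IHD over the follow-up period.
ard, rrr, nnt = risk_summary(0.011, 0.016)
# ard: half a percentage point of absolute risk
# rrr: about 31% relative reduction (the headline-friendly number)
# nnt: ~200 people would need to switch diets to prevent one case
```

Same data, three honest numbers; which one a headline leads with changes how impressive the finding feels.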
<div>
<br /></div>
<div>
Ultimately, what does all this amount to? If I were at risk for IHD, I'd adopt a healthy lifestyle of a well-rounded diet with moderate to no meat, as well as to exercise regularly and stay active throughout the day. Hardly an earth shattering piece of advice, but that's what we've got. To put it bluntly, just eat like a person is supposed to eat, goddamnit. You know what I mean.</div>
<b>Wait, There's How Many Independently Funded Studies on GMOs?</b> (2013-01-23)

I just came across this <a href="http://www.biofortified.org/genera/studies-for-genera/independent-funding/">list</a> of 126 independently funded, peer-reviewed articles on GMOs this morning, and I'm really surprised I hadn't seen it long ago. Clicking through some of the studies, they run the gamut: genomic comparisons between conventional and genetically modified crops (particularly on unintended alterations of untargeted genes), the potential for transferring <a href="http://www.ncbi.nlm.nih.gov/pubmed/11751781">antibiotic resistance</a>, the risk of <a href="http://www.ncbi.nlm.nih.gov/pubmed/16433863">allergenicity</a>, <a href="http://www.ncbi.nlm.nih.gov/pubmed/18191319">analysis</a> of tissue and metabolites in rats, amount of <a href="http://www.sciencedirect.com/science/article/pii/S0308521X11000151">pesticide use</a>, and the <a href="http://www.ncbi.nlm.nih.gov/pubmed/15053558">effect</a> of Bt corn and <a href="http://www.ncbi.nlm.nih.gov/pubmed/14630127">GM soya</a> on mouse testes. In all but a handful of studies, the investigators found no evidence that genetic modification poses an additional risk over conventional farming. Again, these are the independently funded studies so many critics of the technology have asked for for years. There's simply no excuse for ignoring them.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://eatdrinkbetter.com/files/2010/04/GM-Amflora-or-Frankenstein-potato-approved-for-cultivation-in-Europe.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://eatdrinkbetter.com/files/2010/04/GM-Amflora-or-Frankenstein-potato-approved-for-cultivation-in-Europe.jpg" height="226" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">I didn't see any studies about cassette tape genes. I'd just steer clear for now. (<a href="http://eatdrinkbetter.com/files/2010/04/GM-Amflora-or-Frankenstein-potato-approved-for-cultivation-in-Europe.jpg">Source</a>)</td></tr>
</tbody></table>
I've written before about systematic reviews and meta-analyses, and how if you don't look at the full picture, it's extremely easy to <a href="http://www.naturalnews.com/038792_GMO_toxicity_digestion_cancer.html">cherry-pick</a> to find the results you're looking for. Without looking very hard, you'll find some <a href="http://www.enveurope.com/content/24/1/24/abstract">studies</a> that contradict the consensus this list represents, as you would expect in just about any field. For instance, one <a href="http://www.outsideonline.com/news-from-the-field/Study-Egg-Yolks-Almost-As-Bad-As-Smoking.html">study</a> may say eating eggs carries nearly the same cardiovascular risks as smoking, while a <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3538567/">meta-analysis</a> of all prospective cohort studies on eggs and cardiovascular health, including the previous one and representing data from over 260,000 people, shows no additional risk. Which conclusion do you think holds more weight? I can't stress enough that science isn't a push and pull of individual studies floating in a vacuum; it is a systematic way of looking at an entire pool of evidence. It takes work to train yourself to do this. It's just not how we're wired to think, and even people who have been exposed to it still struggle with it, as I see in my everyday experience in evidence-based medicine. People naturally have their preferences, but if there's a way to minimize this effect, it's inconceivable to me not to use it to guide our decisions, from adopting a new technology, to abandoning existing ones that don't work as well as we hoped, to the way we determine our public policies.<br />
<br />
In my many, many conversations on GMOs, I've found that confirmation bias isn't the only barrier; there's also a significant amount of conflation between what's specific to GMOs and what is just poor agricultural practice. For example, it's quite clear that herbicide-resistant weeds (aka superweeds) are popping up on farms around the U.S. Certainly, growing Roundup Ready corn would logically facilitate the overuse of a single herbicide on a single area, but resistance will occur anywhere there's over-reliance on a single herbicide, and it's up to the particular farmer to rotate their crops to avoid this. Just because many have failed to do so doesn't mean that GMOs are the primary problem. On the spectrum of possibilities for genetic modification, herbicide tolerance is in my view securely at the bottom, but let me put it this way: if we simply banned them, totally removing herbicide-tolerant crops from all farms in the U.S., would we solve the problem of superweeds for all time? Obviously not. For every potential risk you've heard of about GMOs, ask yourself the same question. You'll almost always find that the problem comes down to general issues of large-scale agriculture.<br />
<br />Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com0tag:blogger.com,1999:blog-6329571067764673239.post-15468406028765594612013-01-17T11:23:00.000-06:002013-02-08T15:57:39.120-06:00Colony Collapse Disorder, Neonics, and The Precautionary Principle<div class="separator" style="clear: both; text-align: center;">
<a href="http://t2.gstatic.com/images?q=tbn:ANd9GcSnzgKnxkv6cY7M5JCLIVA7l4dB4zhg1h7QxvtN1GXwJo_F1IpYe1wJ6T4DIA" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://t2.gstatic.com/images?q=tbn:ANd9GcSnzgKnxkv6cY7M5JCLIVA7l4dB4zhg1h7QxvtN1GXwJo_F1IpYe1wJ6T4DIA" height="293" width="400" /></a></div>
<br />
Over the past decade, the strange mystery of declining bee populations and colony collapse disorder (CCD) has justifiably received a ton of media attention. There are plenty of resources out there if you need more background on what researchers are thinking and why it matters. I'd suggest starting with this <a href="http://boingboing.net/2012/05/07/the-honeybees-are-still-dying.html">excellent post</a> by Hannah Nordhaus for Boing Boing, written shortly after a batch of studies were published identifying a specific group of insecticides called neonicotinoids (aka neonics) as a potential primary cause of CCD. I don't really have much to add to the discussion beyond that article, but I'm particularly drawn to this issue because the precautionary principle is now front and center, and the debate has thus far largely avoided the type of hyperbole and fear-mongering that only serves to distract from evidence-based policy. This post is less about trying to be informative than about my hope that a discussion pops up in the comments.<br />
<br />
Earlier this week, the European Food Safety Authority (EFSA) <a href="http://www.guardian.co.uk/environment/2013/jan/16/insecticide-unacceptable-danger-bees">concluded</a> that the evidence suggests the use of neonics constitutes an unacceptable risk to honeybees, essentially laying the groundwork for an EU-wide ban. As outlined in the Nordhaus post, the <a href="http://stream.loe.org/images/120406/Lu%20final%20proof.pdf">study</a> that most clearly linked neonics with CCD has been harshly criticized, certainly by Bayer, the largest manufacturer of neonics, but also by some independent scientists particularly troubled by what they see as unrealistically high doses given to the bees. Glancing at the study, the doses didn't appear completely without merit, but it's pretty apparent that the EFSA is operating on the premise that the <i>potential </i>risks of using neonics are worse than any possible benefit to farmers in the EU, as opposed to documented risks supported by large amounts of systematically reviewed evidence.<br />
<div>
<br />
Normally, I don't find the precautionary principle very compelling, and clearly the U.S. government doesn't either. I think oftentimes potential risks are over-hyped, while real benefits are not fully considered or outright dismissed. It could be used to halt or delay literally any technological advance, and yeah...slippery slope. However, in this case I find myself being a bit more sympathetic to it than I usually am. I really don't know what to think about it. Pointing the finger at synthetic chemicals designed to kill insects obviously seems particularly intuitive, but equally obvious is that our intuition sometimes leads us astray by oversimplifying a complex phenomenon. The UK's Department for Environment, Food, and Rural Affairs recently <a href="http://www.defra.gov.uk/environment/quality/chemicals/pesticides/insecticides-bees/">took</a> the skeptical view reflected by Nordhaus. The department commissioned another long-term study to look at the direct sub-lethal effects on bees, as well as asking researchers in the UK to prioritize this issue, after which they will take another look at some point this year. It's not like they said there's nothing to look at here. I'm tempted to think that this is perfectly acceptable.<br />
<br />
Banning neonics isn't going to force farmers in the EU to just abandon insecticides. The alternative chemicals do in fact have much stronger evidence of genuine, tangible, and imminent environmental risks than neonics do. What do you think? How much uncertainty should we tolerate? On what issues do you find the precautionary principle to be appropriate?</div>
<b>Alltrials.net</b> (2013-01-11)

Hi everyone, I'm really awed at the overwhelmingly positive response to my last blog post. Never in my wildest dreams did I figure that seven posts into this thing I'd actually kinda maybe impact the discussion around an issue I wrote about. Hopefully many of you who visited recently will check back in occasionally. I do this in my spare time, so I can't be hugely prolific, but I'll try to keep it interesting and engaging.<br />
<br />
As someone who works in evidence-based medicine, Ben Goldacre's <a href="http://www.amazon.com/Bad-Pharma-Companies-Mislead-Patients/dp/0865478007/ref=sr_1_1?s=books&ie=UTF8&qid=1357847379&sr=1-1&keywords=bad+pharma">Bad Pharma</a> has been on my mind a lot recently. I couldn't possibly give you a sense of the many issues in the book with the eloquence and expertise that he does, so I encourage you to take a look and see if it interests you. The U.S. version is due on February 5.<br />
<br />
Briefly, he covers the many reasons, across all the various stakeholders in medicine, why the entire system is seriously flawed. That's hardly an overstatement. Trials go missing, data are manipulated, regulators don't mount up, etc., and it paints a pretty horrifying picture of doctors making treatment decisions on biased and incomplete evidence, thus exposing patients to unnecessary and entirely preventable harm. In this day and age of open-access journals and cheap, easy access to information, there really is no excuse for this state of affairs. I don't imagine that one moderately well-read blog post gives me any real influence, but I didn't want to just read this book and do absolutely nothing.<br />
<br />
One step in the right direction is <a href="http://alltrials.net/">alltrials.net</a>, an initiative to register all trials worldwide along with information on the methods used and results. Take a look at the site, get a sense of why it exists if you don't already, and hopefully you'll sign the petition. We deserve real evidence-based medicine, and I don't think this is your average petition. There's some good names behind it, and a charismatic and likable advocate who I really believe has a chance to get somewhere with this.<br />
<br />
I'll get back to blogging about my usual topics soon. In the meantime, I'm always open to suggestions. A couple of weeks ago I tried to set it up so my latest tweets would show up somewhere on my blog, but it didn't work with this design. Sort of a missed opportunity to suddenly have tens of thousands of page views without my username anywhere. Find me at <a href="http://twitter.com/scottfirestone">@scottfirestone</a>, and send me a link or say hi if you like.

<b>The Link Between Leaded Gasoline and Crime</b> (2013-01-03)

<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.motherjones.com/files/Lead_Crime_325.gif" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><br /><img border="0" src="http://www.motherjones.com/files/Lead_Crime_325.gif" height="320" width="284" /></a></div>
Kevin Drum of Mother Jones has a fascinating new <a href="http://www.motherjones.com/environment/2013/01/lead-crime-link-gasoline">article</a> detailing the hypothesis that exposure to lead, particularly tetraethyl lead (TEL), explains the rise and fall of violent crime rates from the 1960s through the 1990s, after the compound was phased out of gasoline worldwide. It's a better piece of public health journalism than much of what you see, but I'd like to provide a little epidemiology background, because the article cites so many studies that it makes a really good introduction to the types of study designs you'll encounter in public health. It also illustrates the concept of confirmation bias, and why regulatory agencies seem to drag their feet when we read compelling stories like this one.<br />
<br />
Drum correctly notes that simply looking at the correlation shown in the graph to the right is insufficient to draw any conclusions about causality. The investigator, Rick Nevin, was simply looking at associations, and saw that the curves were heavily correlated, as you can quite clearly see. When you look at data involving large populations, such as violent crime rates, and compare them with an indirect measure of exposure to some environmental risk factor, such as levels of TEL in gasoline over the same period, the best you can say is that your alternative hypothesis of an association (the null hypothesis always being no association) deserves more investigation. This type of design is called a cross-sectional study, and it's been documented that values for a population <a href="http://jratcliffe.net/research/ecolfallacy.htm">do not always match</a> those of individuals in cross-sectional data. This is the ecological fallacy, and it's a serious limitation in these types of studies. Finding a causal link between an environmental risk factor and a complex behavior like violent crime, as opposed to something like a specific disease, is exceptionally difficult, and the burden of proof is very high. We need several additional tests of our hypothesis using different study designs to really turn this into a viable theory. As Drum notes:<br />
<br />
<blockquote class="tr_bq">
<span style="background-color: white; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px;">During the '70s and '80s, the introduction of the catalytic converter, combined with increasingly stringent Environmental Protection Agency rules, steadily reduced the amount of leaded gasoline used in America, but Reyes discovered that this reduction wasn't uniform. In fact, use of leaded gasoline varied widely among states, and this gave Reyes the opening she needed. If childhood lead exposure really did produce criminal behavior in adults, you'd expect that in states where consumption of leaded gasoline declined slowly, crime would decline slowly too. Conversely, in states where it declined quickly, crime would decline quickly. And that's </span><a href="http://www.nber.org/papers/w13097" style="background-color: white; border-bottom-color: rgb(0, 0, 0); border-bottom-style: dotted; border-bottom-width: 1px; color: black; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px; text-decoration: initial;" target="_blank">exactly what she found</a><span style="background-color: white; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px;">.</span></blockquote>
<br />
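The state-by-state design Drum describes boils down to asking whether a faster decline in leaded gasoline predicts a faster decline in violent crime, which is essentially a one-variable regression. Here is a minimal sketch with entirely invented numbers (they are not Reyes's data), just to show the shape of the analysis:

```python
# Toy version of the state-level comparison: does a faster decline in leaded
# gasoline predict a faster decline in violent crime? All numbers below are
# invented for illustration; they are NOT Reyes's data.
lead_decline = [30, 45, 50, 60, 75, 80, 90]    # % decline in leaded gas use
crime_decline = [12, 20, 19, 28, 33, 34, 41]   # % decline in violent crime

n = len(lead_decline)
mean_x = sum(lead_decline) / n
mean_y = sum(crime_decline) / n

# Ordinary least-squares slope and intercept for a one-variable fit
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(lead_decline, crime_decline)) \
        / sum((x - mean_x) ** 2 for x in lead_decline)
intercept = mean_y - slope * mean_x

print(f"crime_decline ~ {intercept:.1f} + {slope:.2f} * lead_decline")
```

A positive slope here only restates the association; Reyes's actual model controls for migration and other confounders, which this toy version does not.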
Well, that's interesting, so I looked a bit further at Reyes's study. She estimates prenatal and early childhood exposure to TEL based on population-wide figures, and accounts for potential migration from state to state, as well as other potential causes of violent crime, to get a stronger estimate of the effect of TEL alone. After all of this, she found that the fall in TEL levels by state accounts for a very significant 56% of the reduction in violent crime. Again, though, this is essentially a measure of association on population-level statistics, estimated at the individual level. It's well thought out and heavily controlled for other factors, but we still need more than this. Drum goes on to describe <a href="http://www.sciencedirect.com/science/article/pii/S0160412012000566">significant associations</a> found at the city level in New Orleans. This is pretty good stuff too, but we really need a new type of study: specifically, one that measures many individuals' exposure to lead and follows them over a long period of time to find out what happened to them. This type of design is called a prospective cohort study. Props again to Drum for directly addressing all of this.<br />
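The ecological fallacy is easier to see with a toy simulation. In this sketch (every number is invented), the individual-level relationship inside each group is negative, yet pooling the groups produces a positive population-level correlation:

```python
import random
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

random.seed(0)

# Three hypothetical populations. Within each one, more exposure means a
# worse (lower) outcome, but the group baselines are staggered so that the
# pooled, population-level trend points the other way.
group_data = []
for base_x, base_y in [(0, 10), (5, 20), (10, 30)]:
    xs, ys = [], []
    for _ in range(50):
        x = base_x + random.uniform(0, 4)
        y = base_y - 2 * (x - base_x) + random.uniform(-1, 1)  # negative slope
        xs.append(x)
        ys.append(y)
    group_data.append((xs, ys))

pooled_x = [x for xs, _ in group_data for x in xs]
pooled_y = [y for _, ys in group_data for y in ys]

print(f"pooled (population-level) r = {pearson(pooled_x, pooled_y):+.2f}")
for i, (xs, ys) in enumerate(group_data):
    print(f"group {i} (individual-level) r = {pearson(xs, ys):+.2f}")
```

This is the hazard of reading individual-level conclusions off population-level curves: between-group differences can dominate and even reverse the within-group relationship.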
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEim3A8uux34b4Xf1p1cR-B2rmQNOAMG6psF3spIeQ9u_N5DuQah32dAS7i8NVYrCPdeAKL8s7tq5C247E24IcCDz3IX1ZcvQGUnhOQLkMpVn-_rffkHpqlQfNbzM_VEYuFnc2ODYGwHAh8/s1600/Lead.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEim3A8uux34b4Xf1p1cR-B2rmQNOAMG6psF3spIeQ9u_N5DuQah32dAS7i8NVYrCPdeAKL8s7tq5C247E24IcCDz3IX1ZcvQGUnhOQLkMpVn-_rffkHpqlQfNbzM_VEYuFnc2ODYGwHAh8/s1600/Lead.JPG" height="277" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The graph title pretty much says it all (<a href="http://www.nber.org/papers/w13097.pdf?new_window=1">Source</a>)</td></tr>
</tbody></table>
The article continues by discussing a cohort study done by researchers at the University of Cincinnati, in which 376 children recruited at birth between 1979 and 1984 had their blood lead levels tested over time, along with their risk of being arrested in general, and for violent crime specifically. Some of these children were dropped from the study along the way, and 250 were included in the results. The researchers found that each increase of 5 micrograms of lead per deciliter of blood carried a higher risk of arrest for a violent crime, but a closer look at the numbers shows a more mixed picture than they let on. For prenatal blood lead, the effect was not significant. If an additional 5 µg/dl conferred no extra risk over the median exposure level, the risk ratio would be 1.0; for this cohort, they found a ratio of 1.34. However, the sample size was small enough that the confidence interval stretched from 0.88 (paradoxically suggesting that an additional 5 µg/dl during this period of development would actually be protective) up to 2.03. That is not very helpful data for the hypothesis. For early childhood exposure, the ratio is 1.30, but the sample size was larger, giving a tighter confidence interval of 1.03-1.64. The real effect could be as little as a 3% increase in violent crime arrests, but it is still statistically significant. For 6-year-olds, the ratio is a much more convincing 1.48 (95% CI 1.15-1.89). It seems unusual to me that lead would have a more profound effect the older the child gets, but I need to look into it further. For a quick review of the concept of CI, see my <a href="http://hisscienceistootight.blogspot.com/2012/12/the-language.html">previous</a> post on it. It really matters.<br />
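To make the confidence-interval point concrete, here is a minimal sketch of how a risk ratio and its 95% CI are computed from raw counts. The counts are invented for illustration; they are not the Cincinnati study's data, and the study's actual model is more sophisticated than this crude two-group comparison:

```python
import math

def risk_ratio_ci(exposed_events, exposed_total,
                  unexposed_events, unexposed_total, z=1.96):
    """Risk ratio with a Wald 95% confidence interval on the log scale."""
    r1 = exposed_events / exposed_total
    r0 = unexposed_events / unexposed_total
    rr = r1 / r0
    # Katz standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1 / exposed_events - 1 / exposed_total
                   + 1 / unexposed_events - 1 / unexposed_total)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Small samples give a wide interval that straddles 1.0 (no effect)...
rr, lo, hi = risk_ratio_ci(12, 40, 9, 40)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # CI includes 1.0

# ...while the same ratio at ten times the sample size excludes 1.0.
rr, lo, hi = risk_ratio_ci(120, 400, 90, 400)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Same point estimate, very different conclusions: an interval that includes 1.0 is compatible with no effect at all, which is exactly why the prenatal result above is so unhelpful.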
<span style="line-height: 18px;"><span style="font-family: inherit;"><br /></span></span>
Obviously, we can't take this a step further into experimental data to strengthen the hypothesis. We can't deliberately expose some children to lead and not others to see the direct effects. This is the best we can do, and it's possibly quite meaningful, but perhaps not. There's no way to say with much authority one way or the other at this point, and not just because of the smallish sample size and the mixed results on significance. Despite being an improvement over cross-sectional designs, a cohort study is still measuring correlations, and we need more than one significant result. More cohort studies just like this one, or perhaps retrospective studies done more quickly on previously collected blood samples, are absolutely necessary to draw any conclusion on causality. Right now, this all still amounts to a hypothesis without a clear mechanism of action, although it's a hypothesis that definitely deserves more investigation. There are a number of other studies mentioned in the article showing other negative cognitive and neurological effects that could certainly have an indirect impact on violent crime, such as ADHD, aggressiveness, and low IQ, but that's not going to cut it either. By all means, we should try to make a stronger case for government to actively minimize children's exposure to lead more than we currently do, but we really, really should avoid statements like this:<br />
<span style="color: #333333; font-family: inherit; line-height: 18px;"><br /></span>
<br />
<blockquote class="tr_bq">
<span style="background-color: white; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px;">Needless to say, not every child exposed to lead is destined for a life of crime. Everyone over the age of 40 was probably exposed to too much lead during childhood, and most of us suffered nothing more than a few points of IQ loss. But there were plenty of kids already on the margin, and millions of those kids were pushed over the edge from being merely slow or disruptive to becoming part of a nationwide epidemic of violent crime. Once you understand that, it all becomes <b>blindingly obvious </b>(emphasis mine). </span><em style="background-color: white; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px;">Of course</em><span style="background-color: white; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px;"> massive lead exposure among children of the postwar era led to larger numbers of violent criminals in the '60s and beyond. And </span><em style="background-color: white; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px;">of course </em><span style="background-color: white; font-family: Verdana, Tahoma, Arial, Helvetica, 'Bitstream Vera Sans', sans-serif; font-size: 13px; line-height: 24px;">when that lead was removed in the '70s and '80s, the children of that generation lost those artificially heightened violent tendencies.</span></blockquote>
<br />
Whoa. That's, um, a bit overconfident. Still, it's beyond debate that lead can have terrible effects on people, and although there is no real scientific basis for declaring this violent crime link settled in such strong language, it's a mostly benign case of confirmation bias, complete with putting the blame for inaction on powerful interest groups. His motive is clearly to argue that we can safely add violent crime reduction to the cost-benefit analysis of lead abatement programs paid for by the government. I'd love to, but we just can't do that yet.<br />
<br />
<img src="http://www.motherjones.com/files/Lead_ROI_630.gif" /><br />
<br />
The $60B figure seems pretty contrived, but it is a generally accepted way to quantify the benefit of removing neurotoxins in wonk world. The $150B is almost completely contrived, and its very inclusion in the infographic is suspect. I certainly believe that spending $10B on cleaning up lead would be well worth it regardless, and I even question the value of a cost-benefit analysis in situations like this, but that doesn't mean I'm willing to more or less pick numbers out of a hat. That's essentially what you're doing if you only have one study that aims to address the ecological fallacy.<br />
<br />
The big criticism of appealing to evidence would obviously be that it moves at a snail's pace, and there's a possibility we could be hemming and hawing over, and delaying action on, what really is a dire public health threat. Even if that were the case, though, public policy often works at a snail's pace too. If you're going to go after it, you gotta have more than one cohort study and a bunch of cross-sectional associations. Hopefully this gives you a bit more insight into how regulatory agencies like the EPA look at these issues. If this were to go in front of them right now, I can guarantee you they would not act on the solutions Drum presents based on this evidence, and instead of throwing your hands up, I figure it's better to have an understanding of why that would be the case. It's a bit more calming, at least.<br />
<br />
<b>Update:</b> I reworded the discussion on the proper hypothesis of a cross-sectional study to make it more clear. Your initial hypothesis in any cross-sectional study should be that the exposure has no association to the outcome.<br />
<br />
<b>Update 2: </b>An edited version of this blog now appears on <a href="http://blogs.discovermagazine.com/crux/2013/01/08/does-lead-exposure-cause-violent-crime-the-science-is-still-out#.UO11QG80WSo">Discover's </a>Crux blog. I'm amazed to see the response this entry got today, and I can't say enough about how refreshing it is to see Kevin Drum <a href="http://www.motherjones.com/kevin-drum/2013/01/lead-and-crime-assessing-evidence">respond</a> and refine his position a little. In my mind, this went exactly how the relationship between journalism and science should work. Perhaps he should have some latitude to make overly strong conclusions if the goal is really to get scientists to seriously look at it.<br />
<br />
<b>Update 3: </b>This just keeps going and going! Some good criticism from commenters at <a href="http://marginalrevolution.com/marginalrevolution/2013/01/what-is-the-critical-view-on-the-lead-crime-correlation.html">Tyler Cowen's</a> blog, as well as <a href="http://andrewgelman.com/2013/01/the-difference-between-significant-and-non-significant-is-not-itself-statistically-significant/">Andrew Gelman</a> regarding whether I'm fishing for insignificant values. You can find my responses to each in their comment sections. Perhaps I did focus on the lower bounds of CIs inappropriately, but I think the context makes it clear I'm not claiming there's no evidence, just that I'd like to see replication. In that case, I think it's arguably pretty fair.<br />
<br />
<b>Update 4!!! </b>This thing is almost a year old now! I've thought a lot about everything since, and wanted to revisit. <a href="http://hisscienceistootight.blogspot.com/2013/12/revisiting-lead-and-violent-crime-year.html" target="_blank">Read if ya wanna</a>.<br />
<br />Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com26tag:blogger.com,1999:blog-6329571067764673239.post-10184400180577560602012-12-31T14:53:00.000-06:002013-04-29T06:24:54.734-06:00GMOs Part 2: How The Digestive System Works<br />
In my <a href="http://hisscienceistootight.blogspot.com/2012/12/what-to-talk-about-when-youre-talking.html">last post</a>, I gave an overview of why there's no scientific basis for claiming that GMOs by definition are harmful for human health, focusing mostly on the molecular biology angle. Now, I'd like to add on to that by describing how food gets digested on a molecular level. If GMOs are going to be harmful, it's going to happen on this scale, and it's going to be unique to the specific GMO in question. Hopefully, both pieces together will help you spot what the real issues are and what is BS in the GMO debate. Not coincidentally, a major concept here is that DNA is DNA, just like in the molecular biology discussion.<br />
<br />
Everything we ingest is essentially made up of four basic types of macromolecules: sugars, fats (or lipids), proteins, and nucleic acids. We're only concerned with the latter two, because all life, genetically modified or not, uses the same range of sugars and fats. Let's start with how proteins are digested. Remember, the point of a GMO is to get an organism to code for a specific different protein, or set of proteins, that it wouldn't otherwise produce. Proteins are fascinating molecules, and I would totally have enjoyed an entire course on them alone. Depending on its sequence of amino acids, a protein can be an enzyme, a hormone, or used for structural purposes, like collagen or the exoskeleton of a crustacean. Our digestive system is full of hundreds of different enzymes that each break down a single type of macromolecule, including other proteins, and no matter what type of macromolecule it is, they are all broken down largely by the same type of reaction, called hydrolysis.<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="http://library.thinkquest.org/28751/media/review/figure/peptide.gif" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://library.thinkquest.org/28751/media/review/figure/peptide.gif" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The dehydration synthesis reaction for two amino acids, in this case glycine on the left, and alanine on the right. (<a href="http://library.thinkquest.org/28751/review/biochem/6.html">Source</a>)</td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
The proteins we ingest are made up of combinations of 20 different amino acids, each with a unique structure that gives it its own properties, like whether it can be in contact with water. They also vary widely in size. Both of these properties are very important in determining the structure and function of the protein, because they determine how it folds up. Each amino acid is bonded to the next by removing a hydrogen ion (H+, or proton) from one, and a hydroxide ion (OH-) from the other. The H+ and OH- spontaneously attract each other to form water, giving the reaction the name dehydration synthesis. The reaction is easily reversed by adding a water molecule back into the bond to cut it (hence hydrolysis: water cutting). Hydrolysis is thermodynamically favorable, meaning it proceeds spontaneously on its own, but slowly; enzymes are needed to speed it up to meet our body's demand for new building blocks.<br />
<br />
Enzymes in our digestive system that do precisely this include pepsin, trypsin, and chymotrypsin, each of which specializes in breaking the bond at just a few specific amino acids in the stomach and the duodenum. These enzymes are able to reach virtually all peptide bonds because the low pH in our stomachs unfolds the protein from its complex shape into more of a chain. The end result is that the protein we ate is broken up into individual amino acids for absorption into the bloodstream through the lining of our intestines. When this happens, they have none of the properties of the protein we ingested, and have no function on their own. It doesn't matter which protein it is, or where it came from; it almost always ends up as nonfunctional pieces that are recycled in new dehydration reactions elsewhere in the body to create new proteins. There's an exceptionally small chance for a novel GMO protein to survive the digestive system intact and functional, and GMO proteins are not any more likely than any other protein to do so.<br />
<br />
Many food allergies, however, are indeed caused by proteins that resist full digestion at one or several of these steps for one reason or another. When this happens, our immune systems recognize the foreign protein and attack it, causing the allergy. It's certainly plausible that this could happen with any GMO protein, but the risk is not deemed serious enough by the FDA to require a clinical trial for new GMOs. I think it's perfectly reasonable for consumers to demand testing for potential food allergies, but I also think the potential risk is quite a bit overhyped. It's certainly worthy of debate. I'd be open to requiring independently operated small-scale clinical trials.<br />
<br />
Digesting the nucleic acids DNA and RNA follows a very similar process. They too are polymers, with each building block bound by a dehydration reaction, and they are broken up by hydrolysis by enzymes called nucleases. The resulting nucleotides can either be reused to build new nucleic acids, or be broken down further into a sugar, a base that makes up the genetic code, and a phosphate group.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQicxyfxEEIAbFL3_4Uqgxj-TJk_5DPsJnfKNMNW_QtS3peW_cK3w" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto; text-align: center;"><img border="0" height="237" src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQicxyfxEEIAbFL3_4Uqgxj-TJk_5DPsJnfKNMNW_QtS3peW_cK3w" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Hydrolysis of DNA. The arrow on the bottom left shows a water molecule approaching the phosphate (PO4) group that binds the pair of nucleotides. The pentagon is a sugar, called deoxyribose. The molecular structures of the bases are not shown (<a href="http://chemwiki.ucdavis.edu/Organic_Chemistry/Organic_Chemistry_With_a_Biological_Emphasis/Chapter_10%3A_Phosphoryl_transfer_reactions/Section_10.4%3A_Phosphate_diesters">Source</a>).</td></tr>
</tbody></table>
Some critics have claimed that transgenes have been taken up intact by bacteria in our digestive system, but there is <a href="http://academicsreview.org/reviewed-content/genetic-roulette/section-5/5-3-transgenes-and-gut-survival/">no evidence</a> that they've ever survived through the entire digestive tract. This is a hypothetical and exceedingly unlikely event that currently is not supported by the science. Furthermore, a transgene is no more likely than any other gene we digest to be transferred like this. And if, say, the gene providing Roundup resistance in Monsanto's corn <i>were</i> transferred in full to a bacterium, how exactly would it provide a survival benefit without constant exposure to Roundup in our digestive systems?<br />
<br />
When people claim that corporations are buying and paying for the science on GMOs, they don't realize it, but they're saying that everything we've learned over decades of research in molecular biology and digestion is suspect. Because so much of what we know is fundamental, the FDA long ago declared GMOs "substantially equivalent" to any other non-GMO food, and required no additional testing. They didn't do it to appease powerful interests; they did it because it would have been irrational to do otherwise. Their jurisdiction is public health, and in this case they made the choice that had much more science behind it. If you <a href="http://www.youtube.com/watch?feature=player_embedded&v=E4w2Cqbz0VM">listen </a>to vocal critics of GMOs, it seems they're starting to get this message.<br />
<br />
When people say "we have the right to know" what's in our food, hopefully now you can get an idea of why a label saying something is genetically modified doesn't really tell you much except what you already know: this organism was grown using industrial agricultural methods.<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com5tag:blogger.com,1999:blog-6329571067764673239.post-65577703351951963932012-12-26T09:52:00.001-06:002012-12-27T14:31:48.143-06:00What To Talk About When You're Talking About GMOsOh boy, the first-ever genetically modified animal <a href="http://www.foodsafetynews.com/2012/12/faster-growing-salmon-will-not-harm-environment-fda-says/#.UNcyjm80WSo">passed the last major regulatory hurdle</a>, and presumably has the green light for FDA approval. AquaBounty Technology's AquAdvantage salmon will soon appear (unlabeled) in supermarkets and restaurants around the United States. Back in April, the FDA finalized its environmental assessment of the salmon, and finally <a href="http://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/VeterinaryMedicineAdvisoryCommittee/UCM224760.pdf">released </a>the results this past week, indicating that it sees little risk of negative environmental impact. There are certainly valid reasons to make a personal decision to avoid this salmon, but I do want to try to separate the science from the science fiction to help you make a more informed decision about it. This seems like the perfect time to introduce the concepts and issues around GMOs as someone with personal experience with them in the lab, and, since at this point I think pretty much anyone who reads this knows me, you can vouch that I'm hardly a shill for Big Ag.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://images.sciencedaily.com/2006/07/060730131856.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://images.sciencedaily.com/2006/07/060730131856.jpg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><em style="background-color: white; font-family: Arial, Helvetica, sans-serif; line-height: 15px; text-align: left;">At 18 months, the transgenic fish is clearly much larger than the same-age normal fish. But overall growth of the same generation of fish evens out by 36 months. (Image Credit: Aqua Bounty Technologies)</em></td></tr>
</tbody></table>
To develop AquAdvantage, AquaBounty isolated the gene in the largest species of salmon, the chinook, that contributes to its size. Then, they inserted it into the genome of a wild Atlantic species to replace its own growth gene, allowing the fish to grow at twice its normal rate. However, this gene only affects the <i>rate</i> of growth, and does not create a new giant species of Atlantic salmon. Just by looking at a full-grown transgenic fish and a full-grown natural Atlantic salmon, you'd never be able to tell the difference. While I grant it may sound a little bizarre, and I definitely thought so about GMOs in general before I became more familiar with them, the techniques are hardly novel at this point. Every biology major from the last 30 years has likely performed these techniques dozens of times. The ick factor, I think, comes from a misconception about what genes actually are, and from an ethical unease about toying with nature that usually underestimates how often genes transfer from species to species. There's nothing wrong or shameful about this, really. It's more shameful that this information is so arcane.<br />
<br />
Ultimately, the first issue comes down to the fact that DNA is DNA. Think back to your high school biology class, where you learned about its structure. It's the same stuff in every cell in my body, in yours, and in the virus I got the other day that's really pissing me off right about now. A gene is basically a stretch of DNA that codes for a specific protein, with molecular switches nearby that turn it on or off. While this stretch of DNA and its switches may vary between species in size and in the precise code, there's nothing uniquely "chinook" about the chinook's DNA, just as there's nothing uniquely human about ours. Nobody is injecting any crazy new hormones into anything; the modification simply replaces the existing hormone with a pretty similar new one, using the existing one's own "machinery" to control its expression.<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcTfmEA1rhU7IIGiVuz7_FPSXXKzgw9CQ2IZqczFPr21D6deVSd3Rw" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcTfmEA1rhU7IIGiVuz7_FPSXXKzgw9CQ2IZqczFPr21D6deVSd3Rw" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Who wants a tri-colored fish that was injected with some sort of red shit? (Image Source: <a href="http://www.yvonnegraphy.com/tag/gmos/">Yvonnegraphy</a>)</td></tr>
</tbody></table>
Because of all this, any stretch of DNA is theoretically fungible from species to species. Evolution occurs because the order of base pairs changes over time due to mutations, or perhaps from viruses leaving traces that get inserted into the host's DNA. Roughly 8% of every person's DNA originally came from viruses, just from natural <a href="http://en.wikipedia.org/wiki/Horizontal_gene_transfer">gene transfer</a>. That doesn't mean we're part virus, and it certainly doesn't mean that our ancestors were more natural humans. Nature is constantly changing and adapting to outside stimuli, and the adaptation happens at the DNA level. The concept of genetic modification is to plan these types of gene transfers in advance, nothing more, nothing less.<br />
<br />
Based on everything we know about molecular biology, chemistry, and toxicology, it just doesn't make sense for GMOs, by definition and across the board, to present a public health issue, apart from the possibility that the specific novel protein the new gene encodes produces an allergic reaction. This is reflected in a broad consensus (<a href="http://www.aaas.org/news/releases/2012/1025gm_statement.shtml">AAAS</a>, <a href="http://www.who.int/foodsafety/publications/biotech/20questions/en/">WHO</a>, <a href="http://peer%20reviewed%20publications%20on%20the%20safety%20of%20gm%20foods/">systematic review of 42 peer-reviewed studies</a>, <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2408621/">Royal Society of Medicine</a>) of science and health organizations around the world. I understand why many people do not trust this consensus. It's absolutely true that chemicals or pharmaceuticals once deemed completely safe by industry and regulatory agencies were later found to be not so safe. I'm always very suspicious of industry, but I try not to let my suspicions replace or override evidence. Evidence is the sum of the best of our understanding of a particular topic, and while it can sometimes be incorrect or incomplete, in the case of GMOs the small amount of contradictory evidence is of very poor quality. I could write an entire post about the <a href="http://www.forbes.com/sites/emilywillingham/2012/12/07/what-you-need-to-know-about-gm-foods-is-half-the-story/">problems</a> in a recent study linking GMOs to cancer in rats, problems that ultimately render the experiment of little to no value, but the conclusions it claimed to reveal will never fully disappear.<br />
<br />
So, getting back to AquAdvantage, how exactly did this salmon pass an environmental assessment? The first question that needs to be answered is whether this new growth gene will escape into natural populations of Atlantic salmon. Now, think back to high school biology again. It's not enough for a gene to get passed on from a parent; to spread, the gene must provide a selective advantage. It's plausible that quicker growth would be naturally selected for once it hits the ecosystem, but certainly not definite. The risk is real enough, though, that AquaBounty claims it will make, at minimum, 95% of the transgenic fish sterile females that cannot possibly pass on the gene in question. While this does not completely eliminate the theoretical risk, it does reduce the probability of it actually occurring. To further reduce the possibility of a worst-case scenario, the fish will be bred at a facility on Prince Edward Island where, if they do escape, it will be very difficult for them to survive due to the extra-cold winter temperatures and high salinity of the Gulf of St. Lawrence that they would escape into. Ultimately, this was deemed good enough by the FDA. If you have a quibble with the FDA, look at the study and determine how and why you think this is insufficient. They aren't going to listen to assertions and arguments without evidence.<br />
<br />
That's the science behind GMOs, and none of what I said is all that controversial within the relevant fields. In the science bubble, these facts speak for themselves. It's a mistake, and a bit arrogant, to act as if that is or should be the case outside of science, in the court of public opinion, as they say. Most people make decisions based on much more complicated factors, not the least of which are anecdotes that strike an emotional chord. Sticking only to arguments based in evidence doesn't take those complicated factors into account. It's important to try to tell a compelling story, one that competes with powerful anecdotes, so my hope is that what follows is a decent attempt at one.<br />
<br />
To someone who uses evidence to guide decisions on science and technology, there is a bit of irony in labeling the political right as "anti-science", because for many on the left, the validity of science seems to depend on whether or not corporations are fighting it or holding it up. We on the left are often amazed at the mass delusions the right has accepted as truth, from climate denial, to intelligent design, to conversion therapy to "save" people who "made the choice" to be gay. Unfortunately, we are not as critical of our own misconceptions, from the way we talk about GMOs, to not vaccinating our children. I don't think the two sides are totally equivalent here, but these misconceptions all stem from a mistrust that is sometimes, but often not, justified. It's a difficult thing to accept, but if you believe in science, you have to accept that mistrust alone is not an appropriate lens through which to view biotech. I haven't read the full environmental assessment, and I'm certain there are valid criticisms to be made of it; there always are. However, none of them amounts to a headline (albeit probably a half-serious one) saying "<a href="http://grist.org/news/the-apocalypse-is-here-fda-clears-way-for-fast-growing-gm-monster-salmon/">The Apocalypse is Here</a>". I think we have the capacity to be a whole lot more rational than the current makeup of the right wing, but this requires examining our own thinking and being secure enough to accept that maybe our gut reactions are leading us to places that aren't totally justified. It's totally OK to have a visceral reaction to an article saying this fish will destroy humanity, but I hope your next reaction is to question whether those emotions are closing you off to information that makes things a bit more complex. Staying closed off <i>is</i> not OK, because very little in the world is purely black and white. It wasn't the case when George W. Bush was saying "you're either with us or you're against us", and it's not the case when thinking about organic vs. industrial agriculture.<br />
<br />
While Monsanto is certainly prone to exaggeration and unethical business practices, and is one of the largest contributors to an unsustainable food system, this does not undo the science on their side about GMOs. Technology pretty much always comes with some risk and some benefit, and the question is always whether the benefits outweigh the risks. Often, it seems that the only benefit is to the bottom line of the company that develops the GMO, and AquAdvantage is not really an exception. The benefits do go to AquaBounty, but what tends to get overlooked is that fish farmers who can cycle out their enclosures more quickly will also see a benefit. Plenty of family farmers actively choose to plant Roundup Ready corn because they perceive it to be a more reliable means to provide for their families. I may not want to support either one with my own money, because I don't want to support "conventional" agriculture, and I don't approve of fish farming, but I'm an unapologetic pragmatist. The burden is on us to demonstrate a better way while accepting that <i>economics matter</i>, and they matter from a self-interest point of view. I don't want to force someone to be more environmentally responsible with no assurance that their ability to finance their huge, expensive equipment is safe, and I don't want to advocate for any policy that is based more in ideology than evidence. Most of these farmers are heavily in debt and make short-term economic decisions because of it, so make your solution more enticing from a short-term economic point of view. Demonstrate its utility, and do it without expecting much help from government. We really need a better way.<br />
<br />
Right now, scientists are working to develop drought-resistant GMOs that can survive dry spells like the one we went through in the Midwest this past summer, while requiring less irrigation and conserving our aquifers. There's also a group working on crops that use less nitrogen, potentially minimizing the use of synthetic fertilizers made from fossil fuels, and thus reducing the type of agricultural runoff that has created a giant dead zone in the Gulf of Mexico. Some of these issues could possibly be helped by conventional breeding, but compared to genetic engineering it's less efficient in time and money spent. It's certainly possible that neither GMO ever pans out at all. Sure, the ideal solution is to grow our food in diverse fields, without monocropping and synthetic inputs, but I don't see much value in dismissing something that tries to improve the latter issue without forcing farmers to adopt an entirely new and economically unproven method. We certainly are willing to give electric vehicles time to develop, knowing full well that they are mostly impractical right now, while the electricity is still mostly generated by fossil fuels. They are nowhere near their potential, and even their biggest cheerleaders acknowledge this. The same goes, I think, for GMOs. Monsanto doesn't help by insulting our intelligence and acting as if the potential has already arrived, but really, what does that matter? Nobody really believes it. Why would we want to dismiss GMOs like these out of hand because of the techniques used to create them? Are there really more barriers to competition in biotech than in any other industry, such that the companies involved now will always control it? Just think about where IT once was. In 20 years, I have little doubt that we'll know full well where biotech stands, and today will be looked back upon the way the 1960s are in IT; just replace IBM and Bell with Monsanto and Cargill. They'll lose control, because big corporations are good at using influence and access to maintain their market share, but not at innovating. That's where the proverbial college dropout in his or her garage comes in, and they'll most definitely be coming.<br />
<br />
Please don't be afraid to comment on this post if you disagree. I'm happy to engage with people who think I'm crazy.<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com0tag:blogger.com,1999:blog-6329571067764673239.post-79031873933954620992012-12-14T10:40:00.000-06:002012-12-14T12:03:17.253-06:00The LanguageThe language of science, of course, is math. In physics and chemistry, you need to learn algebra and calculus to have more than a passing knowledge of the relevant topics. For our purposes, where we are exploring the effects of environmental risk factors or treatments on health and the environment, the language is statistics. If you are not familiar with the basics, quite simply, you will easily be led astray by hype. There's no possible way to put all of the basics in a single blog post, much less a readable and interesting one, but I do think you can try to highlight what separates someone like Nate Silver from Dick Morris. So I'm totally gonna. You don't have to understand the results section of a study to correctly gauge whether what you're being told is important, but you do need to understand the concepts behind it. The two most important concepts deal with the nature of randomness: confidence intervals, and errors in hypothesis testing.<br />
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><img src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcRRD7VDzZ5GWPivdQrhpUIzQ83ahj_1gBc7-fekIQfK85YpHpF6" style="margin-left: auto; margin-right: auto;" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Don't be like Dick. Too many people are Dicks.</td></tr>
</tbody></table>
Virtually every study we will come across involves a rather simple, but important, concept: sampling. When you think about it, it's really quite amazing how many of the recent polls in the presidential election, using only around 1,500 random people, were able to so accurately predict the results of a voting population that ended up being over 125 million people. The assumption that a random sample accurately reflects the larger population like this is the basis for studies that link cancer to various agents, or that measure the prevalence of a certain contaminant in Lake Michigan.<br />
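If you're curious where that accuracy comes from, the margin of error of a simple random sample depends almost entirely on the sample size, not on the size of the population being sampled. Here's a quick sketch (idealized: real polls weight their samples and make other adjustments, so treat this as a back-of-the-envelope illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.
    The worst case is p = 0.5, which is what pollsters usually quote."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of ~1,500 random voters:
moe = margin_of_error(1500)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 2.5 points
```

Notice that the 125 million voters never enter the formula. That's the counter-intuitive part, and it's why 1,500 people are enough whether the population is a city or a country.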
<br />
Unlike a presidential poll, however, you're not limited to a handful of possible results (e.g. Obama, Romney, undecided). If you're looking at something like the blood pressure readings for people taking a certain medication for hypertension, you could get any conceivable result, although the probability can reasonably be considered 0 outside of a certain range. We're dealing with normal distributions, specifically two different normal distributions, one for exposure and one for no exposure, and then comparing them. This, in a nutshell, is the crux of data analysis. We want to know what the middle of the curve (i.e. the mean) for people or plots of land or crops exposed to a certain treatment tells us, compared to what the middle of the curve for those not exposed tells us.<br />
<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Standard_deviation_diagram.svg/325px-Standard_deviation_diagram.svg.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Standard_deviation_diagram.svg/325px-Standard_deviation_diagram.svg.png" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">The normal distribution. Taken from <a href="http://en.wikipedia.org/wiki/Normal_distribution">Wikipedia</a></td></tr>
</tbody></table>
<div style="text-align: left;">
Before we discuss this, though, let's take a closer look at the normal distribution. The blue shading shows what I mean by the probability of getting nearly any conceivable number. The percentages are the probabilities that you'll find any single data point in that range. The key takeaway is that almost anything is technically possible, so when you take only a single set of measurements, you always know in the back of your mind that the results could be a total fluke. When scientists report the results of their studies, they acknowledge this by listing the confidence intervals (CI) around their estimates, just as presidential polls openly declare their margin of error (MOE). There has to be some sort of cutoff for readers to understand the impact of your results. A CI or an MOE gives the range in which you can be 95% confident the true mean lies; more precisely, if you theoretically repeated the whole experiment over and over, about 95% of the intervals you computed would contain the true mean. The more individual data points you take, the more certain you can be that your curve represents reality, and the tighter this range gets, but there's always some uncertainty, and it gets compounded a bit when you start testing one curve against another.</div>
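To make that concrete, here's a rough sketch of computing a 95% CI for a mean, using the normal approximation (fine for reasonably large samples). The blood pressure readings are made up for illustration:

```python
import math
import statistics

def confidence_interval_95(data):
    """95% CI for the mean, using the normal approximation
    (mean +/- 1.96 standard errors)."""
    mean = statistics.mean(data)
    sem = statistics.stdev(data) / math.sqrt(len(data))  # standard error of the mean
    return (mean - 1.96 * sem, mean + 1.96 * sem)

# Hypothetical systolic blood pressure readings (mmHg):
readings = [128, 135, 121, 140, 132, 126, 138, 130, 124, 133]
low, high = confidence_interval_95(readings)
print(f"mean {statistics.mean(readings):.1f}, 95% CI ({low:.1f}, {high:.1f})")
```

Double the sample size with similar variability and the interval tightens by a factor of about the square root of 2; that's the "more data points, tighter range" effect in action.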
<br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><img src="http://www.riskglossary.com/images/ex1_standard_deviation.gif" style="margin-left: auto; margin-right: auto;" /></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Left: High variability, thus a large CI. Right: Low variability, thus a smaller CI</td></tr>
</tbody></table>
<br />
Science always starts with a hypothesis (not synonymous with theory! Don't even!) that needs to be tested. Sometimes your peaks are pretty close together, and sometimes they're further apart. Those of us working with statistics put the burden of proof on showing something "statistically significant", which is to say the burden is on finding evidence in the numbers of a meaningful difference in results. Our null <a href="http://en.wikipedia.org/wiki/Null_hypothesis">hypothesis</a> is <b>always</b> that there is no meaningful effect, with the assumption that the two curves (samples) come from the same population. It's similar to how you assume that the Gallup and Rasmussen polls, although they may be telling you slightly different things, are measuring the same electorate.<br />
<br />
One of the big questions we are asking, of course, is: if these two curves really did come from the same population, what is the probability that we'd see a difference at least this large just by chance? This is represented in studies as the "p-value". Obtaining a p-value first requires you to "fit" your data to a standard normal distribution, which for our purposes is generally more than acceptable. You can't compare two curves unless you compare them against the same standard. I'll spare you the details of how it's done; just be aware that the comparison is made fairly.<br />
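If you want to peek at the machinery, here's a rough sketch of comparing two samples. It uses a Welch-style t statistic with a normal approximation for the two-sided p-value; real analyses use the t distribution (and a library like scipy), so treat this as illustrative, with made-up data:

```python
import math
import statistics

def two_sample_p_value(a, b):
    """Welch's t statistic with a normal approximation for the
    two-sided p-value -- a simplification for illustration only."""
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.variance(a) / na + statistics.variance(b) / nb)
    t = (statistics.mean(a) - statistics.mean(b)) / se
    # Two-sided p-value from the standard normal distribution:
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

# Hypothetical blood pressure readings for treated vs. untreated groups:
treated = [118, 122, 115, 120, 117, 121, 116, 119]
control = [128, 131, 125, 134, 127, 130, 129, 126]
t, p = two_sample_p_value(treated, control)
print(f"t = {t:.2f}, p = {p:.6f}")
```

Here the curves are far apart relative to their spread, so the p-value comes out tiny: under the null hypothesis that both groups came from the same population, a gap this big would almost never happen by chance.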
<br />
You can maybe get a feel for the challenges of this question from the image below. Remember, we're taking one set of readings for each treatment, and if we were to take another set just for good measure, the peak could be quite different just by chance; the true peak could plausibly be anywhere in your confidence interval. Sometimes you see this from week to week in the presidential polls, when the 1,500 or so people who respond to the poll in one week seem to show a major swing in opinion from the week prior. News outlets tend to run with the horse-race narrative, looking for the one gaffe or moment that caused this crazy swing. The Nate Silvers of the world look at the swing and say, "Simmer down, people. It's almost certainly due to randomness."<br />
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcTgm-nV4Mr3YRDeXJQWkEFL_3Yfab4SfG4iBYd8BghucyewDwDL" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcTgm-nV4Mr3YRDeXJQWkEFL_3Yfab4SfG4iBYd8BghucyewDwDL" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">For our studies, this is what's being analyzed! Taken from <a href="https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcTgm-nV4Mr3YRDeXJQWkEFL_3Yfab4SfG4iBYd8BghucyewDwDL">Missouri State</a></td></tr>
</tbody></table>
<br />
Because of this effect, when your curves are close together, it's going to be pretty difficult to tell the two apart. You are going to have a very high probability of failing to reject your hypothesis that there is no effect, because you (hopefully) have a good amount of data, and the two groups look pretty similar. The yellow part takes the randomness of your sample into account and highlights the possibility that we just caught a fluke: maybe there is a real difference out there, but by total chance we didn't happen to catch it.<br />
<br />
The red shading, conversely, shows how we allowed a 5% chance of the opposite error: finding evidence of an effect purely by random chance when no real effect exists. In practice, it's fairly rare to get curves so far apart that you're practically certain you found evidence of an effect. If you are reading about a study, it's because the researchers found this evidence and the media thought it would generate interest. By convention, the researchers allowed themselves that same 5% chance of being in error, and oftentimes the reported p-value is pretty close to the 5% cutoff.<br />
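That 5% error allowance is easy to see for yourself in a simulation: draw two samples from the exact same population over and over, and "significant" differences still show up roughly 5% of the time. A sketch, with everything made up:

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

def false_positive_rate(trials=2000, n=30):
    """Draw two samples from the SAME normal population repeatedly
    and count how often the t statistic crosses the ~5% cutoff anyway."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(100, 15) for _ in range(n)]
        b = [random.gauss(100, 15) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        t = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(t) > 1.96:  # the conventional ~5% two-sided cutoff
            hits += 1
    return hits / trials

rate = false_positive_rate()
print(f"false positive rate: {rate:.3f}")  # close to 0.05
```

Run one study and you can't tell a real effect from one of these flukes. Run twenty studies of a nonexistent effect and, on average, one of them will look "significant," which is exactly why replication matters.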
<br />
And now you know, in excruciating detail, why any single study is just one piece of evidence to throw onto the scale and weigh, even in the best possible circumstances. But statistical uncertainty is just the very, very tip of the iceberg.<br />
<br />Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com0tag:blogger.com,1999:blog-6329571067764673239.post-46124996292339501642012-12-10T13:03:00.001-06:002012-12-10T13:03:37.257-06:00Sometimes One Article IS Meaningful*<br />
<div style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;">
</div>
<br />
A new <a href="http://jnci.oxfordjournals.org/content/early/2012/12/05/jnci.djs461.abstract">meta-analysis</a> published in the latest edition of JNCI provides strong evidence that higher levels of carotenoids in women's bloodstreams may reduce the risk of breast cancer. <a href="http://www.huffingtonpost.com/2012/12/10/carotenoids-breast-cancer-risk-fruits-vegetables-micronutrients_n_2252815.html?utm_hp_ref=healthy-living">Here's</a> a pretty accessible article from HuffPo, and <a href="http://www.medicalnewstoday.com/articles/253783.php">another</a> that goes a bit more in-depth into the findings, what carotenoids are, and where you can find them. If you are a woman at risk of breast cancer, it's well worth reading. There's a lot to consider, though, so let's get to it.<br />
<br />
In my last post, I said in no uncertain terms that the conclusions of any one study do not represent the full story; at best, a properly done study provides a possible clue. A lot of times, it provides little more than evidence that a hypothesis warrants further investigation. There are two potential exceptions to be on the lookout for when reading articles about health and medicine, and most publications are good about saying what type of analysis was performed.<br />
<br />
One is called a systematic review, which is pretty much what it sounds like. The researcher surveys all of the evidence out there on a particular subject and lays it all out in a single article. The other possibility is a meta-analysis, which is similar except that a new statistical analysis is performed on the multiple studies put together, essentially expanding the sample sizes of the people exposed to a treatment or pollutant, and the controls who are not. The latter is what this paper did.<br />
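The statistical details vary from paper to paper, but a common fixed-effect approach weights each study's estimate by the inverse of its variance, so more precise studies count for more in the pooled result. A sketch with made-up numbers (the real carotenoid analysis is more involved than this):

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Fixed-effect (inverse-variance) meta-analysis:
    precise studies get more weight in the pooled estimate."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect estimates (log relative risks) from five studies:
effects = [-0.22, -0.10, -0.35, -0.05, -0.18]
ses = [0.15, 0.20, 0.25, 0.10, 0.12]
pooled, se = fixed_effect_pool(effects, ses)
print(f"pooled effect {pooled:.3f}, 95% CI "
      f"({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")
```

Note how the pooled standard error ends up smaller than any single study's: that's the "expanded sample size" benefit in numerical form.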
<br />
You don't need much experience with data analysis to know that increasing the sample size can make results more representative of the general population, potentially allowing you to draw a stronger conclusion. Every Royals fan knows by now that our April enthusiasm will be sucked dry by Memorial Day, when more games get played and hot starts level off, or regress to the mean, so to speak.<br />
<br />
The analysis should also smooth out the variations that can happen in smaller-scale results due to bias or chance, and provide insight into the "true" effect. Bias, in the sense I'm using it, does not refer to a Fox News-like investigator deviously "cooking the books" for a preferred outcome, but rather a tendency to under- or overestimate the effect of the exposure due to characteristics of the study subjects. This particular analysis was done on eight cohort studies, which, suffice to say at this point, means that the subjects were not randomly assigned by the investigators, making the results particularly subject to these unintentional biases. The funnel plot below provides a visual representation; any one of its circles could be the result of this bias, and the theory is that looking at the fuller picture minimizes that effect:<br />
<br />
<a href="http://upload.wikimedia.org/wikipedia/commons/thumb/6/69/Funnel_1.png/220px-Funnel_1.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/thumb/6/69/Funnel_1.png/220px-Funnel_1.png" /></a><br />
<br />
Each little circle represents the result of a single published study. Just by chance, there should be some results that do not show a meaningful effect (negative, left of center), and some that do show a meaningful effect (positive, right of center). This is essentially just another visual representation of the bell curve that we're all familiar with.<br />
<br />
The big caveat, and this is the case for all reviews and meta-analyses, is that it's possible that literally every published study out there could be over- or underestimating the effect, so that your "true" effect may still be questionable. This occurs because a study that doesn't show the hypothesized effect (a negative result) is less likely to be published. Journals like to publish studies that show something interesting, on the assumption that a study where nothing interesting happened isn't worth reading. Here's a visual representation of what's called publication bias:<br />
<a href="http://upload.wikimedia.org/wikipedia/commons/thumb/b/bd/Funnel_2.png/220px-Funnel_2.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/thumb/b/bd/Funnel_2.png/220px-Funnel_2.png" /></a><br />
<br />
When negative results do not get published, you get a funnel plot that skews in a single direction, like the graph on the right. If you look at the circles in both images, you can clearly see that their center (i.e. the mean, a.k.a. your result) is quite different. The image on the right overestimates the effect of whatever the patients are exposed to. It's certainly plausible that our carotenoids are prone to this situation. How do we know that there aren't 10 studies sitting in various researchers' file cabinets that will never get published because they were negative? They obviously aren't going to be included in a meta-analysis if they aren't published.<br />
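You can simulate this file-drawer effect directly: generate a pile of small studies of a modest true effect, "publish" only the ones that cleared statistical significance, and watch the average published effect inflate. Everything below is made up for illustration:

```python
import random
import statistics

random.seed(7)  # fixed seed for reproducibility

def simulate_studies(n_studies=200, n=40, true_effect=0.1):
    """Each 'study' compares a treated and a control sample where the
    true difference is small. Returns every effect estimate, plus the
    subset that cleared the significance bar (the 'published' ones)."""
    all_effects, published = [], []
    for _ in range(n_studies):
        treated = [random.gauss(true_effect, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(treated) / n
              + statistics.variance(control) / n) ** 0.5
        all_effects.append(diff)
        if abs(diff / se) > 1.96:  # only "interesting" results get published
            published.append(diff)
    return all_effects, published

everything, published = simulate_studies()
print("true effect: 0.10")
print(f"mean of all {len(everything)} studies: {statistics.mean(everything):.2f}")
print(f"mean of {len(published)} published studies: {statistics.mean(published):.2f}")
```

The studies that happen, by chance, to overshoot the true effect are exactly the ones that clear the significance bar, so a meta-analysis built only on published results inherits that overshoot.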
<br />
So what's the real conclusion to take away from all this?<br />
<br />
There seems to be pretty strong evidence that carotenoids may provide some sort of small-ish protective effect in regard to breast cancer, and only breast cancer as far as all of this evidence is concerned. I'll show you how to look at the results section, with all the statistics gibberish, and gauge the effect for yourself some other time. There are plausible mechanisms for how exactly this would work described in the HuffPo article, so it's not some mysterious shot in the dark. However, we're still well short of definitive proof. Eat fruits with high carotenoid content because they generally are yummy and are healthful in many other ways, too. Do not buy a $20 bottle of carotenoid supplements, and beware anyone trying to sell you on them. And look at a headline such as <a href="http://www.examiner.com/article/proof-that-fruit-and-vegetable-diet-helps-prevent-breast-cancer">this one</a> and roll your eyes, now that you know better.<br />
<br />
I'll come back to publication bias from time to time, because it's everywhere. The pharmaceutical companies run large controlled experimental trials where the subjects <b>are</b> randomly assigned (i.e. the results are considered more conclusive than a cohort study's), and they have plenty of incentive not to publish when the trials have a negative result. Everyone likes a good Big Pharma bashing sesh, and I'm happy to separate the genuine bullshit they pull from the conspiracies that don't really hold up to much scrutiny.<br />
<br />
<br />Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com0tag:blogger.com,1999:blog-6329571067764673239.post-18485605016007962142012-12-07T10:03:00.001-06:002012-12-07T10:15:41.000-06:00Why Should I Trust Your "Agenda"?I think most of what I'll be doing in this space will be providing context, nuance, and a scientific perspective on studies making the rounds in health and the environment. A recurring theme will be the cognitive gap between how a scientist reviews these studies vs. how they are presented in the media. Why am I so insistent on this? It's as good a place for a first real blog post as any other.<br />
<br />
I think <a href="http://scienceblogs.com/insolence/2012/12/07/a-simultaneously-sympathetic-and-unsympathetic-commentary-on-an-antivaccine-screed/">this post</a> by Orac at scienceblogs is a good start, and helpfully illustrates why evidence matters.<br />
<br />
<blockquote class="tr_bq">
<span style="background-color: white; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 14px; line-height: 22px;">One majoor (sic)—perhaps the major—difference between skeptics and cranks like antivaccinationists is that skeptics recognize human cognitive weaknesses that allow us to be misled so easily by spurious correlations. We realize that, far more often than we are prepared to believe, things really do happen by coincidence. When there are enough numbers, and there can be a lot of coincidences.</span></blockquote>
<br />
Scientists are trained in an often counter-intuitive thought process, one that simply doesn't come naturally to humans. This way of thinking even has its own language that is really the exact antithesis of the language used in journalism.<br />
<br />
<blockquote class="tr_bq">
<span style="background-color: white; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 14px; line-height: 22px;">Science is, in essence, inoculation against these tendencies to draw false (conclusions) and to confuse correlation with causation, a weapon against the limitations of individual observations. However, it always interests me “what we’re up against,” because it goes very much against the grain to think scientifically. Our brains are not hard-wired that way. Learning to accept science over one’s own observations does not come naturally; so it is not surprising that so many people have a great deal of difficult doing just that.</span> </blockquote>
<br />
My goal will be to avoid condescending language and pejoratives, because that ultimately does nothing to inform people who aren't already part of the choir, but I think the gist of this idea couldn't be more spot-on. Weird things happen, and often we really don't know what the cause is. To us, that's entirely OK. That challenge is why we do what we do, and it's unforgivable to let ideology and/or an emotional reaction guide us to an answer. Science, of course, is constantly used to promote an agenda, and it's allowed to be largely because of what I like to call "single-study syndrome".<br />
<br />
Scientists set a very high bar to be convinced of anything. Our first instinct is to essentially tear apart every study and claim that comes out, looking for reasons why its conclusions are limited, or possibly even worthless, even if the conclusion seems on its face to be totally intuitive. Associations may or may not be meaningful, but they need to be shown to be statistically significant (a whole other blog post) more than once, and ideally across a couple of different study designs (another idea for a blog post!). It's best to just go ahead and think as if there are no real bombshells in science. Once the headlines fade away, there really aren't.<br />
<br />
If you want to keep checking my blog, all I really ask of you is to accept, or even just consider, the two main ideas of this post:<br />
<br />
<ul>
<li>Your instincts and personal experiences cannot be trusted to explain anything across the board for all people.</li>
<li>Neither can any one particular study.</li>
</ul>
<div>
<br />
I'm going to have a lot of fun explaining the latter, time and time again. I hope you'll enjoy reading it. </div>
Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com0tag:blogger.com,1999:blog-6329571067764673239.post-58011887972703146332012-12-03T12:18:00.002-06:002012-12-03T12:18:53.867-06:00"Everybody relax, I'm here"Welcome, welcome. Realizing I share a lot of things on Facebook that are likely of zero interest to ~97% of my friends, I started thinking, there must be some way for people to <b>choose</b> whether they want to be exposed to my links. Two years later I signed up for Blogger.<br />
<br />
Getting down to brass tacks: issues of science and technology cannot be uncoupled from politics, and deeply-held political beliefs cannot be challenged by merely stating facts. Environmental risk factors, from GMOs to synthetic chemicals, pollution, and vaccines, are constantly associated with horrible outcomes, from cancer and autism to nothing short of the end of humanity. Clearly, there are issues of <a href="https://www.cell.com/trends/microbiology/abstract/S0966-842X(12)00178-3">cultural identification</a> at play (our team vs. your team, etc.), and those simply must be considered if those of us with a particular set of skills are going to communicate effectively. My goals are simple but ambitious: explore different ways of informing people about issues pertaining to molecular biology, toxicology, biotech, medicine, nutrition, and so forth that don't just rely on spouting facts from an air of supposed authority.<br />
<br />
I want to help people understand why evidence matters, how to assess the quality of evidence, and how to appreciate the nuance of the things I spend my days immersed in. Illustrative explanations of, for instance, the limitations of certain study designs and statistical significance can't be done away with entirely, but they should be paired with language that attempts to be consensus-building. I look forward to failing and adjusting, failing some more, and seeing if maybe I stumble upon something that clicks from time to time.<br />
<br />
The title of this blog comes from Mr. Show. The title of this post comes from Big Trouble In Little China, so you better goddamn well believe there will be miscellaneous cultural bric-a-brac as well.<br />
<br />
And <strike>maybe</strike> probably a little basketball, too.Anonymoushttp://www.blogger.com/profile/04881353695735499600noreply@blogger.com0