BPA is considered an endocrine disruptor, meaning that its structure closely resembles a hormone, in this case estradiol. It's so similar, in fact, that estradiol receptors can be tricked by BPA into responding as if estradiol were bound to them, potentially affecting a number of biological activities. Observational studies have linked BPA to a variety of negative health impacts, notably obesity, neurological damage, cancer, and more recently asthma. Considering that virtually everyone is exposed to BPA through its widespread use in plastic bottles, can linings, paper receipts, and epoxy resins, these associations should naturally be cause for some concern, particularly for how BPA may affect infants and children. There is considerable debate over precisely how much concern is warranted, probably more debate than is reflected in most media accounts. The way scientists approach this question is quite at odds with the kind of information the public needs, a tension put eloquently by Richard Sharpe, a UK researcher who takes a fairly skeptical stance on BPA's harmful effects:
What is never stressed enough is that scientists work at “the borders of ignorance” – what is in front of us is unknown and we try to find our way forward by making hypotheses based on what is known. This means that we are wrong most of the time, and because of this scientists have to be very cautious about interpretations that are based on our projected ideas about what is in front of us. What decision-makers, politicians and the public want is unequivocal guidance, not uncertainty. So this creates a dilemma for scientists.

So far this is beautiful. Absolutely crucial to keep in mind. Sharpe continues:
Those who are more prepared to throw caution to the winds and make unequivocal statements are more likely to be heard, whereas a more cautious scientist saying “we’re not sure” will not be taken notice of.

I honestly don't know whether this is the case with BPA or not. The uncertainties are many, and the value of the observational studies showing all these associations is controversial. There are a number of criteria that correlative studies must meet before we can determine whether a correlation actually reflects causation, summarized from the link above.
The FDA responded that, after years of review, these criteria had essentially not been fully met, and it declined to ban the substance, specifically on the basis of criteria 5, 7, and 9. One of the criticisms of the FDA's response is that some evidence suggests even very low doses may have strong effects, and that a typical dose-response curve, one that rises steadily as the dose increases until it ultimately plateaus, does not reflect how BPA works. Rather, BPA may have something like an inverted U-shaped dose-response curve, referred to as hormesis, in which high levels have no effect at all, or perhaps even the opposite effect.
Dose-response curve of hormesis (Source)
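To make the contrast concrete, here is a minimal sketch comparing a standard monotonic dose-response curve with a non-monotonic, inverted U-shaped one. This is not taken from the FDA or NRDC analyses; the functions and parameter values are arbitrary toy choices, only meant to show the difference in shape.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy dose range (arbitrary units) -- purely illustrative, not real BPA data.
dose = np.linspace(0, 10, 200)

# Classic monotonic dose-response: rises with dose, then plateaus (a Hill/sigmoid shape).
monotonic = dose**2 / (dose**2 + 4.0**2)

# Non-monotonic "inverted U": a response that peaks at low-to-moderate doses
# and falls back toward baseline at high doses.
inverted_u = dose * np.exp(-dose / 2.0)
inverted_u /= inverted_u.max()  # normalize so both curves share a 0-1 scale

plt.plot(dose, monotonic, label="Monotonic (rise then plateau)")
plt.plot(dose, inverted_u, label="Non-monotonic (inverted U)")
plt.xlabel("Dose (arbitrary units)")
plt.ylabel("Relative response")
plt.legend()
plt.title("Why low-dose effects can't be read off a high-dose curve")
plt.show()
```

The point of the picture is simply that if the true curve looks like the second one, testing only at high doses would lead you to conclude there is little or no effect at low doses, which is exactly the argument made against relying on standard high-dose toxicology here.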
Additional studies cited by the NRDC were based on experiments performed on isolated tissue samples, which raise a similar concern, and are essentially limited to describing a potential mechanism for the chemical's effects and what sort of tissue it would ultimately affect. Another study showing an association with cardiovascular problems was cross-sectional, a design that takes a single measurement of exposure at one point in time and looks at whether higher levels of exposure are associated with a disease. As I've mentioned before, this study design is limited to generating hypotheses and is definitely not considered suitable for determining causation.
So we have a number of epidemiological associations, experimental data on tissue samples, plus some experimental data on primates and rodents, all pointing to some negative health effects, sometimes even at small doses. Couldn't these all add up to more than the sum of their parts? Sure, and there are really two major ways to validate that claim. One, which the FDA apparently thinks highly of, is to use the data from other mammals and tissue samples to develop a mathematical model that can be used to predict the effects found in humans. Another would be to approach the problem along the lines of, "given the data showing such-and-such effect at this level in animals and tissue, we can assume the probability of this translating to humans at real-world exposure is X." Nobody seems to have tried this yet, and the level of subjectivity involved in determining that X makes some researchers uncomfortable. Recently, a researcher named Justin Teeguarden developed a model to predict the levels that should typically be found in humans, and presented his findings (yet to go through peer review) at the annual meeting of the American Association for the Advancement of Science. His research determined that the levels causing effects in animals and tissues are not plausibly found in humans.
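As a purely hypothetical illustration of that second approach (nothing Teeguarden or anyone else has published; every number below is an invented placeholder), the calculation might look something like this:

```python
# A toy sketch of the "probability of translating to humans" framing.
# Every number here is an invented placeholder, not an estimate from the literature.

p_effect_in_animals = 0.9      # hypothetical: confidence the low-dose animal/tissue effect is real
p_mechanism_in_humans = 0.5    # hypothetical: chance the same mechanism operates in humans
p_exposure_reaches_dose = 0.2  # hypothetical: chance real-world exposure reaches an active dose

# Under the (strong) assumption that these are independent,
# the chance the animal finding translates to a real human effect:
p_human_effect = p_effect_in_animals * p_mechanism_in_humans * p_exposure_reaches_dose
print(f"Illustrative probability of a real-world human effect: {p_human_effect:.2f}")
```

The arithmetic is trivial; the hard and contentious part is where those probabilities come from, which is exactly the subjectivity that makes some researchers uncomfortable.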
Biologists and epidemiologists who have worked on the studies showing harmful effects question the validity of the assumptions that went into his model, as well as its lack of power to predict what the levels he suggests exist in humans actually do at either acute or chronic exposure. Tom Philpott at Mother Jones suggests that Teeguarden's past ties to the plastics industry make his research suspect, a sentiment I don't entirely share, though it's not completely irrelevant.
So what do we make of all of this? I think this is a perfect scenario for an ideological fight in which two sides dig in immediately and reach a stalemate. Studies with inherent limitations get presented to the public as more suggestive than they really are, feeding premature alarm, while industry unjustifiably dismisses the risk. If you read the FDA's response to the NRDC, it appears you're just not going to get far calling for an outright ban on a substance like BPA unless you have a good amount of longitudinal data plus experimental data on mammals using the same type of exposure as would be expected in humans. Is that the best way to go? If not, how can it be improved upon?
The NRDC's petition might have been more effective if it had been honest about the limitations of the studies supporting its argument and the uncertainties that remain (like the shape of BPA's dose-response curve). In addition to calling for a ban on BPA under the precautionary principle, there should also be a focus on safer alternatives. In other words, don't just point to a problem, especially when it's not totally cut and dried; demonstrate a workable solution, and pursue it with the same energy that's been spent trying to prove something that may not be provable, at least any time soon.
If there's a lesson that comes up time and time again in these sorts of things, it's that little in the world is purely black and white. I love exploring the shades of gray, but I can't expect everyone to. However, I'd at least love for people to acknowledge that those shades are out there, and that this is where reality tends to dwell.
Hi Scott,
Great analysis. Have you heard the audio recording from the symposium Justin organized? All the speakers seemed to use "oral is the only means of exposure" as the crux of their "it's not dangerous to humans" argument.
Specifically, that because it travels through the digestive system, it is detoxified by the liver before it ever hits the bloodstream. They repeatedly cited the figure that 99.9% is rendered biologically inactive by the liver, and used that as a basis for taking any research findings from urinary or blood samples and treating the "harmful" portion as only 0.1% of the amount actually detected.
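Just to spell out the arithmetic of that argument (the measured concentration below is a made-up example value, purely for illustration):

```python
# The symposium speakers' argument, reduced to arithmetic.
# The measured concentration below is a made-up example value, not real data.

measured_total_ng_per_ml = 2.0      # hypothetical total BPA measured in a urine/blood sample
fraction_active = 1.0 - 0.999       # their claim: 99.9% is conjugated (inactivated) by the liver

biologically_active = measured_total_ng_per_ml * fraction_active
print(f"Active BPA under the oral-only assumption: {biologically_active:.4f} ng/mL")
# -> 0.0020 ng/mL, i.e. only 0.1% of what was actually detected
```

The whole calculation hinges on the assumption that all exposure is oral and passes through the liver first; if a meaningful fraction enters through the skin, that 99.9% discount doesn't apply to it.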
But that's not the only means of exposure if you look at the research. You mention in your analysis that exposure from the skin misses some of the metabolic processes that quickly turn BPA into an inactive form called BPA-monoglucuronide. That's exactly right, and it's exactly why dermal exposure is so much more dangerous than oral exposure.
This leads to the next problem: paper recycling has created a BPA trap that collects and concentrates BPA contamination with each passing generation. The recycling process does not destroy BPA, and since a majority of the pulp used in US foodservice products (not to mention other packaging) is recycled, it's getting worse all the time.
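As a rough illustration of why that kind of loop would concentrate a contaminant, here is a toy accumulation model; the retention and input numbers are invented for illustration and are not claims about actual recycled-paper measurements.

```python
# Toy model of a contaminant accumulating through recycling generations.
# All parameters are invented for illustration; they are not measured values.

retained_fraction = 0.8   # hypothetical share of BPA surviving each recycling pass
fresh_input = 1.0         # hypothetical BPA added per generation (thermal receipts, etc.)

level = 0.0
for generation in range(1, 11):
    level = level * retained_fraction + fresh_input
    print(f"Generation {generation:2d}: relative BPA level = {level:.2f}")

# With these numbers the level climbs toward a steady state of
# fresh_input / (1 - retained_fraction) = 5x the per-generation input.
```

Whether real recycled pulp behaves anything like this depends entirely on those two parameters, which is exactly the kind of measurement being described here.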
My family and I became concerned because we all used to sell environmentally friendly foodservice packaging to institutional clients and foodservice management companies before being bought out in 2011. As we learned about BPA, it became apparent that the people trying hardest to do the right thing are in fact the ones exposing themselves the most. We privately tested items from several prominent restaurant and grocery chains and found BPA contamination in many individual items at more than 10x the level the FDA has (arbitrarily) determined safe: 50 parts per billion.
From where I sit it seems like the science is kept controversial because incomplete information justifies inaction, and listening to the two and a half hours of "The data says it's dangerous... but we just don't know" at Justin's symposium is just weird. This has been heavily researched for over 30 years, so why is it still controversial?
You can email me at adam@mindtomatter.org; I'd love to talk.
Thanks for the comment, Adam. I have not heard the audio, but I understand your concerns about it. Obviously there is some skin exposure from thermal receipts, and perhaps far more, as you have found in your analysis.
As I'm sure you well understand, when you want to see if some sort of environmental toxin or potential risk factor is harmful to humans, you obviously can't run a randomized controlled trial, so it's a long process to try to establish potential causation. You start with ecological or cross-sectional data to generate a testable hypothesis and build from there. Hopefully the researchers who build on the earlier work address the limitations of the previous work and minimize the new ones they create, but that's not always the case.
One big issue, contributing to how slowly these things move, is whether the way we even analyze the data is really appropriate. I sort of hint at this in the post, but the dominant evidence-based approach is frequentist, which has its own pros and cons. The major downside is that our prior knowledge going in doesn't really factor into the analysis. So you have to test over and over, building up the evidence p-value by p-value, using designs that have inherent limitations, where you try the best you can to control for other factors that might be producing the outcomes you see.
It would be one thing if these studies were showing a 10 or 20 or 30x higher risk of obesity, or cancer, or whatever, but the effects are much, much smaller than that. There are people out there who are skeptical that an increased risk ratio of, say, 30% from non-randomized studies is even meaningful at all.
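To make that 30% figure concrete, here's a minimal sketch (with made-up counts, not numbers from any BPA study) of how a risk ratio of about 1.3 and its confidence interval would be computed from a simple 2x2 table:

```python
import math

# Hypothetical 2x2 table from an observational study -- counts are invented for illustration.
exposed_cases, exposed_total = 130, 1000      # outcome among higher-exposure group
unexposed_cases, unexposed_total = 100, 1000  # outcome among lower-exposure group

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
rr = risk_exposed / risk_unexposed            # risk ratio, 1.3 with these counts

# Standard 95% confidence interval for a risk ratio (computed on the log scale).
se_log_rr = math.sqrt(
    1 / exposed_cases - 1 / exposed_total + 1 / unexposed_cases - 1 / unexposed_total
)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"Risk ratio: {rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Even with these generous, invented sample sizes the interval nearly touches 1.0, and none of this accounts for confounding, which is the bigger worry in non-randomized designs.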
Incomplete information does indeed justify inaction, which is sort of the major takeaway I wanted to leave the post with. People who want to advocate for action need to know how this game works. The US is not as risk-averse as the EU, obviously, so you might as well not even bother mentioning associations found in cross-sectional studies unless you clearly understand and openly acknowledge what they mean. Ultimately, I think it would be so much more worthwhile to focus effort on actually fixing the potential problem with a ready-made solution, rather than playing that game.
I'm not sure I'm cynical enough to believe that environmental groups actually PREFER the slow game and the setbacks, but... well... there's a point to it.