Wow! I can’t believe it’s photo Friday again. We had a great time last week having a few neighbors over for dinner, but forgot to take any photos. Medically I’m feeling good and just waiting for a few more test results to come in. I’ll post as soon as they do.
I spend hours researching the web for information on cancer/nutrition. While searching I came across the work of Dr. Peter Attia. Here’s a piece from his blog eatingacademy.com that I thought I’d share. It’s his response to a New York Times article about eating red meat.
The reason Gary Taubes and I founded the Nutrition Science Initiative (NuSI), which we hope to launch this summer, is to help the field of nutrition science join the other scientific fields. Most disciplines of science — such as physics, chemistry, and biology — use something called the Scientific Method to answer questions. A simple figure of this approach is shown below:
The figure is pretty self-explanatory, so let me get to the part where nutrition science is making the biggest mistakes: “Conduct an experiment.” There is no shortage of observations, questions, or hypotheses in the nutrition science world – so we’re doing well on that front. It’s that pesky experiment part we’re getting hung up on. Without controlled experiments it is not possible to establish the relationship between cause and effect.
[Just a heads up – this is going to be a recurring theme this week.]
What is an experiment?
There are several types of experiments, and they are not all equally effective at determining cause-and-effect relationships. Climate scientists and social economists (like one of my favorites, Steven Levitt, whom I’ve been fortunate enough to spend a day with), for example, often carry out natural experiments. Why? Because the “laboratory” they study can’t actually be manipulated in a controlled setting. For example, when Levitt and his colleagues tried to figure out if swimming pools or guns were more dangerous to children – i.e., was a child more likely to drown in a house with a swimming pool or be shot by a gun in a home with a gun? – they could only look at historical, or observational, data. They could not design an experiment to study this question prospectively and in a controlled manner.
How would one design such an experiment? In a “dream” world you would find, say, 100,000 families and split them into two groups – group 1 and group 2. Groups 1 and 2 would be statistically identical in every way once divided: because of the size of the population, any differences between them would cancel out (e.g., socioeconomic status, number of kids, parenting styles, geography). The 50,000 group 1 families would then have a swimming pool installed in their backyard, and the 50,000 group 2 families would be given a gun to keep in their house.
For a period of time, say 5 years, the scientists would observe the differences in child death rates from these two causes (accidental drowning and gunshot wounds). At the conclusion, provided the study was powered appropriately, the scientists would know which was more hazardous to the life of a child: a home swimming pool or a home gun.
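To make “powered appropriately” concrete, here is a minimal sketch (my illustration, not from Attia’s post) that approximates the statistical power of a two-arm trial like the hypothetical one above, using a standard two-proportion z-test under the normal approximation. The 50,000-per-arm size comes from the example; the 5-year event rates are invented purely for illustration.

```python
# Toy power calculation for a two-arm trial (normal approximation).
# All event rates below are made-up numbers for illustration only.
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_power(p1, p2, n_per_arm):
    """Approximate power to detect rates p1 vs p2 at two-sided alpha = 0.05."""
    z_alpha = 1.959964  # two-sided 5% critical value
    p_bar = (p1 + p2) / 2.0
    se_null = sqrt(2.0 * p_bar * (1.0 - p_bar) / n_per_arm)   # SE under H0
    se_alt = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
    effect = abs(p1 - p2)
    return norm_cdf((effect - z_alpha * se_null) / se_alt)

# Hypothetical 5-year death rates: 1 in 10,000 (guns) vs 5 in 10,000 (pools).
power = two_proportion_power(0.0001, 0.0005, 50_000)
print(f"power ~ {power:.2f}")
```

With these assumed rates, 50,000 families per arm gives power around 0.95, while a much smaller trial would likely miss the difference entirely; that is what “powered appropriately” buys you.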
Unfortunately, questions like this (and the other questions studied by folks like Levitt) can’t be studied in a controlled way. Such studies are just impractical, if not impossible, to do.
Similarly, to rigorously study the anthropogenic CO2–climate change hypothesis, for example, we would need another planet earth with the same number of humans, cows, lakes, oceans, and kittens that did NOT burn fossil fuels for 50 years. But since these scenarios are never going to happen, the folks who carry out natural experiments do the best they can to statistically manipulate data to separate out as many confounding factors as possible, in an effort to identify the relationship between cause and effect. Cause and effect. Say it with me one more time…cause and effect.
Enter the holy grail of experiments: the controlled experiment. In a controlled experiment, as the name suggests, the scientists have control over all variables between the groups (typically what we call a “control” group and a “treatment” group). Furthermore, they study subjects prospectively (rather than backward-looking, or retrospectively) while changing only one variable at a time. Even an otherwise well-designed experiment, if it changes too many variables at once, prevents the investigator from making the important link: cause and effect. Remember my silly example from this post a few weeks back?
Imagine a clinical experiment for patients with colon cancer. One group gets randomized to no treatment (“control group”). The other group gets randomized to a cocktail of 14 different chemotherapy drugs, plus radiation, plus surgery, plus hypnosis treatments, plus daily massages, plus daily ice cream sandwiches, plus daily visits from kittens (“treatment group”). A year later the treatment group has outlived the control group, and therefore the treatment has worked. But how do we know EXACTLY what led to the survival benefit? Was it 3 of the 14 drugs? The surgery? The kittens? We cannot know from this experiment. The only way to know for certain if a treatment works is to isolate it from all other variables and test it in a randomized, prospective fashion.
As you can see, even doing a prospective controlled experiment, like the one above, is not enough if you fail to design the trial correctly. Technically, the fictitious experiment I describe above is not “wrong,” unless someone – for example, the scientist who carried out the trial or the newspapers that report on it – misrepresented it.
If the New York Times and CNN reported the following: “New study proves that kittens cure cancer!”, would it be accurate? Not even close. Sadly, most folks would never read the actual study to understand why this bumper-sticker conclusion is categorically false. Sure, it is possible, based on this study, that kittens can cure cancer. But the scientists in this hypothetical study have wasted a lot of time and money if their goal was to determine if kittens could cure cancer. The best thing this study did was to reiterate a hypothesis. Nothing more. In other words, this experiment (even assuming it was executed perfectly from a technical standpoint) established nothing other than that the combination of 20 interventions was better than none, because of an experimental design problem.
So what does all of this have to do with eating red meat?
In effect, I’ve already told you everything you need to know. I’m not actually going to spend any time dissecting the actual study published last week that led to the screaming headlines about how red meat eaters are at greater risk of death from all causes (yes, “all causes,” according to this study), because it’s already been done a number of times by others this week alone. Three critical posts on this specific paper can be found here, here, and here. I can’t urge you strongly enough to read them all if you really want to understand the countless limitations of this particular study, and why its conclusion should be completely disregarded. If you want bonus points, read the paper first, see if you can identify its failures, then check your “answer” against these posts. As silly as this sounds, it’s actually the best way to know if you’ve really internalized what I’m describing.
Now, I know what you might be thinking: Oh, come on Peter, you’re just upset because this study says something completely opposite to what you say!
Not so. In fact, I have the same criticism of similarly conducted studies that “find” conclusions I agree with. For example, on the exact same day the red meat study was published online (March 12, 2012) in the journal Archives of Internal Medicine, the same group of authors from Harvard’s School of Public Health published another paper in the journal Circulation. This second paper reported on the link between sweetened beverage consumption and heart disease, “showing” that consumption of sugar-sweetened beverages increased the risk of heart disease in men. [On another day I will give you my thoughts on why the media chose to report on the red meat study rather than the sugar study, despite them both coming out on the same day, by the same authors, from the same prestigious university.]
I agree that sugar-sweetened beverages increase the risk of heart disease (not just in men, of course, but in women, too), along with a whole host of other diseases like cancer, diabetes, and Alzheimer’s disease. But the point remains that this study does nothing to add to the body of evidence implicating sugar, because it was not a controlled experiment. It was a waste of time and money reiterating an already strong hypothesis. That effort should have been spent on a controlled experiment.
This problem is actually rampant in nutrition
We’ve got studies “proving” that eating more grains protects men from colon cancer, that light-to-moderate alcohol consumption reduces the risk of stroke in women, and that low levels of polyunsaturated fats, including omega-6 fats, increase the risk of hip fractures in women. Are we to believe these studies? They sure sound authoritative, and the way the press reports on them, it’s hard to argue, right?
How are these studies typically done?
Let’s talk nuts and bolts for a moment. I know some of you might already be zoning out with the detail, but if you want to understand why and how you’re being misled, you actually need to “double-click” (i.e., get one layer deeper) a bit. What the researchers do in these studies is follow a cohort of several tens of thousands of people — nurses, health care professionals, AARP members, etcetera — and ask them what they eat with a food frequency questionnaire (FFQ) that is known to be almost fatally flawed in terms of its ability to accurately capture what people really eat. Next, the researchers correlate disease states, morbidity, and maybe even mortality with food consumption, or at least reported food consumption (which is NOT the same thing). So the end products are correlations — eating food X is associated with a gain of Y pounds, for example. Or eating red meat three times a week is associated with a 50% increase in the risk of death from falling pianos or heart attacks or cancer.
The catch, of course, is that correlations hold no (i.e., ZERO) causal information. Just because two events occur in step does not mean you can conclude that one causes the other. Often in these articles you’ll hear people give the obligatory, “correlation doesn’t necessarily imply causality.” But saying that suggests a slight disconnect from the real issue. A more accurate statement is “correlation does not imply causality” or “correlations contain no causal information.”
So what explains the findings of studies like this (and virtually every single one of these studies coming out of massive health databases like Harvard’s)?
For starters, the foods associated with weight gain (or whichever disease they are studying) are also the foods associated with “bad” eating habits in the United States — french fries, sweets, red meat, processed meat, etc. Foods associated with weight loss are those associated with “good” eating habits — fruit, low-fat products, vegetables, etc. But that’s not because these foods cause weight gain or loss; it’s because they are markers for people who eat a certain way and live a certain way.
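A toy simulation makes this marker-versus-cause point concrete. In the sketch below (my illustration, not Attia’s, with invented probabilities), a hidden “health consciousness” trait drives both who eats french fries often and who gains weight; the fries themselves have zero causal effect, yet the observational comparison shows fry eaters gaining weight at roughly twice the rate of everyone else.

```python
# Toy confounding demo: a hidden trait drives both diet and outcome.
# All probabilities are invented for illustration.
import random

random.seed(42)

def simulate_person():
    health_conscious = random.random() < 0.5
    # Less health-conscious people are far more likely to eat fries often.
    eats_fries = random.random() < (0.2 if health_conscious else 0.8)
    # The outcome depends ONLY on the hidden trait, never on the fries.
    gains_weight = random.random() < (0.1 if health_conscious else 0.4)
    return eats_fries, gains_weight

people = [simulate_person() for _ in range(100_000)]
fries = [p for p in people if p[0]]
no_fries = [p for p in people if not p[0]]

def rate(group):
    """Fraction of the group that gained weight."""
    return sum(gained for _, gained in group) / len(group)

# The observational "finding": fry eaters gain weight far more often,
# even though fries have zero causal effect in this model.
print(f"weight-gain rate, fry eaters:     {rate(fries):.2f}")
print(f"weight-gain rate, non-fry eaters: {rate(no_fries):.2f}")
```

A randomized experiment on the same population, assigning fries by coin flip, would show no difference between arms; only the observational slice manufactures one. That is what “markers, not causes” means.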
Think about who eats a lot of french fries (or a lot of processed meats). They are people who eat at fast food restaurants regularly (or, in the case of processed meats, people who are more likely to be economically disadvantaged). So eating lots of french fries, hamburgers, or processed meats is generally a marker for people with poor eating habits, which is often the case when people are less economically advantaged and less educated than people who buy their food fresh at the local farmer’s market or at Whole Foods (or Whole Paycheck, as I like to call it). Furthermore, people eating more french fries and red meat are less health conscious in general (or they wouldn’t be eating french fries and red meat – remember, those of us who do eat red meat regularly are in the slim minority of health-conscious folks). These studies are rife with methodological flaws, and I could devote an entire Ph.D. thesis to this topic alone.
What should we do about this?
I’m guessing most of you – and most physicians and policy makers in the United States, for that matter – are not actually browsing the American Journal of Epidemiology (where one can find studies like this all day long). But occasionally, like last week, the New York Times, Wall Street Journal, Washington Post, CBS, ABC, CNN, and everyone else gets wind of a study like the now-famous red meat study and comments on it in a misleading fashion. Health policy in the United States – and by extension much of the world – is driven by this. It’s not a conspiracy theory, by the way. It’s incompetence. Big difference. Keep Hanlon’s razor in mind: never attribute to malice that which is adequately explained by stupidity. This behavior, in my opinion, is unethical, and the journalists who report on it (along with the scientists who stand by without correcting them) are doing humanity no favors.
I do not dispute that observational epidemiology has played a role in helping to elucidate “simple” linkages in health sciences (great examples are the link between contaminated water and cholera, and the linkage between scrotal cancer and chimney sweeps). However, for multifaceted or highly complex pathways (e.g., cancer, heart disease) it rarely pans out, unless the disease is virtually unheard of without the implicated cause. A great example of this is the elucidation of the linkage between small-cell lung cancer and smoking – we didn’t need a controlled experiment to link smoking to this particular variant of lung cancer because nothing else has ever been shown to even approach the rate of this type of lung cancer the way smoking has (by a factor of over 100). As a result of this unique fact, Richard Doll and Austin Bradford Hill were able to design a clever observational analysis to correctly identify the cause-and-effect linkage between tobacco and lung cancer. But this sort of example is actually the exception, and not the rule, when it comes to epidemiology.
I trust by now you have a better understanding of why the “science” of nutrition is so bankrupt. It is based almost entirely on these observational studies. Virtually every piece of nutritional dogma we suffer from today stems from – you guessed it – an observational study. Whether it’s Ancel Keys’ observations and correlations of saturated fat intake and heart disease in his famous Seven Countries Study, which “proved” saturated fat is harmful, or Denis Burkitt’s observation that people in Africa ate more fiber than people in England and had less colon cancer, “proving” that eating fiber is the key to preventing colon cancer, virtually all of the nutritional dogma we are exposed to has not actually been scientifically tested. Perhaps the most influential current example of observational epidemiology is the work of T. Colin Campbell, lead author of The China Study, which claims, “the science is clear” and “the results are unmistakable.” Really? Not if you define science the way scientists do. This doesn’t mean Colin Campbell is wrong (though I wholeheartedly believe he is wrong on about 75% of what he says, based on current data). It means he has not done any real science to advance the discussion and hypotheses he espouses. If you want to read the most remarkable and detailed critiques of this work, please look no further than here (Denise Minger) and here (Michael Eades).
I can only imagine the contribution to mankind Dr. Campbell could have made had he spent the same amount of time and money doing actual scientific experiments to elucidate the impact of dietary intake on chronic disease. [For example, Campbell could have designed a prospective study following subjects randomized to one of two different types of diets for 10 years – plant-based and animal-based – with all other factors controlled for.] This is one irony of enormous observational epidemiology studies. Not only are they of little value; in a world of finite resources, they actually detract from real science being done.
It’s time to start doing real science. We do actually have the luxury of doing controlled experiments, in contrast to climate scientists and social economists (in fact, several have been done; they just tend to get ignored). Isn’t it time we stop guessing and find out which foods really increase our risk of dying prematurely? That’s exactly our intent with NuSI, but more on that another day…