
Monday, October 1, 2007

Can most studies be wrong? - Part 3

[updated April 22, 2008]



This month there have been articles in several major US newspapers by:
  • Robert Lee Hotz, entitled Most Science Studies Appear to Be Tainted by Sloppy Analysis, in the September 14th Wall Street Journal [article];
  • Gary Taubes, entitled Do We Really Know What Makes Us Healthy?, in the September 16th New York Times [article]; and
  • Andreas von Bubnoff, entitled Scientists do the Numbers, in the September 17th LA Times [article].
All three cite studies that were later contradicted, such as an observational study on Hormone Replacement Therapy for menopause, cited by Taubes, that was later contradicted by a randomized clinical trial. Randomization is generally considered more reliable because it makes it less likely that one group, by chance, has an advantage, as would be the case if the treatment or comparison group were healthier, younger, etc.
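The value of randomization can be seen in a toy simulation. This is a minimal sketch with made-up numbers, not drawn from any of the studies above: a treatment with no real effect looks beneficial when healthier people self-select into it, and the apparent effect vanishes once assignment is by coin flip.

```python
import random

random.seed(0)

def good_outcome(healthy):
    # Hypothetical model: baseline health alone drives outcomes;
    # the "treatment" does nothing at all.
    return random.random() < (0.8 if healthy else 0.4)

def trial(randomized, n=100_000):
    good = {True: 0, False: 0}    # good outcomes, keyed by treated?
    count = {True: 0, False: 0}
    for _ in range(n):
        healthy = random.random() < 0.5
        if randomized:
            treated = random.random() < 0.5  # coin-flip assignment
        else:
            # Observational: healthier people are far more likely to opt in
            treated = random.random() < (0.8 if healthy else 0.2)
        count[treated] += 1
        good[treated] += good_outcome(healthy)
    # Apparent treatment effect: difference in good-outcome rates
    return good[True] / count[True] - good[False] / count[False]

print("observational apparent effect:", round(trial(randomized=False), 3))
print("randomized apparent effect:   ", round(trial(randomized=True), 3))
```

With these made-up numbers the observational comparison shows a sizeable spurious "benefit" (roughly 0.24 in expectation), while the randomized version correctly shows an effect near zero.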

Other observational findings that randomized clinical trials failed to confirm:
  • vitamins C, E and beta carotene failed to show benefit against heart disease
  • fiber failed to show benefit against colon cancer
  • fruits and vegetables failed to show benefit against heart disease
  • low dose aspirin failed to show benefit against colorectal cancer and heart disease in women
  • folic acid failed to show benefit against colon cancer


Bubnoff cites an Ontario study in which researchers combed the data to uncover preposterous hypotheses, such as one in which Sagittarians are 38% more likely to break a leg than people of other star signs. This seems to support the contention that the scientific approach often used in such studies can lead to results that are "flat out wrong".

Observational studies have the flaw that one cannot be sure the treatment and non-treatment groups are alike in important respects. For example:
  • in the case of hormone replacement therapy it was found that "women who take H.R.T. differ from those who don’t in many ways, virtually all of which associate with lower heart-disease risk: they’re thinner; they have fewer risk factors for heart disease to begin with; they tend to be more educated and wealthier; to exercise more; and to be generally more health conscious."
  • in the case of a study on cardiac disease it was found that, among those who took the placebo, the ones who complied more closely with the instructions had less heart disease. Simple adherence is thought to be associated with a host of healthful activities that can easily bias studies. If the treatment and non-treatment groups have differing levels of adherence, false conclusions can be expected.
  • doctors prescribe or do not prescribe medications based on a whole host of subtle reasons, and this may bias the populations receiving or not receiving treatment to be healthier or less healthy in the first place.
  • to the degree that a study relies on responses from the subject, the wording of the questions asked may affect the results.
  • the particular populations studied may be crucial. For example, in hormone replacement therapy the observational study showing benefit was done on younger women whereas the contradicting studies were done on older women. Maybe it is beneficial for younger or healthier women but not older ones.
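The adherence point in particular can be illustrated with another toy simulation (hypothetical numbers again): if a hidden health-conscious trait drives both pill-taking adherence and baseline heart health, then even among placebo takers the adherers will show less disease, despite the placebo doing nothing.

```python
import random

random.seed(1)

# Hypothetical model: a hidden "health-conscious" trait raises both
# adherence to the pill schedule and baseline heart health.
# Everyone here is on placebo, so the pill itself has zero effect.
n = 100_000
disease = {True: 0, False: 0}   # disease counts, keyed by adherent?
count = {True: 0, False: 0}

for _ in range(n):
    conscious = random.random() < 0.5
    adherent = random.random() < (0.9 if conscious else 0.4)
    sick = random.random() < (0.05 if conscious else 0.15)
    count[adherent] += 1
    disease[adherent] += sick

rate_adherent = disease[True] / count[True]
rate_non = disease[False] / count[False]
print(f"placebo adherers:     {rate_adherent:.3f} disease rate")
print(f"placebo non-adherers: {rate_non:.3f} disease rate")
```

With these numbers the placebo adherers end up with roughly half the disease rate of the non-adherers (about 0.08 vs 0.14 in expectation), entirely from the hidden trait, which is exactly the kind of bias that can contaminate a treatment-vs-control comparison when adherence differs between groups.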


In contrast to the above, Hotz focuses on bias, writing that "flawed findings, for the most part, stem not from fraud or formal misconduct, but from more mundane misbehavior: miscalculation, poor study design or self-serving data analysis". He cites work by Ioannidis, who re-analyzed 432 studies and concluded that almost none of them hold up to scrutiny.

Bubnoff points out that even within observational studies there are different types that provide differing levels of evidence:

"Cohort studies follow a healthy group of people (with different intakes of, say, coffee) over time and look at who gets a disease. They're considered the strongest type of epidemiological study.

Case-control or retrospective studies examine people with and without a certain disease and compare their prior life -- for how much coffee they drank, for example -- and see if people who got the disease drank more coffee in their past than those who didn't.

Cross-sectional studies compare people's present lifestyle (how much coffee they drink now) with their present health status."

He also points out that there have been successes with observational or epidemiological studies. The epidemiological studies showing that smoking is associated with lung cancer have "stood the test of time".

Differences between observational and randomized studies can sometimes be reconciled:
  • in HRT, the observational study showing benefit looked at younger women while the contrary randomized study looked at older ones.
  • the vitamin E observational study that showed benefit was of healthy people, whereas the later contrary randomized study was on heart patients who also took other medications that might have overridden the effect of vitamin E.
  • in the study that failed to show benefit from a low fat diet, it was determined that the subjects did not comply with the diet.


Taubes suggests that reporting may be at fault for not accurately portraying individual studies as probabilistic statements which are merely pieces of a larger whole. To overcome the limitations of observational studies one needs randomized clinical trials, but these tend to be expensive, often prohibitively so, which means that observational studies may be all we have to go on in many cases.

Selection bias occurs when, for example, sicker patients are given the more effective treatment. Since sicker patients do worse, the results make the more effective treatment appear worse. The reverse can also happen: if healthier patients are given a more toxic treatment, the more toxic treatment can seem better.
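A toy simulation (hypothetical numbers, not from any real trial) shows how this bias can even reverse the apparent ranking of two treatments: here treatment A truly improves survival at every severity level, yet because doctors give A mostly to the sicker patients, A looks worse in the raw comparison.

```python
import random

random.seed(2)

# Hypothetical model: A adds a genuine 10-point survival bonus over B
# at any severity, but doctors assign A mostly to sicker patients.
def survives(sick, treatment):
    base = 0.4 if sick else 0.9
    bonus = 0.10 if treatment == "A" else 0.0
    return random.random() < base + bonus

n = 100_000
alive = {"A": 0, "B": 0}
count = {"A": 0, "B": 0}
for _ in range(n):
    sick = random.random() < 0.5
    # Selection: 90% of sick patients get A, only 10% of healthy ones do
    treatment = "A" if random.random() < (0.9 if sick else 0.1) else "B"
    count[treatment] += 1
    alive[treatment] += survives(sick, treatment)

surv_a = alive["A"] / count["A"]
surv_b = alive["B"] / count["B"]
print("A survival:", round(surv_a, 3))  # looks worse...
print("B survival:", round(surv_b, 3))  # ...despite A's true benefit
```

With these numbers A shows roughly 55% survival against B's 85%, a 30-point deficit for the treatment that is actually better, purely because of who was assigned to it.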
These biases are discussed in the context of prostate cancer treatment in "The limits of observational data in determining outcomes from cancer therapy" by Sharon H. Giordano, Yong-Fang Kuo, Zhigang Duan, Gabriel N. Hortobagyi, Jean Freeman, and James S. Goodwin, Cancer, published online April 21, 2008 (DOI: 10.1002/cncr.23452), print issue June 1, 2008. Also see [link].
