Several articles recently published in noted medical journals derive from meta-analyses of pooled studies. Usually the purpose of a medical study/experiment is to see whether an intervention (X) affects a condition (Y) in a patient; the condition may be discrete (e.g. heart attack) or continuous (e.g. blood pressure). Other studies are retrospective epidemiological studies, which should be used only to suggest interventional studies. Unfortunately, retrospective studies (e.g. that a low-fat diet lessens the risk of cancer) as well as meta-analyses (e.g. that beta-carotene lowers the heart attack rate) have been shown by prospective, interventional studies to be incorrect. (Of course, the fact that a 5-year low-fat diet in white females failed to lower their incidence of either breast cancer or colon cancer has not prevented the American Cancer Society from recommending a low-fat diet to prevent cancer, but that discussion is for another blog.)
I find that medical statistics are still poorly understood by the medical-reading public, as well as by many doctors. For instance, the first study (of male cigarette smokers in Finland) to evaluate the effect of cholesterol lowering by fibrates on the heart attack rate used a 2-tailed t-test, in case cholesterol lowering turned out to be harmful. When a null result was found, the study was redone on the argument that lowering cholesterol could not possibly be harmful, and with a 1-tailed t-test the positive effect of cholesterol lowering reached statistical significance. It is difficult to explain, without the use of mathematics, why changing the analysis in this manner is incorrect from a statistical point of view. (Just look at all the arguments about the Monty Hall problem, which is really trivial Bayesian analysis.)
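The tail-switching trick is easy to show numerically. Below is a minimal sketch (the z value is invented for illustration, not taken from the Finnish trial): the same test statistic misses two-tailed significance at the 0.05 level, then becomes "significant" the moment the analysis is declared one-tailed.

```python
import math

def p_two_tailed(z):
    # P(|Z| >= |z|) for a standard-normal test statistic
    return math.erfc(abs(z) / math.sqrt(2))

def p_one_tailed(z):
    # P(Z >= z): exactly half the two-tailed value for positive z
    return 0.5 * math.erfc(z / math.sqrt(2))

z = 1.75  # hypothetical test statistic; not the actual trial data
print(round(p_two_tailed(z), 3))  # 0.08 -> not significant at 0.05
print(round(p_one_tailed(z), 3))  # 0.04 -> "significant" after switching tails
```

Nothing about the data changed; only the analyst's post-hoc choice of hypothesis did, which is exactly why the switch is illegitimate.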
Definition: meta-analysis is the combining and analysis of several studies, none of which usually reaches statistical or clinical significance on its own because of small sample size, to determine the effect of an intervention on a patient.
In any event, it is my opinion that the result of any meta-analysis should be treated as an epidemiological finding, and should serve only as a basis to suggest further interventional testing. The real problem arises when a meta-analysis falsely suggests a negative effect, and doctors then fear doing a prospective study because of the threat of malpractice. The basic problem with meta-analysis is its lack of mathematical rigor and its reliance on the subjective views of the lead investigator. I propose an on-line journal of medical meta-analytic results, rather than formal peer-reviewed publication, with the results interpreted as tentative conclusions to be tested rather than hardened "fact".
We have also read articles where two different meta-analyses of the same clinical intervention (e.g. low molecular weight heparin to prevent DVTs) reach opposite conclusions, and it is impossible to reject either study.
The following is a list of some of the problems that, in my opinion, prevent meta-analysis from reaching the statistical rigor and reproducibility that we should require of any process that affects human life and health:
Interpretation of "effect size" is totally arbitrary.
At least two authors are usually required to code the individual studies for grouping, because of the acknowledged subjective interpretation that occurs in coding. "Valid" studies require a reliability above 0.8!
The underlying level of risk in every studied group is of key importance, and often cannot be weighted properly. (e.g. the higher your cholesterol, the greater is the benefit of lowering it.)
The danger of Simpson's Paradox (or the Yule-Simpson Effect). This occurs when two smaller studies point to one result of intervention, and the pooled study points in the other direction. (Its existence is easy to demonstrate in vector space, but you can ask one of your statistical buddies to explain its basis to you.)
There are two different methods of combining results: the fixed effects model or the random effects model. The choice of approach strongly determines the results of the meta-analysis, and neither is always demonstrably "right".
Every meta-analysis requires subjective decisions, and few if any readers will redo the calculations with their own decisions.
Bias in selecting which studies to use.
There are TWO methods to measure effect: standardized mean difference and correlation, and therefore there can be two different results (but only one is published).
There are various sources of heterogeneity and selection bias that cannot be determined.
There is often post-hoc sub-group analysis in meta-analyses, and this can easily lead to false statistical conclusions. This is similar in concept to the difference between an everywhere continuous function and a piecewise continuous function.
Results are often discarded if they do not make "medical sense", but it is always tricky and suspect to discard any result.
Comparing randomized trials may not be itself a random process.
The capacity to detect bias is small if only a small number of trials is considered.
There is no statistical solution to the problem of failure to reject the null hypothesis of homogeneity, even when significant differences exist between the studies.
And finally: assessing the quality of a study in order to determine whether or not to include it in one's meta-analysis is purely subjective.
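The Simpson's Paradox item above can be demonstrated with nothing more than arithmetic. The success counts below are invented for illustration; within each of two strata the treated group does better, yet the pooled comparison reverses the direction of the effect:

```python
# Invented counts: (treated successes, treated N, control successes, control N)
strata = [
    (81, 87, 234, 270),   # stratum 1: 93% treated vs 87% control
    (192, 263, 55, 80),   # stratum 2: 73% treated vs 69% control
]

def rate(successes, n):
    return successes / n

for ts, tn, cs, cn in strata:
    print(rate(ts, tn) > rate(cs, cn))  # True: treatment wins in each stratum

# Pooling the strata reverses the direction of the effect.
pooled_treated = rate(sum(s[0] for s in strata), sum(s[1] for s in strata))
pooled_control = rate(sum(s[2] for s in strata), sum(s[3] for s in strata))
print(pooled_treated < pooled_control)  # True: 78% treated vs ~83% control
```

The reversal happens because the strata contribute very unequal numbers to each arm, which is precisely the weighting problem a meta-analyst faces when pooling studies with different underlying risks.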
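The fixed-effects versus random-effects point can also be made concrete. The sketch below pools three invented study effects with invented within-study variances, first by inverse-variance weighting (the fixed-effect model) and then by the DerSimonian-Laird random-effects method; when the studies are heterogeneous, the two "answers" differ noticeably.

```python
def fixed_effect(effects, variances):
    # Fixed-effect model: inverse-variance weights, so large precise
    # studies dominate the pooled estimate.
    w = [1.0 / v for v in variances]
    return sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

def random_effects(effects, variances):
    # DerSimonian-Laird random-effects model: estimate a between-study
    # variance tau^2 from Cochran's Q and add it to every study's variance.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)

# Invented, deliberately heterogeneous studies: (effect, within-study variance)
effects = [0.1, 0.3, 0.9]
variances = [0.01, 0.04, 0.09]

print(round(fixed_effect(effects, variances), 3))    # ~0.202
print(round(random_effects(effects, variances), 3))  # ~0.353
```

The random-effects estimate sits much closer to the small, imprecise study because adding tau^2 flattens the weights; neither number is demonstrably "right", which is the point.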
Saturday, May 30, 2009