Saturday, March 19, 2011

The Problems With Meta-Analyses (2)

     I wrote a more mathematical blog post in May 2009 describing the logical and mathematical/statistical problems with meta-analyses, but since that time many more meta-analyses have been published, and the public has discussed their results as if they were clinical fact. It is important to understand that the result of a meta-analysis should be presented only as a hypothesis, to be tested prospectively in a properly designed clinical trial, and not accepted as proven fact (such as the recent suggestion that women who take calcium supplements increase their risk of heart disease). In brief, a meta-analysis pools several studies of the same problem, none of which reaches clinical or statistical significance on its own, in the hope that the sum will be greater than its parts and that combining non-significant studies can yield a significant result!
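To see how pooling works mechanically, here is a minimal sketch of inverse-variance pooling with made-up numbers: three hypothetical studies, each individually non-significant (|z| < 1.96), whose combined estimate crosses the significance threshold. The effect sizes and standard errors are invented purely for illustration.

```python
import math

# Hypothetical effect estimates and standard errors from three small
# studies, none individually significant (|z| < 1.96 for each).
effects = [0.30, 0.25, 0.35]
ses     = [0.20, 0.18, 0.22]

# Inverse-variance (fixed effects) pooling: weight each study by 1/SE^2.
weights   = [1 / se**2 for se in ses]
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

for e, se in zip(effects, ses):
    print(f"study z = {e / se:.2f}")        # each below 1.96
print(f"pooled z = {pooled / pooled_se:.2f}")  # exceeds 1.96
```

This is exactly the "sum greater than its parts" arithmetic: the pooled standard error shrinks as studies are added, so a collection of weak signals can produce a "significant" combined result.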

     Some readily understandable problems with meta-analyses:

1) You are never told which studies the author rejected as unacceptable for his/her meta-analysis, so you cannot form your own opinion about the validity of excluding those particular studies.

2) The problem of the Simpson Paradox, or the Yule-Simpson Effect: sometimes every included study points in one direction, yet the combined analysis points in exactly the opposite direction. This has been discussed, for instance, in "Chance," the magazine of the American Statistical Association, which showed different ways of calculating Derek Jeter's batting average, with differing results, using the same data in each case.
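The batting-average version of the paradox can be checked by hand. The sketch below uses the widely cited Derek Jeter / David Justice figures from 1995-96: Justice has the higher average in each individual season, yet Jeter has the higher average when the two seasons are pooled, because the at-bats are distributed unevenly across seasons.

```python
# Widely cited Jeter/Justice figures (hits, at-bats) illustrating
# Simpson's paradox: the per-season comparison reverses when pooled.
hits_ab = {
    "Jeter":   {"1995": (12, 48),   "1996": (183, 582)},
    "Justice": {"1995": (104, 411), "1996": (45, 140)},
}

for player, seasons in hits_ab.items():
    for year, (h, ab) in seasons.items():
        print(f"{player} {year}: {h / ab:.3f}")
    h_tot  = sum(h for h, _ in seasons.values())
    ab_tot = sum(ab for _, ab in seasons.values())
    print(f"{player} combined: {h_tot / ab_tot:.3f}")
```

A meta-analysis that pools raw counts across studies with very different sizes can flip direction in exactly this way.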

3) There are two different statistical models or assumptions by which the analyzer combines the effects of the individual studies: the fixed effects model and the random effects model. The fixed effects model assumes that all studies estimate one common true effect; the random effects model assumes that the true effects themselves vary from study to study. Each model makes different assumptions about the underlying statistical distribution of the observed data, so each calculation produces different results.
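The divergence between the two models is easy to demonstrate. The sketch below pools the same four hypothetical (invented) studies both ways, using plain inverse-variance weighting for the fixed effects model and the DerSimonian-Laird estimate of between-study variance (tau^2) for the random effects model; with heterogeneous studies the two pooled estimates and their standard errors differ.

```python
import math

def pool(effects, ses, model="fixed"):
    """Inverse-variance pooling; 'random' adds DerSimonian-Laird tau^2."""
    w = [1 / s**2 for s in ses]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    tau2 = 0.0
    if model == "random":
        # Between-study variance from Cochran's Q heterogeneity statistic.
        q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1 / (s**2 + tau2) for s in ses]
    est = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return est, se

# Hypothetical, deliberately heterogeneous studies.
effects = [0.10, 0.60, 0.15, 0.70]
ses     = [0.10, 0.25, 0.12, 0.30]

print("fixed: ", pool(effects, ses, "fixed"))
print("random:", pool(effects, ses, "random"))
```

The fixed effects model lets the most precise studies dominate; the random effects model spreads the weight more evenly and widens the confidence interval, so the same data yield two different "answers."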

4) There are two different methods for measuring the effect of the clinical intervention: the standardized mean difference or the correlation coefficient. Each method produces a different end result.
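The two effect-size metrics live on different scales even when computed from the same data. The sketch below uses invented treatment/control scores from a single hypothetical study, computes Cohen's d (a standardized mean difference) from the pooled standard deviation, and converts it to the equivalent point-biserial correlation r.

```python
import math
import statistics

# Hypothetical treatment/control scores from one study (invented data).
treat   = [5.1, 6.0, 5.5, 6.2, 5.8]
control = [4.8, 5.0, 5.2, 4.9, 5.1]

# Standardized mean difference (Cohen's d with pooled SD).
n1, n2 = len(treat), len(control)
s1, s2 = statistics.stdev(treat), statistics.stdev(control)
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (statistics.mean(treat) - statistics.mean(control)) / sp

# Equivalent point-biserial correlation (valid for equal group sizes).
r = d / math.sqrt(d**2 + 4)

print(f"d = {d:.2f}, r = {r:.2f}")
```

d is unbounded while r is confined to (-1, 1), so pooling the same studies on one scale versus the other weights them differently and yields a different summary number.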

5) If we look at #3 and #4, we see immediately that there are four possible combinations of analyses, leading to four different conclusions for the same set of studies. No single paper shows all four combinations and all four possible results.

6) Finally, the choice of what constitutes a "significant" effect in any of the included studies is purely arbitrary. When this question was studied by clinical psychologists, no two analysts reached the same conclusions about what was significant across all the included studies.

     We therefore see that the result of any meta-analysis depends largely on the analyzer, and since the reader never has enough data to redo the analysis, the results must be taken on faith, which is hardly a scientific result.

     "There are three kinds of lies: Lies, Damn Lies, and Statistics"-----Mark Twain
