Friday, November 30, 2012

Do Statins Reduce Your Risk of Dying from Cancer?

    There was a very interesting article published in the November 8, 2012 issue of the New England Journal of Medicine (vol. 367, pp. 1792-1802), as well as an excellent analysis of the study published as an editorial comment in the same issue (pp. 1848-1850). One theory about cancer growth is that cancer cells need cholesterol for cellular reproduction, so any drug that lowers cholesterol should provide a benefit. The study was done in Denmark, which has a remarkably homogeneous population and, because of its national health care system, excellent registries of cancer diagnoses and incidence, as well as deaths from cancer, all-cause mortality, and drug prescriptions. (The availability of such statistical data is one side benefit of national health care, if the computer programs are written properly.)
     The retrospective study reviewed data relating to medical care from 1995 to 2009, and encompassed almost 300,000 subjects. The hypothesis tested was that patients who were taking statins BEFORE the diagnosis of cancer would have a reduced cancer mortality. It was found that patients who were taking statins before the diagnosis of cancer had a hazard ratio for death from cancer of 0.85 compared with patients who had never taken statins, i.e. a 15% relative reduction, and the same reduction was found in mortality from any cause after the diagnosis of cancer. (The hazard ratio of 0.85 +/- 0.02 is a statistical concept, roughly the ratio of the two groups' death rates at any given moment.) One remarkable fact was that the 15% reduction in mortality risk was independent of the dose of the statin(!).
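To make the arithmetic concrete, here is a minimal sketch of how a hazard ratio translates into an approximate risk for statin users. The baseline risk below is a made-up illustrative number, not a figure from the study; the approximation (risk scales with the hazard ratio) is only reasonable for modest risks.

```python
# Illustrative only: what a hazard ratio of 0.85 means for a
# hypothetical baseline risk. The 20% baseline is invented for
# demonstration and does not come from the Danish study.

HAZARD_RATIO = 0.85  # statin users vs. non-users, as reported

def adjusted_risk(baseline_risk, hazard_ratio):
    """Approximate the treated group's risk for a modest baseline risk.

    For small-to-moderate risks, risk is roughly proportional to the
    hazard, so multiplying by the hazard ratio is a reasonable
    first approximation.
    """
    return baseline_risk * hazard_ratio

baseline = 0.20  # hypothetical 20% chance of dying of cancer over the study period
treated = adjusted_risk(baseline, HAZARD_RATIO)
print(f"Baseline risk: {baseline:.0%}, approximate risk with statins: {treated:.0%}")
```

Note that the same hazard ratio implies very different absolute benefits depending on the baseline risk, which is why the 15% figure by itself tells you little about any individual patient.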
     Several potentially confounding factors were not controlled for. No mention is made of cigarette smoking in the two groups, or of whether the patients had had cancer surgery. Also, no mention was made of over-the-counter aspirin or NSAID use, and we know of several studies linking such use to a reduced incidence of colon cancer.
     It is naturally suggested that a prospective clinical trial be conducted to test the hypothesis generated by this retrospective study, i.e. that statin use has a positive effect on cancer mortality as well as on all-cause mortality. I want to re-emphasize that this was a statistical analysis, and not a prospective double-blind study, so it would be tempting but scientifically incorrect to draw a clinical conclusion from the results. The problem I foresee is that no one will want to be in the placebo arm of the study, and risk not lowering his/her chance of dying. (This is in fact what happened when St. Vincent's Hospital in NYC tried to test drugs for AIDS with a placebo arm in the study. The political pressure was so great that the FDA granted the researchers permission to omit the placebo arm.) I predict that once the news of this study gets out to the general public, millions of people will start taking statins of their own accord (probably from Mexico, since almost every drug that is prescription-only in the U.S. is sold over the counter there).
     Isn't the motto of DuPont, the giant chemical company, "Better living through chemistry"?

Thursday, November 29, 2012

How to Interpret Medical Research Results

     As a former research physicist and editor and reviewer for physics journals, as well as a reviewer for 10 years for the Annals of Internal Medicine, I feel compelled to comment on the plethora of clinical medical studies being released fast and furiously. Research scientists understand the limitations inherent in clinical reports, whether published in a journal or presented verbally at a conference, but people who get their information from internet or newspaper summaries of the results do not. I will try to outline what to look for in these reports, and what level of trust should be placed in the reported results, without resorting to scientific and statistical jargon such as regression to the mean, Simpson's Paradox, and the Law of Large Numbers. Interested readers might want to read my two earlier blog posts on the fallacies inherent in using meta-analyses to suggest valid clinical treatments.

     One problem in reading newspaper summaries of published articles is that in their efforts to simplify the results and conclusions for their readers, columnists sometimes make interpretive errors. At the very least, the columnist should refer you to the National Library of Medicine's web page, where the abstract of the discussed article can be found. As a general rule, the abstract will contain the essential conclusions of the published journal article. There are a number of reputable newspaper columnists who are themselves practicing doctors, such as Danielle Ofri, Perri Klass, and Lisa Goldman, and I have always found their columns to be accurate. Without mentioning names, I have found errors in most articles written by science writers, especially when the interpretation of the published results is open to question, or ambiguous.

     You may take it as a general rule that if there is heated argument about the "correct" interpretation of a scientific result, then the clinical interpretation is not yet known with certainty. In physics we would say that if you have to argue about the interpretation of data points, then more data is needed.  And let us not forget the government's insidious modification of established facts, much as Stalin removed the faces of Politburo members from the Soviet Encyclopedia if they fell out of favor. The best example here in the U.S. deals with  the iconic picture of the abstract painter Jackson Pollock. This well-known photograph  showed him bent over his work with his ever-present cigarette dangling from his mouth. When the U.S. Postal Service reproduced this picture to use on a commemorative stamp, his cigarette was airbrushed out of the picture.

     For starters, please remember that the gold standard for the imprimatur of the probable accuracy of a scientific or clinical report is its publication in a journal article that has been reviewed by a scientist in the field. This has been shown to be the best way to make sure that there are no biases in the article, that the data is presented clearly and fairly, and that the conclusions follow from the data and are clinically significant.
Ideally, if any graphs are presented with data points, the data points should have error bars attached, showing the variation that could have been introduced by the imprecision of their measurement. Also, the authors of the article should report who paid for the research, and whether they have received speaking or consultation fees from the government, drug companies, or any other sources (which of course newspaper reporters and TV news commentators almost never report about themselves). In addition, if the authors speculate about theoretical and possible future applications of their clinical results, those ideas should be clearly labeled as speculations and suggestions.

    From what I have said, it should be clear that a verbal report given at a medical conference should be treated only as an interesting idea. All too often, initial verbal reports or announcements at news conferences do not withstand the rigor of being reviewed for journal publication, let alone the test of time. I'm sure we all remember the foofaraw connected with the report of table-top cold fusion. In fact, in this day and age of instant celebrity, I would expect any discoverer of an important clinical event to appear on the Oprah show (and this is meant as a compliment to Oprah, and not a denigration of her show). For this reason I take all medical news reports with a grain of salt until I can read about them in a refereed journal. In fact, The New England Journal of Medicine will often publish two similar studies in the same issue, and then have an editorial comment on why the two agree or disagree.

     And even reputable journal articles may not be correct. All too often one article is followed shortly (and sometimes in the same journal!) by a clinical research article coming to the opposite conclusion. To mention just a few open questions: is lowering salt in the diet of an average person beneficial or harmful? Do mammograms of women in their 40's save lives from breast cancer? Does treating prostate cancer save lives? And then we come to the famous conflict in presenting data between relative risk and absolute risk. If the death rate from lightning strikes is 1/1,000,000 and I can reduce it to 1/2,000,000, I can present this result either by saying that I reduced your relative risk of lightning-strike death by 50%, or that I reduced your absolute risk by 1/2,000,000. Most journal articles and reports speak of the benefits of a new medical treatment in terms of relative risk, because that number is always larger and sounds much more impressive. A 35% reduction in your chance of getting leprosy would probably not be of much interest or medical importance to you, because your absolute risk is so low. On the other hand, you might leap to take a new treatment that reduced your chance of getting a heart attack by 35%, but is that a 35% reduction over the next year, or a 35% reduction over the next 10 years, i.e. only about 3.5% per year? In other words, how long do you have to live to reap the quoted benefit of the proposed treatment?
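The lightning example can be worked out explicitly. This small sketch (using the illustrative numbers from the paragraph above) computes both ways of stating the very same benefit:

```python
# Relative vs. absolute risk reduction, using the lightning example
# from the text. The two numbers describe the same change in risk.

def risk_reductions(baseline_risk, treated_risk):
    """Return (absolute reduction, relative reduction) for a treatment."""
    absolute = baseline_risk - treated_risk
    relative = absolute / baseline_risk
    return absolute, relative

# Lightning-death risk reduced from 1 in 1,000,000 to 1 in 2,000,000.
arr, rrr = risk_reductions(1 / 1_000_000, 1 / 2_000_000)
print(f"Absolute risk reduction: about 1 in {round(1 / arr):,}")
print(f"Relative risk reduction: {rrr:.0%}")
```

A headline writer will almost always quote the 50% figure; the 1-in-2,000,000 figure is what actually matters to the reader.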

     Let me close with a statement attributed to Voltaire: No one argues over whether or not 7 x 8 = 56, but Giordano Bruno was burned at the stake for maintaining that the earth moved around the sun.