PakMediNet - Medical Information Gateway of Pakistan

Discussion Forum For Health Professionals


Topic Review - Newest First (only newest 5 are displayed)

rqayyum

Re: 95% Confidence Interval

The article linked by Dr Siddiqui provides strong support for performing a meta-analysis (as the authors themselves also point out). It shows that many studies are under-powered to reach a conclusive result, which generally happens because of an inadequate sample size.

There are multiple reasons for an inadequate sample size. Probably the most common is the unavailability of adequate information to calculate one. With inadequate information, if authors are too optimistic about the potential benefit of the treatment, they are likely to enroll too few patients. Sometimes, even when authors are fortunate enough to have a treatment effect size from a previous study, they may lack the variance (or event rate) of the particular population from which their study sample will be drawn. Thus, quite often, authors have to use their best judgment (in other words, guess) to calculate a sample size.
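To make the guessing concrete, here is a minimal sketch (standard normal-approximation formula, hypothetical numbers) of how the assumed effect size and variance drive the sample size, and how an optimistic effect-size guess shrinks the study:

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-arm comparison of means (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)**2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)          # desired power
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Assumed SD of 20 units and a realistic treatment effect of 10 units:
print(sample_size_per_group(delta=10, sigma=20))   # 63 per group
# Doubling the hoped-for effect (optimism) cuts the required n to a quarter:
print(sample_size_per_group(delta=20, sigma=20))   # 16 per group
```

Because the required n scales with (sigma/delta) squared, even a modestly optimistic effect-size guess produces a badly under-powered study.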

Occasionally, another reason for an inadequate study sample is that no one actually calculated a sample size, and the number of patients was determined by the availability of resources (such as time, expertise, and money). Whether this happens because the researcher is unfamiliar with sample-size calculations or for some other reason, the practice should be discouraged.

A meta-analysis can combine under-powered studies into a "mega-trial" and reach a conclusion (positive or negative). As Hunter and Schmidt point out in their book "Methods of Meta-analysis" (2nd ed.), no treatment has an effect size exactly equal to zero, which is how a p-value greater than 0.05 is interpreted (albeit wrongly). Rather, every treatment has at least some effect, however small, beneficial or harmful.
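As a rough illustration of the "mega-trial" idea, here is a sketch of fixed-effect (inverse-variance) pooling with made-up numbers: three small studies whose individual 95% CIs each cross zero combine into a pooled estimate whose CI does not.

```python
import math
from statistics import NormalDist

def fixed_effect_pool(estimates, std_errors):
    """Fixed-effect (inverse-variance) pooling of study estimates.
    Each study is weighted by 1/SE^2; the pooled SE shrinks as studies accumulate."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    z = NormalDist().inv_cdf(0.975)            # 95% confidence level
    return pooled, (pooled - z * pooled_se, pooled + z * pooled_se)

# Three hypothetical small studies, each individually non-significant
# (every single-study 95% CI crosses zero):
est, (lo, hi) = fixed_effect_pool([0.30, 0.25, 0.35], [0.20, 0.18, 0.22])
print(round(est, 3), round(lo, 3), round(hi, 3))  # pooled CI excludes zero
```

This is only the simplest pooling model; a real meta-analysis would also weigh study quality and heterogeneity (e.g., with a random-effects model), as Hunter and Schmidt discuss.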

asiddiqui

Re: Re: 95% Confidence Interval

You may also find this interesting...
http://bmj.bmjjournals.com/cgi/content/full/311/7003/485

rqayyum

Re: 95% Confidence Interval

I came across this interesting article which, although not directly related to the 95% confidence interval, emphasizes a somewhat related and interesting point. This is its summary:

"A common error in statistical analyses is to summarize comparisons by declarations of statistical significance or non-significance. There are a number of difficulties with this approach. First is the oft-cited dictum that statistical significance is not the same as practical significance. Another difficulty is that this dichotomization into significant and non-significant results encourages the dismissal of observed differences in favor of the usually less interesting null hypothesis of no difference.
Here, we focus on a less commonly noted problem, namely that changes in statistical significance are not themselves significant. By this, we are not merely making the commonplace observation that any particular threshold is arbitrary—for example, only a small change is required to move an estimate from a 5.1% significance level to 4.9%, thus moving it into statistical significance. Rather, we are pointing out that even large changes in significance levels can correspond to small, non-significant changes in the underlying variables. We illustrate with a theoretical and an applied example."

and here is a link to the full article (hardly 5 pages long).
http://www.stat.columbia.edu/~gelman/research/unpublished/signifrev.pdf
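A small numeric sketch of the abstract's point (my own illustrative numbers, in the spirit of the article's example): one estimate is comfortably "significant", another clearly is not, yet the difference between the two estimates is itself nowhere near significant.

```python
import math
from statistics import NormalDist

def two_sided_p(estimate, se):
    """Two-sided p-value from a normal-approximation z-test."""
    z = estimate / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p_a = two_sided_p(25, 10)  # study A: estimate 25, SE 10 -> "significant" (~0.01)
p_b = two_sided_p(10, 10)  # study B: estimate 10, SE 10 -> "non-significant" (~0.32)

# The comparison that matters: is A's estimate significantly larger than B's?
diff_se = math.sqrt(10 ** 2 + 10 ** 2)   # SE of the difference (independent studies)
p_diff = two_sided_p(25 - 10, diff_se)   # ~0.29: not significant
print(round(p_a, 3), round(p_b, 3), round(p_diff, 3))
```

So declaring "A worked, B did not" on the basis of the two p-values alone would be exactly the error the article warns against.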

[Edited by rqayyum on 06-04-2006 at 08:04 PM GMT]

rqayyum

Re: 95% Confidence Interval

I agree with Anwer completely (being a clinician, I don't dare disagree with a statistician on a statistical issue).

Albert Einstein did some wonderful things in physics, although I have yet to find any non-physics accomplishment of his. This quote of his has been cited many times; however, I have found it inapplicable in practical life (it is probably applicable only to teaching advanced courses). Every day, I have to make things simpler than they should be so that my patients can understand what they have. The other options are either to use medical jargon (which, I am sure, even a brilliant physicist or statistician would find difficult to understand) or to tell them nothing at all. I believe my job is to treat my patients, not to impress them. Therefore, with due respect to Albert Einstein, I beg to differ with this quote of his.

Probably the right thing to say is that an explanation should be simplified just enough to be understandable to the group it is directed at (docosama stayed confused even after a "wrong" and a correct definition). I think we need to do a better job of explaining this. And, by the way, Albert Einstein became famous because the media over-simplified his theories of relativity. If the media had followed Einstein's advice, Mr Anwer would have had to quote someone else.

[Edited by rqayyum on 05-30-2005 at 04:34 AM GMT]

anwer_khur

Re: Re: 95% Confidence Interval

What you have written just endorses my point. I think it's a good idea to give the confidence interval as well as the p-value. One may then interpret the result as one likes.

Sometimes a simple thing causes a lot of problems. We must remember what Albert Einstein said:
"Make everything as simple as possible, but not simpler."
Anwer

quote:
rqayyum wrote:
Thank you anwar_khur for providing a clear explanation. However, the way I tried to explain it was intentional: I wanted to make it as easy as possible to understand for someone with a very elementary grasp of biostatistics. In the search for clarity, however, I did compromise the exact definition. I would add that this is a rather common way of explaining CIs in non-statistical books. For example, in "How to Report Statistics in Medicine" (guidelines for authors, editors, and reviewers by the American College of Physicians), a CI is defined as "A confidence interval is the range of values, consistent with the data, that is believed to encompass the actual or 'true' population value."

Whether to use the p-value or the 95% CI is not controversial in the medical field. Please allow me to quote from the CONSORT guidelines (guidelines adopted for the reporting of randomized controlled trials by major medical journals): "17. For each primary and secondary outcome, a summary of results for each group and the estimated effect size and its precision (e.g., 95% confidence interval)." (Ref: Ann Intern Med. 2001;134:657-662.) Full details can be found on the CONSORT website; here is an excerpt:
"For all outcome measures, authors should provide a confidence interval to indicate the precision* (uncertainty) of the estimate. A 95% confidence interval is conventional, but occasionally other levels are used. Many journals require or strongly encourage the use of confidence intervals. They are especially valuable in relation to nonsignificant differences, for which they often indicate that the result does not rule out an important clinical difference. The use of confidence intervals has markedly increased in recent years, although not in all medical specialties. Although P values may be provided in addition to confidence intervals, results should not be reported solely as P values".

These guidelines have been adopted by many journals, including the Annals of Internal Medicine, BMJ, Lancet, JAMA, and NEJM. As docosama has shown with examples from NEJM (and one will find the same in JAMA, BMJ, Lancet, and the Annals), 95% CIs are given in almost every article, with or without a p-value.
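To illustrate the CONSORT point that a CI is "especially valuable in relation to nonsignificant differences", here is a sketch with hypothetical trial numbers: the result is non-significant (the CI crosses zero), yet the interval shows that a clinically important benefit has not been ruled out.

```python
import math
from statistics import NormalDist

def risk_difference_ci(events1, n1, events2, n2, level=0.95):
    """Wald confidence interval for a difference in proportions
    (normal approximation; fine for a sketch, not for small event counts)."""
    p1, p2 = events1 / n1, events2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical small trial: 12/50 events on treatment vs 18/50 on control.
diff, (lo, hi) = risk_difference_ci(12, 50, 18, 50)
print(round(diff, 2), round(lo, 2), round(hi, 2))
```

The point estimate is a 12-percentage-point absolute risk reduction, but the interval runs from roughly a 30-point reduction to a 6-point increase: reporting "p > 0.05" alone would hide how far the data are from excluding an important treatment effect.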

[Edited by rqayyum on 05-26-2005 at 02:50 AM GMT]

[Edited by rqayyum on 05-26-2005 at 03:00 AM GMT]