PakMediNet Discussion Forum : Biostatistics : 95% Confidence Interval
95% Confidence Interval
Although we are all much more familiar with the p-value, it is less useful than the 95% confidence interval (CI). The p-value does not tell us about the precision of the estimated clinical effect. Being clinicians, we are interested in the extent of the clinical effect, not in the p-value. A trial of a drug with a big sample size, in which the drug decreases blood pressure by 0.5 mm Hg, may give a small p-value; however, as clinicians, we may not like to prescribe such a poorly effective drug to our patients. On the other hand, another trial of the same drug with a small sample size may show a change in blood pressure of 15 mm Hg with a significant p-value just by chance, due to sampling error. A 95% CI, however, gives a better estimate of where the population mean clinical effect lies. Basically, the 95% CI tells us the precision of the effect-size estimate, which the p-value cannot. Therefore, everyone should try to report 95% CIs in manuscripts, with or without p-values.
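To make the contrast concrete, here is a minimal simulation sketch (my own illustration, not from the original post). The sample sizes, the 0.5 mm Hg effect, and the 15 mm Hg standard deviation are all assumptions:

```python
# Minimal sketch: a huge trial makes a trivial effect "significant", while
# the CI exposes how small the effect is; a tiny trial can show a large
# difference by sampling error alone. All numbers here are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def trial(n, true_drop_mmhg):
    """Simulate per-arm change in systolic BP (mm Hg); negative = drop."""
    drug = rng.normal(-true_drop_mmhg, 15.0, n)     # SD of 15 mm Hg assumed
    placebo = rng.normal(0.0, 15.0, n)
    _, p = stats.ttest_ind(drug, placebo)
    diff = drug.mean() - placebo.mean()
    se = np.sqrt(drug.var(ddof=1) / n + placebo.var(ddof=1) / n)
    return diff, (diff - 1.96 * se, diff + 1.96 * se), p

print(trial(50_000, 0.5))  # small p, but CI hugs a useless ~0.5 mm Hg drop
print(trial(10, 0.5))      # tiny trial: large differences can arise by luck
```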
Posted by: rqayyumPosts: 199 :: 22-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
You are absolutely right, Rqayyum. Can you tell us what is meant by the values given in the following article (an example): (95% CI, 0.86 to 1.35)?
Higher cystatin C levels were directly associated, in a dose–response manner, with a higher risk of death from all causes. As compared with the first quintile, the hazard ratios (and 95 percent confidence intervals) for death were as follows: second quintile, 1.08 (0.86 to 1.35); third quintile, 1.23 (1.00 to 1.53); fourth quintile, 1.34 (1.09 to 1.66); quintile 5a, 1.77 (1.34 to 2.26); 5b, 2.18 (1.72 to 2.78); and 5c, 2.58 (2.03 to 3.27).
Thanks
Posted by: docosamaPosts: 333 :: 23-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
In every clinical study we try to estimate some population parameter along with its dispersion. Ideally, we would examine every individual in the population under consideration and determine the parameter of interest. However, most of the time this is not possible. Therefore, we take a sample from the population and determine the sample statistic as an estimate of the population parameter. It is important to understand that this is an estimate, not the actual population parameter. We then try to determine the interval around the observed sample statistic in which we think the actual population parameter will be found. When we say 95% confidence interval (CI), we are actually saying that we are 95% confident that actual population parameter will lie in this interval. Not only does the CI give us a range in which we expect to find the actual population parameter, it can also tell us about statistical significance. In the case of a hazard ratio (HR), if the CI includes 1, the result is not significant.
Let me give an example (from the May 19, 2005 issue of the NEJM), the same article that you quoted. As compared with the first quintile, the HR for death in the second quintile is 1.08 (95% CI, 0.86 to 1.35). It tells us two things. First, the HR is not statistically significant, as the CI includes 1. Second, although the result is not statistically significant, the actual population HR can be anywhere between 0.86 at one extreme and 1.35 at the other. In other words, it also tells us that although the investigators were unable to detect a statistically significant difference between the 1st and 2nd quintiles, they may have made a type II error in accepting the null hypothesis, and the actual population HR may be as high as 1.35 (although no one will say this in their manuscript). Now let's look at the HR of the fourth quintile, which is 1.34 (95% CI, 1.09 to 1.66). It tells us that the HR is statistically significant (the CI does not include 1) and the investigators are 95% confident that the actual population HR is somewhere between 1.09 and 1.66.
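A minimal sketch of the reading rule used in these two examples (my own illustration, not from the post): for a ratio measure such as an HR, significance at the 5% level can be read off from whether the 95% CI excludes 1.

```python
def significant_from_ci(lo, hi, null_value=1.0):
    """True when the 95% CI excludes the null value (1 for ratio measures)."""
    return not (lo <= null_value <= hi)

# CIs from the quoted cystatin C article:
print(significant_from_ci(0.86, 1.35))  # 2nd quintile, HR 1.08 -> False
print(significant_from_ci(1.09, 1.66))  # 4th quintile, HR 1.34 -> True
```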
I hope I gave you the answer to the question you asked.
Posted by: rqayyumPosts: 199 :: 25-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
I think it is just a waste of time to discuss the pros and cons of confidence intervals and p-values. A considerable literature in statistics as well as biostatistics discusses this, and I don't want to get involved in that debate here. But I would like to address one important misconception.
rqayyum wrote "When we say 95% confidence interval (CI), we are actually saying that we are 95% confident that actual population parameter will lie in this interval"
This is WRONG.
The confidence interval (CI) is the range of values above and below the point estimate that is likely to include the true value of the treatment effect. A (1 - alpha) CI for an unknown parameter, say mu, is an interval computed from the sample data having the property that, in REPEATED SAMPLING, 100(1 - alpha)% of the intervals obtained will contain the value mu. So a 95% CI implies that in repeated sampling, 95% of the intervals would be expected to contain the true parameter value. The use of CIs assumes that a study provides one sample of observations out of the many possible samples that would be obtained if the study were repeated many times.
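The repeated-sampling definition is easy to verify by simulation. Here is a minimal sketch (my own illustration; the normal population, its parameters, and the sample size are arbitrary choices):

```python
# Draw many samples from a population with known mean mu, build a 95% CI
# from each, and count how often the intervals contain mu.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n, reps = 100.0, 15.0, 30, 10_000

hits = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    hits += lo <= mu <= hi

print(hits / reps)  # close to 0.95: about 95% of the intervals cover mu
```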
I hope it will clear some confusion.
Anwer Khurshid
Posted by: anwer_khurPosts: 30 :: 25-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
I am still confused. Let me quote another example: (NEJM: 2004;351(17):1741-51)
A total of 545 patients were randomly assigned to groups that received either dexamethasone (274 patients) or placebo (271 patients). Only 10 patients (1.8 percent) had been lost to follow-up at nine months of treatment. Treatment with dexamethasone was associated with a reduced risk of death (relative risk, 0.69; 95 percent confidence interval, 0.52 to 0.92; P=0.01). It was not associated with a significant reduction in the proportion of severely disabled patients (34 of 187 patients [18.2 percent] among survivors in the dexamethasone group vs. 22 of 159 patients [13.8 percent] in the placebo group, P=0.27) or in the proportion of patients who had either died or were severely disabled after nine months (odds ratio, 0.81; 95 percent confidence interval, 0.58 to 1.13; P=0.22). The treatment effect was consistent across subgroups that were defined by disease-severity grade (stratified relative risk of death, 0.68; 95 percent confidence interval, 0.52 to 0.91; P=0.007) and by HIV status (stratified relative risk of death, 0.78; 95 percent confidence interval, 0.59 to 1.04; P=0.08). Significantly fewer serious adverse events occurred in the dexamethasone group than in the placebo group (26 of 274 patients vs. 45 of 271 patients, P=0.02).
In the sentence "Treatment with dexamethasone was associated with a reduced risk of death (relative risk, 0.69; 95 percent confidence interval, 0.52 to 0.92; P=0.01)", where do the values 0.52 and 0.92 come from? Are these the relative risks of the two groups?
Posted by: docosamaPosts: 333 :: 25-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
Thank you, anwer_khur, for providing a clear explanation. However, the way I tried to explain it was intentional. I wanted to make it as easy to understand as possible for someone with a very elementary understanding of biostatistics, and in the pursuit of simplicity I did compromise on the exact definition. I would like to say that this is a rather common way of explaining the CI in non-statistical books. For example, in "How to Report Statistics in Medicine", the guidelines for authors, editors, and reviewers by the American College of Physicians, the CI is defined as "the range of values, consistent with the data, that is believed to encompass the actual or 'true' population value."
Whether to use the p-value or the 95% CI is not controversial in the medical field. Please allow me to quote from the CONSORT guidelines (adopted for the reporting of randomized controlled trials by major medical journals): "17. For each primary and secondary outcome, a summary of results for each group and the estimated effect size and its precision (e.g., 95% confidence interval)." (ref: Ann Intern Med. 2001;134:657-662). Full detail can be seen on the CONSORT website; here is an excerpt:
"For all outcome measures, authors should provide a confidence interval to indicate the precision* (uncertainty) of the estimate. A 95% confidence interval is conventional, but occasionally other levels are used. Many journals require or strongly encourage the use of confidence intervals. They are especially valuable in relation to nonsignificant differences, for which they often indicate that the result does not rule out an important clinical difference. The use of confidence intervals has markedly increased in recent years, although not in all medical specialties. Although P values may be provided in addition to confidence intervals, results should not be reported solely as P values".
These guidelines have been adopted by many journals, including the Annals of Internal Medicine, BMJ, Lancet, JAMA, and NEJM. As docosama's examples from the NEJM show (and one will find the same in JAMA, BMJ, Lancet, and the Annals), 95% CIs are given in almost every article, with or without p-values.
Posted by: rqayyumPosts: 199 :: 26-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
I will let anwer_khur give the statistical explanation of the question posed by docosama; I will give a very simplistic answer, which I hope will make the CI easy to understand. Assuming a normal distribution, multiply the standard error (SE) by 2 (the actual multiplier varies depending on the distribution used, but 2 is a good approximation if you are not reporting the result in an article yourself). Then subtract this 2 x SE from the statistic; this gives you the lower limit of the CI. Add 2 x SE to the statistic, and this gives you the upper limit. Recall: approximately 95% of the values in a normal distribution lie within 2 standard deviations of the mean.
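For a relative risk, this same recipe is applied on the log scale (where the sampling distribution is approximately normal) and then exponentiated back. Here is a minimal sketch; the event counts are hypothetical, chosen only to land near the quoted RR of 0.69, since the post does not give the trial's raw numbers:

```python
import math

def rr_ci(a, n1, b, n2, z=1.96):
    """Approximate 95% CI for RR given a/n1 events (drug) vs b/n2 (placebo)."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# HYPOTHETICAL counts, not the trial's actual data:
print(rr_ci(a=78, n1=274, b=112, n2=271))  # roughly RR 0.69 (0.55 to 0.87)
```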
Posted by: rqayyumPosts: 199 :: 26-05-2005 :: | Reply to this Message
Re: Re: 95% Confidence Interval
What you have written just endorses my point. I think it's a good idea to give the confidence interval as well as the p-value; one may then interpret the result as one likes.
Sometimes a simple thing causes a lot of problems. We must remember what Albert Einstein said:
"Make everything as simple as possible, but not simpler."
Anwer
quote:
rqayyum wrote:
Thank you, anwer_khur, for providing a clear explanation. [...]
Posted by: anwer_khurPosts: 30 :: 28-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
I agree with Anwer completely (being a clinician, I can't dare to disagree with a statistician on a statistical issue).
Albert Einstein did some wonderful things in physics, although I have yet to find any non-physics accomplishment of his. This quote of his has been cited many times; however, I have found it inapplicable in practical life (it is probably applicable only in teaching advanced courses). Every day, I have to make things simpler than they should be so that my patients can understand what they have. The other options are either to use medical jargon (which, I am sure, even a brilliant physicist or statistician would find difficult to understand) or not to tell them anything at all. I believe my job is to treat my patients, not to impress them. Therefore, with due respect to Albert Einstein, I beg to differ with this quote of his.
Probably the right thing to say is that an explanation should be simplified just enough to make it understandable to the group it is directed at (docosama stayed confused even after a "wrong" and a correct definition, so I think we need to do a better job of explaining this). And, by the way, Albert Einstein became famous partly because the media over-simplified his theories of relativity; if the media had followed Einstein's advice, Mr Anwer would have had to quote someone else.
Posted by: rqayyumPosts: 199 :: 29-05-2005 :: | Reply to this Message
Re: 95% Confidence Interval
I came across this interesting article which, although not directly related to the 95% confidence interval, emphasizes a somewhat related and interesting point. This is its summary:
"A common error in statistical analyses is to summarize comparisons by declarations of statistical significance or non-significance. There are a number of difficulties with this approach. First is the oft-cited dictum that statistical significance is not the same as practical significance. Another difficulty is that this dichotomization into significant and non-significant results encourages the dismissal of observed differences in favor of the usually less interesting null hypothesis of no difference.
Here, we focus on a less commonly noted problem, namely that changes in statistical significance are not themselves significant. By this, we are not merely making the commonplace observation that any particular threshold is arbitrary—for example, only a small change is required to move an estimate from a 5.1% significance level to 4.9%, thus moving it into statistical significance. Rather, we are pointing out that even large changes in significance levels can correspond to small, non-significant changes in the underlying variables. We illustrate with a theoretical and an applied example."
and here is a link to the full article (barely 5 pages long):
http://www.stat.columbia.edu/~gelman/research/unpublished/signifrev.pdf
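The paper's central point is easy to reproduce numerically. Here is a minimal sketch in the spirit of its theoretical example (the numbers are assumed here): two independent estimates of 25 and 10, each with standard error 10, so one is "significant" and the other is not, yet their difference is nowhere near significant.

```python
import math

est1, se1 = 25.0, 10.0   # z = 2.5 -> p < 0.05 ("significant")
est2, se2 = 10.0, 10.0   # z = 1.0 -> p > 0.05 ("not significant")

diff = est1 - est2
se_diff = math.sqrt(se1**2 + se2**2)   # assumes independent estimates
print(diff / se_diff)  # z ~ 1.06: the change in significance is itself
                       # not statistically significant
```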
Posted by: rqayyumPosts: 199 :: 06-04-2006 :: | Reply to this Message
Re: Re: 95% Confidence Interval
You may also find this interesting...
http://bmj.bmjjournals.com/cgi/content/full/311/7003/485
Posted by: asiddiquiPosts: 26 :: 25-05-2006 :: | Reply to this Message
Re: 95% Confidence Interval
The article linked by Dr Siddiqui provides strong support for performing meta-analyses (as the authors of the article also point out). It basically shows that many studies are underpowered to reach a conclusive result, which generally happens because of an inadequate sample size.
There are multiple reasons for an inadequate sample size. Probably the most common is the unavailability of adequate information with which to calculate the sample size. With inadequate information, authors who are too optimistic about the potential benefit of the treatment are likely to enroll too few patients. Sometimes, authors fortunate enough to have a treatment effect size from a previous study still lack the variance (or event rate) of the particular population from which their study sample will be drawn. Thus, quite often, authors have to use their best judgment (in other words, guess) to calculate a sample size.
Occasionally, the reason for an inadequate study sample is that no one actually calculated a sample size, and the number of patients was determined by the availability of resources (such as time, expertise, and money). Whether this is due to the researcher's unfamiliarity with sample-size calculations or to some other reason, it should be discouraged.
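For reference, here is a minimal sketch of the calculation that too often gets skipped: the usual normal-approximation sample size per group for comparing two proportions. The event rates, alpha, and power are assumptions chosen purely for illustration:

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Sample size per group, two-sided alpha = 0.05, power = 0.80."""
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.40, 0.30))  # ~356 per group under these assumptions
```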
A meta-analysis can combine the underpowered studies into a "mega-trial" and reach a conclusion (positive or negative); a sketch of the pooling machinery follows below. As Hunter and Schmidt point out in their book "Methods of Meta-analysis" (2nd ed.), no treatment can have an effect size exactly equal to zero, which is how a p-value greater than 0.05 is often (wrongly) interpreted; instead, every treatment must have at least some beneficial or harmful effect, no matter how small.
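Here is a minimal sketch of the basic machinery, fixed-effect inverse-variance pooling, with made-up log relative risks from three underpowered studies; each study alone misses p < 0.05, while the pooled estimate does not:

```python
import math

# HYPOTHETICAL study results on the log(RR) scale:
log_rrs = [-0.35, -0.40, -0.30]   # each |estimate/SE| < 1.96 on its own
ses     = [ 0.25,  0.30,  0.22]

weights = [1 / se**2 for se in ses]            # inverse-variance weights
pooled  = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
se_pool = math.sqrt(1 / sum(weights))

lo, hi = pooled - 1.96 * se_pool, pooled + 1.96 * se_pool
# Pooled RR ~ 0.71 (0.54 to 0.95): significant, unlike any single study.
print(math.exp(pooled), math.exp(lo), math.exp(hi))
```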
Posted by: rqayyumPosts: 199 :: 25-05-2006 :: | Reply to this Message