Sensitivity and specificity
Revision as of 15:46, 22 September 2008
The sensitivity and specificity of diagnostic tests are grounded in Bayes' theorem and defined as "measures for assessing the results of diagnostic and screening tests. Sensitivity represents the proportion of truly diseased persons in a screened population who are identified as being diseased by the test. It is a measure of the probability of correctly diagnosing a condition. Specificity is the proportion of truly nondiseased persons who are so identified by the screening test. It is a measure of the probability of correctly identifying a nondiseased person. (From Last, Dictionary of Epidemiology, 2d ed)."[1]
Successful application of sensitivity and specificity is an important part of practicing evidence-based medicine.
Calculations
|  |  | Disease present | Disease absent |  |
| --- | --- | --- | --- | --- |
| Test result | Positive | Cell A | Cell B | Total with a positive test |
| Test result | Negative | Cell C | Cell D | Total with a negative test |
|  |  | Total with disease | Total without disease |  |
Sensitivity and specificity
In terms of the cells of the table above, sensitivity = A / (A + C), the proportion of diseased patients with a positive test, and specificity = D / (B + D), the proportion of nondiseased patients with a negative test.
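The cell layout of the 2×2 table above translates directly into code. A minimal Python sketch (the cell counts here are hypothetical, chosen only for illustration):

```python
# 2x2 table cells, following the table above:
# A = diseased with positive test, B = nondiseased with positive test,
# C = diseased with negative test, D = nondiseased with negative test.
A, B, C, D = 90, 30, 10, 870  # hypothetical counts

sensitivity = A / (A + C)  # proportion of diseased correctly identified
specificity = D / (B + D)  # proportion of nondiseased correctly identified

print(sensitivity)  # 0.9
print(specificity)  # ~0.967
```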
Predictive value of tests
The predictive values of diagnostic tests are defined as "in screening and diagnostic tests, the probability that a person with a positive test is a true positive (i.e., has the disease), is referred to as the predictive value of a positive test; whereas, the predictive value of a negative test is the probability that the person with a negative test does not have the disease. Predictive value is related to the sensitivity and specificity of the test."[2]
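The relation between predictive values, sensitivity, specificity, and prevalence can be made concrete with a short sketch (cell counts hypothetical). Computing the positive predictive value directly from the table and again via Bayes' theorem gives the same answer:

```python
# Hypothetical 2x2 cell counts: A, B = positive tests (diseased, nondiseased);
# C, D = negative tests (diseased, nondiseased).
A, B, C, D = 90, 30, 10, 870

ppv = A / (A + B)  # probability of disease given a positive test
npv = D / (C + D)  # probability of no disease given a negative test

# Equivalently, via Bayes' theorem from sensitivity, specificity, prevalence:
sens = A / (A + C)
spec = D / (B + D)
prev = (A + C) / (A + B + C + D)
ppv_bayes = sens * prev / (sens * prev + (1 - spec) * (1 - prev))

print(ppv)        # 0.75
print(ppv_bayes)  # 0.75 (agrees with the direct calculation)
```

Because prevalence enters the Bayes form, the same test yields very different predictive values in high- and low-prevalence populations.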
Summary statistics for diagnostic ability
While simply reporting the accuracy of a test seems intuitive, accuracy is heavily influenced by the prevalence of disease.[3] For example, if the disease occurs with a frequency of one in one thousand, then simply guessing that no patient has the disease will yield an accuracy of over 99%.
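The one-in-one-thousand example works out as follows (population size hypothetical):

```python
# With disease prevalence of 1 in 1,000, a "test" that always reports
# "no disease" is correct for every nondiseased patient:
n, diseased = 100_000, 100        # hypothetical cohort, 1-in-1,000 prevalence
accuracy = (n - diseased) / n     # all negatives called correctly

print(accuracy)  # 0.999
```

Despite 99.9% accuracy, this "test" has a sensitivity of zero: it misses every diseased patient, which is why accuracy alone is a poor summary.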
Area under the ROC curve
The area under the ROC curve, also called the c-index, has been proposed as a summary statistic. The c-index varies from 0 to 1, and a result of 0.5 indicates that the diagnostic test does not add to guessing.[4] Variations have been proposed.[5][6]
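Hanley and McNeil interpret the c-index as the probability that a randomly chosen diseased patient has a higher test value than a randomly chosen nondiseased patient. A brute-force sketch of that rank-based (Mann-Whitney) form, with hypothetical score lists:

```python
def c_index(diseased_scores, nondiseased_scores):
    """AUC as the probability that a diseased score exceeds a
    nondiseased score; ties count as half a win."""
    wins = 0.0
    for d in diseased_scores:
        for n in nondiseased_scores:
            if d > n:
                wins += 1.0
            elif d == n:
                wins += 0.5
    return wins / (len(diseased_scores) * len(nondiseased_scores))

# Identical score distributions give no discrimination:
print(c_index([1, 2, 3], [1, 2, 3]))  # 0.5
# Perfectly separated scores give the maximum:
print(c_index([4, 5], [1, 2]))        # 1.0
```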
Sum of sensitivity and specificity
A simpler metric is the sum of sensitivity and specificity, called S+T.[7] It varies from 0 to 2, and a result of 1 indicates that the diagnostic test does not add to guessing.
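Why 1 marks the guessing baseline: a "test" that ignores the patient and calls positive with some fixed probability p has sensitivity p and specificity 1 − p, so its S+T is 1 regardless of p. A quick check:

```python
# Random "positive with probability p" testing: sensitivity = p,
# specificity = 1 - p, so S + T = 1 at every p.
for p in (0.1, 0.5, 0.9):
    sens, spec = p, 1 - p
    s_plus_t = sens + spec
    assert abs(s_plus_t - 1.0) < 1e-12
```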
Proportionate reduction in uncertainty score
The proportionate reduction in uncertainty score (PRU) has been proposed.[8]
Threats to validity of calculations
Various biases incurred during the study and analysis of diagnostic tests can affect the validity of the calculations. An example is spectrum bias.
Poorly designed studies may overestimate the accuracy of a diagnostic test.[9]
References
- ↑ National Library of Medicine. Sensitivity and specificity. Retrieved on 2007-12-09.
- ↑ National Library of Medicine. Predictive value of tests. Retrieved on 2007-12-09.
- ↑ Harrell FE, Califf RM, Pryor DB, Lee KL, Rosati RA (May 1982). "Evaluating the yield of medical tests". JAMA 247 (18): 2543–6. PMID 7069920.
- ↑ Hanley JA, McNeil BJ (April 1982). "The meaning and use of the area under a receiver operating characteristic (ROC) curve". Radiology 143 (1): 29–36. PMID 7063747.
- ↑ Walter SD (July 2005). "The partial area under the summary ROC curve". Stat Med 24 (13): 2025–40. DOI:10.1002/sim.2103. PMID 15900606.
- ↑ Bangdiwala SI, Haedo AS, Natal ML, Villaveces A (September 2008). "The agreement chart as an alternative to the receiver-operating characteristic curve for diagnostic tests". J Clin Epidemiol 61 (9): 866–74. DOI:10.1016/j.jclinepi.2008.04.002. PMID 18687288.
- ↑ Connell FA, Koepsell TD (May 1985). "Measures of gain in certainty from a diagnostic test". Am. J. Epidemiol. 121 (5): 744–53. PMID 4014166.
- ↑ Coulthard MG (May 2007). "Quantifying how tests reduce diagnostic uncertainty". Arch. Dis. Child. 92 (5): 404–8. DOI:10.1136/adc.2006.111633. PMID 17158858.
- ↑ Lijmer JG, Mol BW, Heisterkamp S, et al (September 1999). "Empirical evidence of design-related bias in studies of diagnostic tests". JAMA 282 (11): 1061–6. PMID 10493205.