On Sat, 19 Jan 2019 02:17:02 -0800 (PST), Cosine <asecant@gmail.com> wrote:
Cosine wrote on 3 Jan 2019 at 10:58:31 UTC+8:
Hi:
Is it possible to simultaneously increase both sensitivity and specificity?
Why and why not?
Furthermore, under what conditions could we reach this goal, and under what conditions could we not?
Thanks,
* * * my reply, to which Cosine is responding in his followup.
< (Assuming the right versions of sensitivity and specificity.)
Sure. The combination measures "reliability".
We are pleased when we can discover or devise a scale
with better reliability.
But if you are speaking of a single, monotonic ROC scale,
a change of cutoff can never "improve" both. >
(also a Q about whether this is homework)
* * *
No, this is from someone doing self-study in statistics, not a student at a university, college, or high school.
What do you mean by reliability? From Wikipedia, I do not see the relationship between reliability and sensitivity/specificity.
Would you explain more on this?
David Duffy gives a Wiki citation, which might be useful. It gives
some more background.
However, in that article, the word "reliability" occurs only once,
so it might not answer your problem. I will say some more.
"Reliability" of a test tells how well it can be reproduced - two
raters, two assays, whatever. An inconsistent test is not a
good one. A test that is perfectly reproducible has the chance
to be good, but it (presumably) is indirect, and can still misinform.
"Reliability" puts a ceiling on how good the "validity" can be,
where the validity tells how well it represents the reality
(disease, say, where diagnosis is abbreviated Dx). - If the test
can't predict itself very well, it certainly can't predict the Dx.
But predicting itself does not prove that it does predict the Dx.
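As a side note (this is the classical test theory attenuation bound, which I am adding as an illustration, not something from the post itself): under the usual assumptions, the validity correlation r_xy cannot exceed sqrt(r_xx * r_yy), where r_xx and r_yy are the reliabilities of the predictor and the criterion. A quick sketch:

```python
import math

# Attenuation bound from classical test theory (illustrative, hedged):
# the observed validity correlation r_xy is at most sqrt(r_xx * r_yy).
def max_validity(r_xx, r_yy=1.0):
    """Upper bound on the validity coefficient given the reliabilities."""
    return math.sqrt(r_xx * r_yy)

# A test with reliability 0.81 can correlate at most 0.9
# with a perfectly-measured criterion.
print(max_validity(0.81))  # 0.9
```

So even a modest drop in reliability lowers the ceiling on how valid the test can possibly be.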
The computation of reliability and validity may be the same sort
of 2x2 table. For reliability, you compare Dx with Dx where either
may be wrong, for two raters or assays or whatever; for validity,
you compare Dx with Dx, where one is the "true score."
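To make the validity version of that 2x2 table concrete, here is a minimal sketch with made-up counts (the numbers are purely illustrative):

```python
# Hypothetical 2x2 table: test result (rows) vs. true Dx (columns).
tp, fp = 80, 10   # test positive: true positives, false positives
fn, tn = 20, 90   # test negative: false negatives, true negatives

sensitivity = tp / (tp + fn)   # P(test+ | disease present)
specificity = tn / (tn + fp)   # P(test- | disease absent)

print(sensitivity, specificity)  # 0.8 0.9
```

The reliability version is the same arithmetic, except both margins come from fallible raters or assays rather than one of them being the "true score."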
If your predictor is a dichotomy, you have a single score for
sensitivity and specificity as measures of validity. If your
predictor is a continuous score, for which you might select
any cutoff, you can draw a ROC curve, which shows the trade-off
between sensitivity and specificity as the cutoff is increased.
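That trade-off along a single ROC curve can be seen by sweeping the cutoff over a continuous score. This sketch uses made-up scores and labels; with a fixed, monotone score, raising the cutoff lowers sensitivity while raising specificity:

```python
# Illustrative data: continuous scores and true disease labels (1 = diseased).
scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,    1,   1,   1  ]

def sens_spec(cutoff):
    """Sensitivity and specificity when 'score >= cutoff' calls disease."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <  cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <  cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# As the cutoff rises, sensitivity falls and specificity rises.
for c in (0.3, 0.5, 0.7):
    print(c, sens_spec(c))
```

Each cutoff is one point on the ROC curve; moving along the curve trades one measure for the other, which is why no single cutoff change can improve both.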
Whether you start with a dichotomy or create one from a
continuous score, improving the reliability of a particular test
offers the chance of improving both sensitivity and specificity at
the same time, since you get to draw a new ROC curve.
Note this: a totally different test might have higher "validity"
even though it has less "reliability" -- say, a perfectly-reproducible
test that does not measure the disease very well. So, if your choices
include tests that are not revisions of the original, my first
response was wrong. What will do the job is a test with higher
validity rather than higher reliability/reproducibility.
Hope this helps.
--
Rich Ulrich