• increasing sensitivity and specificity simultaneously

    From Cosine@21:1/5 to All on Wed Jan 2 18:58:28 2019
    Hi:

    Is it possible to increase both sensitivity and specificity simultaneously?

    Why and why not?

    Furthermore, under what conditions could we reach this goal, and under what conditions could we not?

    Thanks,

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Ulrich@21:1/5 to All on Fri Jan 4 12:47:55 2019
    On Wed, 2 Jan 2019 18:58:28 -0800 (PST), Cosine <asecant@gmail.com>
    wrote:

    Hi:

    Is it possible to increase both sensitivity and specificity simultaneously?


    (Assuming the right versions of sensitivity and specificity.)

    Sure. The combination measures "reliability".
    We are pleased when we can discover or devise a scale
    with better reliability.

    But if you are speaking of a single, monotonic ROC scale,
    a change of cutoff can never "improve" both.
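
    A quick numeric sketch of that point (invented, normally distributed
    scores, not from the thread): sweeping the cutoff on a single score
    raises specificity while sensitivity falls.

      import numpy as np

      rng = np.random.default_rng(0)
      healthy = rng.normal(0.0, 1.0, 10000)   # hypothetical negative class
      diseased = rng.normal(1.5, 1.0, 10000)  # hypothetical positive class

      # On one monotonic scale, the two rates move in opposite directions.
      for cutoff in (0.5, 1.0, 1.5):
          sens = (diseased >= cutoff).mean()  # true positive rate
          spec = (healthy < cutoff).mean()    # true negative rate
          print("cutoff %.1f: sensitivity %.3f, specificity %.3f"
                % (cutoff, sens, spec))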


    Why and why not?

    Furthermore, under what conditions could we reach this goal, and under what conditions could we not?


    If this is homework, the followup questions seem oddly worded,
    given my perspective.


    --
    Rich Ulrich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Cosine@21:1/5 to All on Sat Jan 19 02:17:02 2019
    On Thursday, January 3, 2019 at 10:58:31 AM UTC+8, Cosine wrote:
    Hi:

    Is it possible to increase both sensitivity and specificity simultaneously?

    Why and why not?

    Furthermore, under what conditions could we reach this goal, and under what conditions could we not?

    Thanks,

    No, this is from someone doing self-study in statistics; this person is not enrolled at a university, college, or high school.

    What do you mean by reliability? From Wikipedia, I do not see the relationship between reliability and sensitivity/specificity.

    Would you explain more on this?

    Thanks,

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Duffy@21:1/5 to Cosine on Sat Jan 19 23:44:34 2019
    Cosine <asecant@gmail.com> wrote:

    What do you mean by reliability? From Wikipedia, I do not see the
    relationship between reliability and sensitivity/specificity.

    See https://en.wikipedia.org/wiki/Youden%27s_J_statistic
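
    In code, Youden's J is just sensitivity + specificity - 1, computed
    from a 2x2 table. A minimal Python sketch with invented counts:

      # Hypothetical confusion-table counts (made up for illustration).
      tp, fn, fp, tn = 80, 20, 10, 90

      sensitivity = tp / (tp + fn)        # 80/100 = 0.80
      specificity = tn / (tn + fp)        # 90/100 = 0.90
      J = sensitivity + specificity - 1   # Youden's J = 0.70
      print(J)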

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Rich Ulrich@21:1/5 to All on Sun Jan 20 13:57:17 2019
    On Sat, 19 Jan 2019 02:17:02 -0800 (PST), Cosine <asecant@gmail.com>
    wrote:

    On Thursday, January 3, 2019 at 10:58:31 AM UTC+8, Cosine wrote:
    Hi:

    Is it possible to increase both sensitivity and specificity simultaneously?
    Why and why not?

    Furthermore, under what conditions could we reach this goal, and under what conditions could we not?

    Thanks,

    * * * my Reply, to which Cosine is responding in followup.
    (Assuming the right versions of sensitivity and specificity.)

    Sure. The combination measures "reliability".
    We are pleased when we can discover or devise a scale
    with better reliability.

    But if you are speaking of a single, monotonic ROC scale,
    a change of cutoff can never "improve" both.
    (also a Q about whether this is homework)

    * * *


    No, this is from someone doing self-study in statistics; this person is not enrolled at a university, college, or high school.

    What do you mean by reliability? From Wikipedia, I do not see the relationship between reliability and sensitivity/specificity.

    Would you explain more on this?

    David Duffy gives a Wiki citation, which might be useful. It gives
    some more background.

    However, in that article, the word "reliability" occurs only once,
    so it might not answer your problem. I will say some more.

    "Reliability" of a test tells how well it can be reproduced - two
    raters, two assays, whatever. An inconsistent test is not a
    good one. A test that is perfectly reproducible has the chance
    to be good, but it (presumably) is indirect, and can still misinform.

    "Reliability" puts a ceiling on how good the "validity" can be,
    where the validity tells how well it represents the reality
    (disease, say, where diagnosis is abbreviated Dx). - If the test
    can't predict itself very well, it certainly can't predict the Dx.
    But predicting itself does not prove that it does predict the Dx.

    The computation of reliability and validity may use the same sort
    of 2x2 table. For reliability, you compare Dx with Dx where either
    may be wrong, for two raters or assays or whatever; for validity,
    you compare Dx with Dx, where one is the "true score."
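
    In the correlational version of this, the classical attenuation result
    bounds validity by the square root of reliability, which formalizes the
    ceiling above. For the 2x2 version, the arithmetic of the two tables is
    literally the same; only the meaning of the margins differs. A sketch
    with invented counts:

      # Percent agreement from a 2x2 table; all counts are invented.
      def agreement(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
          total = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
          return (both_pos + both_neg) / total

      # Reliability: rater A's Dx against rater B's Dx (either may be wrong).
      print(agreement(40, 10, 8, 42))   # 0.82

      # Validity: the test's Dx against the gold-standard "true score".
      print(agreement(45, 5, 12, 38))   # 0.83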

    If your predictor is a dichotomy, you have a single score for
    sensitivity and specificity as measures of validity. If your
    predictor is a continuous score, for which you might select
    any cutoff, you can draw a ROC curve, which shows the trade-off
    between sensitivity and specificity as the cutoff is increased.
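
    One way to trace that trade-off, assuming scikit-learn is available
    (the labels and scores below are invented): roc_curve returns the true
    positive rate (sensitivity) and false positive rate (1 - specificity)
    at every candidate cutoff.

      import numpy as np
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(1)
      y_true = np.r_[np.zeros(500, int), np.ones(500, int)]
      y_score = np.r_[rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)]

      fpr, tpr, thresholds = roc_curve(y_true, y_score)
      # tpr is sensitivity; 1 - fpr is specificity; each entry is one cutoff.
      for i in (1, len(thresholds) // 2, len(thresholds) - 1):
          print(thresholds[i], tpr[i], 1 - fpr[i])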

    Whether you start with a dichotomy or create one from a
    continuous score, improving the reliability of a particular test
    offers the chance of improving both sensitivity and specificity at
    the same time, since you get to draw a new ROC curve.
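
    A simulation of that point (the noise model is my own assumption, not
    from the thread): treating lower measurement noise as a stand-in for
    higher reliability, the whole ROC curve lifts as the noise shrinks.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(2)
      truth = np.r_[np.zeros(2000, int), np.ones(2000, int)]
      signal = np.where(truth == 1, 1.5, 0.0)

      # Smaller noise sd ~ a more reliable (more reproducible) test.
      for noise_sd in (2.0, 1.0, 0.5):
          score = signal + rng.normal(0.0, noise_sd, truth.size)
          print("noise sd %.1f -> AUC %.3f"
                % (noise_sd, roc_auc_score(truth, score)))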

    Note this: a totally different test might have higher "validity"
    even though it has lower "reliability" -- say, a perfectly reproducible
    test that does not measure the disease very well. So, if your choices
    include tests that are not revisions of the original, my first
    response was wrong. What will do the job is a test with higher
    validity rather than higher reliability/reproducibility.

    Hope this helps.

    --
    Rich Ulrich

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From barth@leaderamp.com@21:1/5 to All on Sun Jan 20 10:51:05 2019
    Regarding reliability: reliability refers to the consistency of a
    measure. That is, given the same (or nearly the same) set of conditions,
    do we obtain the same measurement? Sensitivity and specificity refer to
    the ability of a measure (or a variable more generally) to predict a
    dichotomous criterion; thus, sensitivity and specificity provide support
    for criterion validity. As for statistics that summarize sensitivity and
    specificity, the area under the receiver operating characteristic (ROC)
    curve is the standard, although its values generally range from ~0.5
    (meaning no useful trade-off between sensitivity and specificity at any
    cut point on the measure) to 1 (perfect sensitivity and specificity).
    (Note that values < 0.5 are possible, indicating that the measure
    predicts the criterion at a level worse than chance.) The Youden J
    statistic provides a measure that ranges from 0 to 1, but it is specific
    to a given cut point, whereas the area under the curve is not.
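
    A short sketch of that contrast (invented data, scikit-learn assumed):
    the AUC summarizes every cutoff at once, while Youden's J is evaluated
    cut point by cut point and is often reported at the cutoff that
    maximizes it.

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(3)
      y_true = np.r_[np.zeros(300, int), np.ones(300, int)]
      y_score = np.r_[rng.normal(0.0, 1.0, 300), rng.normal(1.0, 1.0, 300)]

      auc = roc_auc_score(y_true, y_score)   # cut-point-free summary
      fpr, tpr, thresholds = roc_curve(y_true, y_score)
      J = tpr - fpr                          # Youden's J at each cut point
      best = np.argmax(J)
      print("AUC %.3f; J peaks at %.3f for cutoff %.3f"
            % (auc, J[best], thresholds[best]))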

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)