• Q lower p-value or narrower CI the better

    From Cosine@21:1/5 to All on Tue Jan 11 10:22:02 2022
    Hi:

    When doing a statistical test, we often compute the p-value and confidence interval (CI) at a given significance level of alpha.

    Questions arise: would it be better to have a lower p-value? Likewise, would it be better to have a narrower CI? Why and why not?
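    For concreteness, a minimal Python sketch (simulated data; scipy assumed available) of how a single two-group comparison yields both a p-value and a CI at level alpha:

    ```python
    # Minimal sketch: one two-sample t-test gives a p-value, and the same
    # pooled-SE machinery gives a (1 - alpha) CI for the mean difference.
    # Data below are simulated, purely for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=1.0, size=30)   # group 1 (simulated)
    y = rng.normal(loc=0.5, scale=1.0, size=30)   # group 2 (simulated)
    alpha = 0.05

    # p-value from the equal-variance two-sample t-test
    t_stat, p_value = stats.ttest_ind(x, y)

    # Matching (1 - alpha) CI for the mean difference
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    diff = x.mean() - y.mean()
    ci = (diff - t_crit * se, diff + t_crit * se)

    print(f"p = {p_value:.4f}, {100 * (1 - alpha):.0f}% CI for the difference = "
          f"({ci[0]:.3f}, {ci[1]:.3f})")
    ```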

    Thanks,

  • From Rich Ulrich@21:1/5 to All on Tue Jan 11 14:34:21 2022
    On Tue, 11 Jan 2022 10:22:02 -0800 (PST), Cosine <asecant@gmail.com>
    wrote:

    Hi:

    When doing a statistical test, we often compute the p-value and confidence interval (CI) at a given significance level of alpha.

    Questions arise: would it be better to have a lower p-value? Likewise, would it be better to have a narrower CI? Why and why not?


    Cost. And your friends may make fun of you if you spend too much.

    I want a car that has great acceleration and top speed and outstanding
    fuel economy. It should also be roomy and fun to drive. It should
    look good. Especially, it should be cheap to buy and to insure.

    Unfortunate, that dream. When buying, I've always settled for a limited
    choice among "what's available" and what is convenient.

    --
    Rich Ulrich

  • From Cosine@21:1/5 to All on Tue Jan 11 19:10:23 2022
    Cosine wrote on Wednesday, January 12, 2022 at 2:22:04 AM [UTC+8]:
    Hi:

    When doing a statistical test, we often compute the p-value and confidence interval (CI) at a given significance level of alpha.

    Questions arise: would it be better to have a lower p-value? Likewise, would it be better to have a narrower CI? Why and why not?


    For p-value, suppose we have two new diagnostic methods, A and B. We want to know:
    1) are they both better than the standard method?
    2) is method A better than B?

    We design studies and use the accuracy (Acc) to check their performance.

    Comparing method A with the standard one, we have: Acc_Asmp, p-value_A, CI_A.
    Comparing method B with the standard one, we have: Acc_Bsmp, p-value_B, CI_B.

    If p-value_A < p-value_B, could we say that Acc_Asmp is more significant than Acc_Bsmp?

    Similarly, we define the width of the CI as WCI, so we have WCI_A and WCI_B.

    If WCI_A < WCI_B, could we say that Acc_A is more significant or more reliable, since
    we could be sure that the true value of Acc_A falls within a narrower CI (smaller width)?
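    A hedged sketch of this setup (hypothetical accuracies; a simple normal-approximation test against an assumed, fixed standard accuracy): it computes Acc, the p-value, and WCI for A and B, and shows that both the p-value and the CI width also shrink with the number of cases, not only with the size of the effect:

    ```python
    # Sketch: same observed accuracy for A and B, but A evaluated on many more
    # cases. A then gets the smaller p-value AND the narrower CI purely from n.
    # All numbers are hypothetical; the standard's accuracy is treated as known.
    import numpy as np
    from scipy import stats

    acc_std = 0.80          # assumed accuracy of the standard method

    def one_sample_prop_test(acc_hat, n, p0=acc_std, alpha=0.05):
        """Normal-approximation z-test of H0: accuracy = p0, plus a Wald CI width."""
        se0 = np.sqrt(p0 * (1 - p0) / n)            # SE under H0 (for the test)
        z = (acc_hat - p0) / se0
        p_value = 2 * stats.norm.sf(abs(z))         # two-sided p-value
        se_hat = np.sqrt(acc_hat * (1 - acc_hat) / n)
        half = stats.norm.ppf(1 - alpha / 2) * se_hat
        return p_value, 2 * half                    # p-value and CI width (WCI)

    p_A, wci_A = one_sample_prop_test(acc_hat=0.85, n=2000)   # method A
    p_B, wci_B = one_sample_prop_test(acc_hat=0.85, n=200)    # method B

    print(f"A: p = {p_A:.4g}, WCI = {wci_A:.3f}")
    print(f"B: p = {p_B:.4g}, WCI = {wci_B:.3f}")
    # Identical estimated accuracy (the effect), yet A looks "more significant"
    # and "more reliable" only because its n is ten times larger.
    ```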

  • From Rich Ulrich@21:1/5 to All on Thu Jan 13 20:24:21 2022
    On Tue, 11 Jan 2022 19:10:23 -0800 (PST), Cosine <asecant@gmail.com>
    wrote:

    Cosine wrote on Wednesday, January 12, 2022 at 2:22:04 AM [UTC+8]:
    Hi:

    When doing a statistical test, we often compute the p-value and confidence interval (CI) at a given significance level of alpha.

    Questions arise: would it be better to have a lower p-value? Likewise, would it be better to have a narrower CI? Why and why not?


    For p-value, suppose we have two new diagnostic methods, A and B. We want to know:
    1) are they both better than the standard method?
    2) is method A better than B?

    I think you want to mull the idea that TESTing is separate
    from ESTIMATION. Testing starts by designating a cutoff.
    Estimation reports "effect sizes" -- usually, in natural units of the experiment, rather than by comparing p-values.
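    One illustration of the estimation view (counts are hypothetical): report the effect in natural units, here the difference in accuracy between A and B, with a 95% CI, rather than ranking methods by their p-values:

    ```python
    # Effect size in natural units: difference in accuracy with a Wald 95% CI.
    # Counts are made up for illustration only.
    import numpy as np
    from scipy import stats

    n_A, correct_A = 500, 440    # method A: 440/500 correct (hypothetical)
    n_B, correct_B = 500, 415    # method B: 415/500 correct (hypothetical)

    acc_A, acc_B = correct_A / n_A, correct_B / n_B
    diff = acc_A - acc_B                              # effect in natural units
    se = np.sqrt(acc_A * (1 - acc_A) / n_A + acc_B * (1 - acc_B) / n_B)
    z = stats.norm.ppf(0.975)                         # 95% two-sided
    print(f"Acc_A - Acc_B = {diff:.3f}, "
          f"95% CI ({diff - z * se:.3f}, {diff + z * se:.3f})")
    ```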

    Yeah, I know that if tests are entirely commensurate (same Ns,
    SDs), then a p = 0.001 reflects a t-test mean-difference
    which is about twice that for p = 0.05. I would only ever mention
    the comparison if I were already engaged in explaining "effect
    sizes" in a more thorough way, such as "Why we should ignore
    all the 'tiny' effects where p is not < 0.001."
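    A quick numeric check of that rough factor (the per-group n below is an arbitrary choice): for commensurate two-sample t-tests, the mean difference needed for p = 0.001 versus p = 0.05 scales like the ratio of the two-sided t critical values:

    ```python
    # Ratio of the mean differences needed to just reach p = 0.001 vs p = 0.05
    # when n and SD are the same, i.e. the ratio of two-sided t critical values.
    from scipy import stats

    n_per_group = 30                          # hypothetical group size
    df = 2 * n_per_group - 2
    t_05 = stats.t.ppf(1 - 0.05 / 2, df)      # critical t for p = 0.05
    t_001 = stats.t.ppf(1 - 0.001 / 2, df)    # critical t for p = 0.001
    print(f"t(0.05) = {t_05:.2f}, t(0.001) = {t_001:.2f}, "
          f"ratio = {t_001 / t_05:.2f}")
    # For df = 58 the ratio is about 1.7; "about twice" is a loose rule of thumb
    # and depends on the degrees of freedom.
    ```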


    And this statement of yours is not the way statisticians would ever
    phrase it, either -- "For p-value, suppose we have ....".

    I assume that you intended to say something like, "For two new
    diagnostic methods, we have tests (with p-values) comparing each
    to a standard and also to each other."

    In a testing environment, or when one is TALKing about testing,
    we would never ASSERT that A is better than B unless the test
    for A vs B has a p-value that meets the designated cutoff.
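    For the diagnostic-accuracy setting above, testing A against B directly could look like the sketch below (an exact McNemar-style test on the discordant pairs when both methods score the same cases; the counts are hypothetical):

    ```python
    # Direct A-vs-B test on paired results: under H0 (equal accuracy) each
    # discordant case is equally likely to favor A or B, so the count favoring
    # A is Binomial(n_discordant, 0.5). Counts are hypothetical.
    from scipy import stats

    n_A_only = 40   # cases A got right and B got wrong
    n_B_only = 22   # cases B got right and A got wrong

    res = stats.binomtest(n_A_only, n_A_only + n_B_only, p=0.5)
    print(f"McNemar exact p-value for A vs B: {res.pvalue:.4f}")
    # Only this A-vs-B p-value, judged against the pre-chosen alpha, supports
    # the claim "A is better than B" -- not comparing p-value_A with p-value_B.
    ```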



    We design studies and use the accuracy (Acc) to check their performance.

    Comparing method A with the standard one, we have: Acc_Asmp, p-value_A, CI_A.
    Comparing method B with the standard one, we have: Acc_Bsmp, p-value_B, CI_B.

    If p-value_A < p-value_B, could we say that Acc_Asmp is more significant than Acc_Bsmp?

    Similarly, we define the width of the CI as WCI, so we have WCI_A and WCI_B.

    If WCI_A < WCI_B, could we say that Acc_A is more significant or more reliable, since
    we could be sure that the true value of Acc_A falls within a narrower CI (smaller width)?

    --
    Rich Ulrich
