• Q statistically IN-significant

    From Cosine@21:1/5 to All on Wed Apr 21 05:52:25 2021
    Hi:

    When the result of the experiment is p-value <= alpha, we could simply claim that the null hypothesis could be rejected. However, what if the result is p-value > alpha? What kind of constructive discussion could we have beyond simply
    saying that we could not reject the null hypothesis? For example, could we find something useful for designing a new experiment?

    Say we are testing a new drug or a new method and we get the result p-value > alpha. What useful information could we deduce, beyond the fact that H0 could not be rejected?

    Thank you,

  • From David Jones@21:1/5 to Cosine on Wed Apr 21 13:52:30 2021
    Cosine wrote:

    > What if the result is p-value > alpha? What kind of constructive
    > discussion could we have beyond simply saying that we could not
    > reject the null hypothesis? For example, could we find something
    > useful for designing a new experiment?

    In general, the result of a significance test on its own is not enough.
    You should almost always go on to derive a confidence interval for an
    "effect size". You then need to think, in the particular context,
    about whether that range of effect sizes is important. If you were
    planning to go on to a further experiment, you could pick a value for
    the effect size (possibly from within the confidence interval, but in
    any case some important-to-detect value) and set up the experiment so
    as to be able to detect an effect of that size. This might just mean
    choosing a sample size, either via a power analysis or by an argument
    based on a formula for the standard error of the estimated effect
    size, using results from the first experiment.
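
    As a concrete sketch of those two steps, here is some Python. The
    data, the target effect size, and the power target are all
    hypothetical; the sample-size step uses statsmodels' power routines.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.power import TTestIndPower

    # Hypothetical outcomes from the first (non-significant) experiment.
    treatment = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.2, 5.8])
    control = np.array([4.9, 4.7, 5.3, 5.0, 4.6, 5.1, 4.8, 5.2])

    # Step 1: a 95% confidence interval for the effect (difference in
    # means), via a pooled-variance t interval.
    n1, n2 = len(treatment), len(control)
    diff = treatment.mean() - control.mean()
    sp2 = ((n1 - 1) * treatment.var(ddof=1)
           + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    t_crit = stats.t.ppf(0.975, n1 + n2 - 2)
    print(f"difference: {diff:.2f}, "
          f"95% CI: ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")

    # Step 2: pick an important-to-detect standardized effect size and
    # solve for the per-group sample size giving 80% power at alpha=0.05.
    target_d = 0.4  # hypothetical Cohen's d judged worth detecting
    n_needed = TTestIndPower().solve_power(effect_size=target_d,
                                           alpha=0.05, power=0.80)
    print(f"per-group n for the next experiment: {np.ceil(n_needed):.0f}")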

    But, haven't you asked this same question here previously?

  • From Rich Ulrich@21:1/5 to dajhawkxx@nowherel.com on Wed Apr 21 19:33:40 2021
    On Wed, 21 Apr 2021 13:52:30 +0000 (UTC), "David Jones" <dajhawkxx@nowherel.com> wrote:

    > In general, the result of a significance test on its own is not enough.
    > You should almost always go on to derive a confidence interval for an
    > "effect size". You then need to think, in the particular context,
    > about whether that range of effect sizes is important. [...]
    >
    > But, haven't you asked this same question here previously?

    Good comments.

    I have some wandering thoughts, inspired by, "more than simply
    saying that we could not reject the null hypothesis".

    Let's assume that some experiment was thought, a priori, to have
    enough power to produce an interesting result. But it failed to.

    Was the experiment carried out without a hitch? Was the protocol
    followed? Without modification? On the sample that was expected?
    (I wonder how many clinical treatment trials have had their results
    confounded by the COVID epidemic.)

    I remember reading of one clinical study which "failed to replicate"
    the treatment results of an earlier study; the original authors
    complained that the experiment, as PERFORMED, did NOT use the
    most important aspects of methods they had recommended. I
    concluded that since I was not a clinician, I could not judge whether
    the differences should be important.

    Were the background conditions as expected? If the
    epidemic recedes, the expected number of events may never
    appear.

    If the experiment failed to find its interesting result, even though
    carried out without obvious problems emerging, then it must be
    time to revise the hypothesis (if you are unwilling to abandon it).
    Does it need a specific sub-sample or circumstances or some
    variation in how something is done ("double the dose")?

    --
    Rich Ulrich

  • From duncan smith@21:1/5 to Rich Ulrich on Thu Apr 22 16:26:02 2021
    On 22/04/2021 00:33, Rich Ulrich wrote:
    > If the experiment failed to find its interesting result, even though
    > carried out without obvious problems emerging, then it must be
    > time to revise the hypothesis (if you are unwilling to abandon it).
    > Does it need a specific sub-sample or circumstances or some
    > variation in how something is done ("double the dose")?

    And after the various hypothesis revisions and subgroup analyses, how
    would you justify any claims arising from the exercise?

    Duncan

  • From David Jones@21:1/5 to duncan smith on Thu Apr 22 17:36:07 2021
    duncan smith wrote:

    > And after the various hypothesis revisions and subgroup analyses,
    > how would you justify any claims arising from the exercise?

    The context here is that one would protect oneself from the dangers
    of data-dredging or over-analysing a single set of data by going on
    to do an independent experiment and an independent analysis of data
    from that experiment.
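
    For what it's worth, once the two experiments really are independent,
    the combined evidence can also be assessed formally. A minimal sketch
    using Fisher's method of combining p-values (the numbers here are
    hypothetical):

    from scipy.stats import combine_pvalues

    # p-values from two genuinely independent experiments (hypothetical).
    p_first, p_replication = 0.08, 0.03
    stat, p_combined = combine_pvalues([p_first, p_replication],
                                       method="fisher")
    print(f"Fisher combined p-value: {p_combined:.4f}")
    # Only valid if the second experiment's data and analysis are truly
    # independent of the first -- which is the point being made above.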

  • From duncan smith@21:1/5 to David Jones on Fri Apr 23 01:50:39 2021
    On 22/04/2021 18:36, David Jones wrote:
    > The context here is that one would protect oneself from the dangers
    > of data-dredging or over-analysing a single set of data by going on
    > to do an independent experiment and an independent analysis of data
    > from that experiment.

    There still needs to be some justifiable, pre-specified criterion for
    claiming a positive result (or stopping and generating no such claim).

    Duncan

  • From David Jones@21:1/5 to duncan smith on Fri Apr 23 09:14:18 2021
    duncan smith wrote:

    > There still needs to be some justifiable, pre-specified criterion for
    > claiming a positive result (or stopping and generating no such claim).

    Well yes, one can't just go on doing new experiments until the result
    one wants is found. You can't just ignore all the individual steps
    taken in an endeavour. One approach would be to set up a simulation
    experiment that replicates all those individual steps. Alternatively,
    there are approaches via the sequential-testing standards used in
    quality control. However, a lot is left to the experimenter's
    integrity.
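
    To make the simulation idea concrete, here is a toy sketch in Python.
    The two-stage protocol below (test once; if non-significant, run one
    follow-up experiment) is a hypothetical example, not anyone's actual
    design; it just shows how replaying the whole procedure under H0
    reveals its true false-positive rate.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n, reps = 0.05, 30, 20000
    positives = 0

    for _ in range(reps):
        # Stage 1: first experiment, both groups generated under H0.
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(0.0, 1.0, n)
        if stats.ttest_ind(x, y).pvalue <= alpha:
            positives += 1
            continue
        # Stage 2: one follow-up experiment after a non-significant try.
        x2 = rng.normal(0.0, 1.0, n)
        y2 = rng.normal(0.0, 1.0, n)
        if stats.ttest_ind(x2, y2).pvalue <= alpha:
            positives += 1

    # With two shots at alpha = 0.05 the procedure's real false-positive
    # rate is about 1 - 0.95**2 = 0.0975, not the nominal 0.05.
    print(f"overall false-positive rate: {positives / reps:.4f}")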

  • From Rich Ulrich@21:1/5 to dajhawkxx@nowherel.com on Fri Apr 23 14:04:17 2021
    On Fri, 23 Apr 2021 09:14:18 +0000 (UTC), "David Jones" <dajhawkxx@nowherel.com> wrote:

    > Well yes, one can't just go on doing new experiments until the result
    > one wants is found.

    We have been open-ended about whose science we could be speaking
    of, and whose conventions apply. In clinical research (psychiatry), I
    divided our hypotheses into the ones that we were confirming from
    the start, justifying the study; and all the others. Something found
    by data-dredging would be a "speculative" result, to be considered in
    the future.

    > You can't just ignore all the individual steps taken in an endeavour.
    > One approach would be to set up a simulation experiment that
    > replicates all those individual steps. Alternatively, there are
    > approaches via the sequential-testing standards used in quality
    > control. However, a lot is left to the experimenter's integrity.

    The report of results will face critiques, before and after
    publication. The investigator has standards to meet.

    --
    Rich Ulrich

  • From David Duffy@21:1/5 to Cosine on Fri Apr 30 05:06:48 2021
    Cosine <asecant@gmail.com> wrote:
    > Say we are testing a new drug or a new method and we get the result
    > p-value > alpha. What useful information could we deduce, beyond the
    > fact that H0 could not be rejected?

    Have a look at R.A. Fisher's analysis of Mendel's experimental data -
    spoiler: he thought the data fitted the hypothesis much better than
    one would expect by chance.
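
    For a flavour of that analysis: in a goodness-of-fit test, a p-value
    very close to 1 means the observed counts sit suspiciously close to
    expectation. A minimal sketch with made-up counts for a 3:1 Mendelian
    ratio (not Mendel's actual data):

    from scipy.stats import chisquare

    # Made-up dominant:recessive counts out of 1000 offspring.
    observed = [751, 249]
    expected = [750, 250]  # the 3:1 ratio under the Mendelian hypothesis
    stat, p = chisquare(observed, f_exp=expected)
    print(f"chi-square = {stat:.4f}, p = {p:.3f}")
    # Here p is about 0.94: the fit is far closer than chance variation
    # would typically produce, which is itself worth remarking on.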
