Hi:

When the result of an experiment is p-value <= alpha, we can simply
claim that the null hypothesis is rejected. However, what if the
result is p-value > alpha? What kind of constructive discussion could
we have, beyond simply saying that we could not reject the null
hypothesis? For example, could we find something useful for designing
a new experiment?

Say we are testing a new drug or a new method and we get a result of
p-value > alpha. What useful information could we deduce, in addition
to "H0 could not be rejected"?

Thank you,
Cosine wrote:
> [snip]
In general, the result of a significance test on its own is not enough.
You should almost always go on to derive a confidence interval for an
"effect size". You then need to think, in the particular context,
about whether that range of effect sizes is important. If you were
planning to go on to a further experiment, you could pick a value for
the effect size (possibly from within the confidence interval, but some
important-to-detect value) and set up the experiment so as to be able
to detect an effect of that size. This might just mean choosing a
sample size, either via a test-power-type analysis or by an argument
based on a formula for the standard error of the estimated effect size,
using results from the first experiment.
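The two steps above (interval for the effect size, then a sample size for a follow-up experiment) can be sketched in Python. This is a minimal, hypothetical illustration assuming a two-sample comparison of means with a normal approximation; all the numbers are invented.

```python
import math

def effect_ci(mean_a, mean_b, sd_a, sd_b, n_a, n_b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    diff = mean_b - mean_a
    se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return diff - z * se, diff + z * se

def sample_size_per_group(delta, sd, alpha_z=1.96, power_z=0.84):
    """n per group to detect a mean difference `delta` with ~80% power
    at two-sided alpha = 0.05, equal SDs (standard normal-theory formula)."""
    return math.ceil(2 * ((alpha_z + power_z) * sd / delta) ** 2)

# Invented numbers: the first experiment showed a small, non-significant effect.
lo, hi = effect_ci(mean_a=10.0, mean_b=10.8, sd_a=3.0, sd_b=3.0, n_a=40, n_b=40)
print(f"95% CI for effect: ({lo:.2f}, {hi:.2f})")  # interval spans 0

# Plan the next experiment around an important-to-detect effect of 1.0.
print("n per group:", sample_size_per_group(delta=1.0, sd=3.0))
```

The interval straddling zero is what "p > alpha" looks like here, but its width tells you what effect sizes the first experiment could and could not rule out, which is exactly the input the sample-size calculation needs.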
But, haven't you asked this same question here previously?
On Wed, 21 Apr 2021 13:52:30 +0000 (UTC), "David Jones" <dajhawkxx@nowherel.com> wrote:
> [snip]
Good comments.
I have some wandering thoughts, inspired by, "more than simply
saying that we could not reject the null hypothesis".
Let's assume that some experiment was thought, a priori, to have
enough power to produce an interesting result. But it failed to.
Was the experiment carried out without hitch? Was protocol
followed? without modification? on the sample that was expected?
(I wonder how many clinical treatment trials have had their results confounded by the covid epidemic.)
I remember reading of one clinical study which "failed to replicate"
the treatment results of an earlier study; the original authors
complained that the experiment, as PERFORMED, did NOT use the
most important aspects of methods they had recommended. I
concluded that since I was not a clinician, I could not judge whether
the differences should be important.
Were the background conditions what was expected? If the
epidemic disappears, the expected number of events may never
appear.
If the experiment failed to find its interesting result, even though
carried out without obvious problems emerging, then it must be
time to revise the hypothesis (if you are unwilling to abandon it).
Does it need a specific sub-sample or circumstances or some
variation in how something is done ("double the dose")?
On 22/04/2021 00:33, Rich Ulrich wrote:
> [snip]
And after the various hypothesis revisions and subgroup analyses, how
would you justify any claims arising from the exercise?
Duncan
duncan smith wrote:
> [snip]
The context here is that one would protect oneself from the dangers of data-dredging or over-analysing a single set of data by going on to do
an independent experiment and an independent analysis of data from that experiment.
On 22/04/2021 18:36, David Jones wrote:
> [snip]
There still needs to be some justifiable, pre-specified criterion for claiming a positive result (or stopping and generating no such claim).
Duncan
duncan smith wrote:
> [snip]
Well yes, one can't just go on doing new experiments until the result
one wants is found. You can't just ignore all the individual steps
taken in an endeavour. One approach would be to set up a simulation
experiment that replicates all those individual steps. Alternatively,
there are approaches via the sequential-testing standards used in
quality control. However, a lot is left to the experimenter's integrity.
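The simulation idea can be sketched directly: under a true null, compare the false-positive rate of "run new experiments until one rejects" against "claim only if an independent confirmatory experiment also rejects". This is a hypothetical toy (z-tests on simulated normal data), not any standard package routine.

```python
import random
import statistics

def one_experiment(n=30, mu=0.0, sigma=1.0, z_crit=1.96):
    """One two-sided z-test of H0: mu = 0 on a simulated sample of size n."""
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    z = statistics.mean(xs) / (sigma / n ** 0.5)
    return abs(z) > z_crit

random.seed(1)
trials = 2000

# Strategy A: try up to 5 independent experiments, claim success if ANY rejects.
dredge = sum(any(one_experiment() for _ in range(5))
             for _ in range(trials)) / trials

# Strategy B: claim success only if an initial experiment AND an independent
# confirmatory experiment both reject.
confirm = sum(one_experiment() and one_experiment()
              for _ in range(trials)) / trials

print(f"false-positive rate, any-of-5:  {dredge:.3f}")   # well above 0.05
print(f"false-positive rate, confirmed: {confirm:.3f}")  # far below 0.05
```

Strategy A's rate approaches 1 - 0.95^5, which is roughly 0.23, illustrating Duncan's objection; requiring independent confirmation drives the rate towards 0.05 squared, which is the protection David Jones describes.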