Hi:

I've found that many academic journals require submissions to report the
statistical significance of the results (in terms of a p-value or a
confidence interval); however, it seems much less common for a journal
to require reporting the statistical power of the results. Why is that?

Should a "complete" report always include both statistical significance
(the p-value or alpha) and power (1 - beta)? What is the "practical"
meaning of a power analysis? For instance, could the results be
non-significant but of high power? What would that situation mean in
practice?
Cosine wrote:
[snip]
You should look into the similarities in the theory behind power
analyses and confidence intervals. Specifically, not approximate
confidence intervals, but the approach in which the points inside a
confidence interval are defined to be those not rejected by a
significance test. Under that construction, low power means a wide
confidence interval.
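To see the connection numerically, here is a minimal sketch in Python
(the setting is an assumption: a one-sample z-test with known sigma, a
hypothetical true effect of 0.5, and two-sided alpha = 0.05). As n
grows, the confidence interval narrows and the power rises together,
because both are driven by the standard error sigma/sqrt(n).

import numpy as np
from scipy import stats

alpha, sigma, true_effect = 0.05, 1.0, 0.5   # hypothetical values
z_crit = stats.norm.ppf(1 - alpha / 2)

for n in (10, 40, 160):
    se = sigma / np.sqrt(n)
    ci_width = 2 * z_crit * se   # full width of the 95% CI
    # power of the two-sided z-test when the true mean is true_effect
    power = (stats.norm.sf(z_crit - true_effect / se)
             + stats.norm.cdf(-z_crit - true_effect / se))
    print(f"n={n:4d}  CI width={ci_width:.3f}  power={power:.3f}")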
But a power analysis may not be thought of as part of the "results" of
an experiment; rather, it is something that comes before the experiment
and is used to help decide on the design and sample size. After an
initial experiment, you might do a power analysis to help you choose the
sample size for the next one, given that the power analysis will require
assumptions about, or estimates of, the sampling variation inherent in
the experimental procedure.
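As an illustration, here is a minimal sketch in Python of such a
prospective calculation (the effect size, alpha, and target power are
hypothetical numbers, not values from this thread): it solves for the
per-group sample size giving 80% power to detect a standardized effect
of d = 0.5 at alpha = 0.05 in a two-sample t-test.

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed, e.g. estimated from a pilot experiment
    alpha=0.05,
    power=0.80,
    alternative='two-sided',
)
print(f"about {n_per_group:.0f} subjects per group")   # roughly 64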
If you found something, obviously you had enough power.
On Friday, February 3, 2023 at 4:00:24 PM UTC-5, Rich Ulrich wrote:
If you found something, obviously you had enough power.
Rich, a former boss of mine made a statement similar to yours in a stats book he co-authored, and I challenged it (using simulation) in this short presentation:
https://www.researchgate.net/publication/299533433_Does_Statistical_Significance_Really_Prove_that_Power_was_Adequate
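For the flavor of the argument, here is a minimal sketch in Python of
that kind of simulation (the parameters are hypothetical, not taken from
the presentation): with a true effect of d = 0.2 and n = 25 per group,
power is only about 10 percent, yet "significant" results still occur,
and the effects they report are badly exaggerated, so finding something
does not by itself show that power was adequate.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n, reps = 0.2, 25, 10_000   # hypothetical effect and sample size
sig_effects = []
for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)       # control group
    b = rng.normal(true_d, 1.0, n)    # treatment group
    t, p = stats.ttest_ind(b, a)
    if p < 0.05:
        sig_effects.append(b.mean() - a.mean())

print(f"empirical power: {len(sig_effects) / reps:.3f}")
print(f"mean effect among significant results: {np.mean(sig_effects):.2f}"
      f" (true effect = {true_d})")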