On Sat, 29 Aug 2020 14:02:49 -0700 (PDT), Cosine <
asecant@gmail.com>
wrote:
Hi:
Before conducting multiple comparisons, we would conduct the F test to make sure that there is some effect beyond random variation. That is, we would not want to spend much time only to find out in the end that there is no difference among the groups of data.
Conducting an overall F test is a /strategy/ that can be
appropriate. In certain circumstances, I have liked to do
the overall F, followed by Least Significant Difference (LSD)
testing -- that is, No correction for multiple tests.
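The overall-F-then-uncorrected-LSD strategy can be sketched as follows. This is an illustrative simulation, not anyone's real data; the group labels and effect size are made up, and the pairwise step uses plain two-sample t-tests as a common approximation (classical LSD pools the ANOVA error variance).

```python
# Sketch of the "protected LSD" strategy: run the overall one-way
# ANOVA F test first, and only if it is significant, follow with
# pairwise t-tests with NO multiplicity correction.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "A": rng.normal(0.0, 1.0, 30),
    "B": rng.normal(0.0, 1.0, 30),
    "C": rng.normal(0.8, 1.0, 30),  # one shifted group so there is a real effect
}

f_stat, f_p = stats.f_oneway(*groups.values())
print(f"overall F = {f_stat:.2f}, p = {f_p:.4f}")

pairwise = []
if f_p < 0.05:  # the "protection": pairwise tests only after a significant F
    for a, b in combinations(groups, 2):
        t, p = stats.ttest_ind(groups[a], groups[b])
        pairwise.append((a, b, p))
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} (uncorrected)")
```

The protection comes entirely from the gate: if the F test is not significant, no pairwise comparison is ever reported.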
One reason that a vast number of clinical trials use only
two groups is the consideration of statistical power.
If there is a limit on how many cases are available (or how
much money you can spend), splitting your available sample
three ways will reduce the power (a) by reducing the N
for each group and (b) by introducing "multiple tests."
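Both effects, (a) and (b), can be quantified with standard noncentral-t power calculations. The sketch below uses a total N, effect size, and Bonferroni correction chosen purely for illustration.

```python
# How splitting a fixed total N three ways instead of two reduces power,
# (a) via smaller per-group n and (b) via a multiplicity correction.
import numpy as np
from scipy import stats

def two_sample_power(d, n_per_group, alpha):
    """Power of a two-sided two-sample t-test with equal group sizes,
    standardized effect size d, using the noncentral t distribution."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2.0)       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

N_total, d = 120, 0.5                          # illustrative numbers
p2      = two_sample_power(d, N_total // 2, alpha=0.05)      # two arms of 60
p3      = two_sample_power(d, N_total // 3, alpha=0.05)      # (a) three arms of 40
p3_bonf = two_sample_power(d, N_total // 3, alpha=0.05 / 3)  # (b) plus Bonferroni
print(f"2 arms: {p2:.2f}; 3 arms: {p3:.2f}; 3 arms + correction: {p3_bonf:.2f}")
```

Each step costs power: the three-arm design has less power per comparison than the two-arm design, and correcting for the three comparisons reduces it further.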
For one Control and two+ Test groups, you might use a
larger N for the Control and employ Dunnett's Test as your
overall test. With fixed N, that maximizes power for the simple
contrasts to Control.
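The "larger N for the Control" idea follows from a standard allocation result: with one control, k treatment arms, and fixed total N, the variance of each treatment-minus-control contrast is minimized when the control gets sqrt(k) times the per-treatment n. A small sketch (the numbers are illustrative):

```python
# Square-root allocation rule for one control vs. k treatment arms.
import numpy as np

def allocate(N_total, k):
    """Split N_total across one control and k treatment arms so the
    variance of each treatment-vs-control contrast is minimized.
    Standard result: n_control = n_treatment * sqrt(k)."""
    n_t = N_total / (k + np.sqrt(k))
    n_c = N_total - k * n_t
    return n_c, n_t

def contrast_var(n_c, n_t):
    """Variance (in units of sigma^2) of one treatment-minus-control mean."""
    return 1.0 / n_c + 1.0 / n_t

N, k = 120, 2
n_c, n_t = allocate(N, k)
v_equal   = contrast_var(N / 3, N / 3)   # naive equal split: 40 per arm
v_optimal = contrast_var(n_c, n_t)       # sqrt(k)-weighted split
print(f"control n = {n_c:.1f}, treatment n = {n_t:.1f}")
print(f"contrast variance: equal {v_equal:.4f} vs optimal {v_optimal:.4f}")
```

Every contrast of interest involves the control, so extra observations there do double (or k-fold) duty.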
Thinking in another direction: would there be any benefit to conducting the F test AFTER the multiple comparisons have been completed? Of course, this means that we conduct the comparisons without first completing an F test. Now, would the results of the F test conducted after the multiple comparisons add extra information to the analysis?
No. Unless there's some context I'm not thinking of.
--
Rich Ulrich