• Q: how could re-sampling help the t-test?

    From Cosine@21:1/5 to All on Tue Jan 7 11:32:24 2020
    Hi:

    When doing analysis for problems with a small sample, a popular approach is to replace the z-test with the t-test. However, there is still another approach, the re-sampling method. One can repeatedly and randomly draw "new" samples from the original sample set to
    form another sample set. After a set of "new" sample sets is built, one can do analysis on these sample sets. But how does this type of approach help to "cure" the problem of having a small sample set? Does it help improve the power of the analysis, or
    something else?
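    A rough sketch of the re-sampling idea just described, using Python/NumPy
    and made-up numbers: draw "new" samples with replacement from the original
    sample and recompute the statistic of interest on each one.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical small sample (values invented for illustration).
        data = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4])

        n_boot = 2000
        boot_means = np.empty(n_boot)
        for b in range(n_boot):
            # "New" sample: n values drawn with replacement from the original.
            resample = rng.choice(data, size=data.size, replace=True)
            boot_means[b] = resample.mean()

        # The spread of the re-sampled means estimates the standard error
        # of the original sample mean.
        print("re-sampled SE of the mean:", boot_means.std(ddof=1))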

  • From Rich Ulrich@21:1/5 to All on Tue Jan 7 16:38:42 2020
    On Tue, 7 Jan 2020 11:32:24 -0800 (PST), Cosine <asecant@gmail.com>
    wrote:

    > Hi:
    >
    > When doing analysis for problems with a small
    > sample, a popular approach is to replace the z-test
    > with the t-test. However, there is still another
    > approach, the re-sampling method. One can repeatedly
    > and randomly draw "new" samples from the original
    > sample set to form another sample set. After a set of
    > "new" sample sets is built, one can do analysis on
    > these sample sets. But how does this type of approach
    > help to "cure" the problem of having a small sample
    > set? Does it help improve the power of the analysis,
    > or something else?

    What you are describing is called "bootstrap".
    It is used in circumstances where a direct formula for
    the variance is hard to derive, or is made unreliable by
    oddities of the distribution.

    The t-test is simple. Thus, it is not improved by bootstrapping.
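    To see why the plain t-test gains little: for a simple mean, the bootstrap
    standard error just approximates the textbook s/sqrt(n) that the t-test
    already uses. A sketch with made-up numbers (Python/NumPy assumed):

        import numpy as np

        rng = np.random.default_rng(0)
        data = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4])  # hypothetical

        # Closed-form standard error used by the t-test.
        se_formula = data.std(ddof=1) / np.sqrt(data.size)

        # Bootstrap standard error: resample with replacement many times.
        resamples = rng.choice(data, size=(5000, data.size), replace=True)
        se_boot = resamples.mean(axis=1).std(ddof=1)

        print(f"formula SE:   {se_formula:.4f}")
        print(f"bootstrap SE: {se_boot:.4f}")  # nearly the same number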

    The choice between assuming "common variance" and
    "separate variances" for the two groups should depend
    on the expectations a professional in the area would
    have for the data, /not/ on the test (in SPSS, say)
    that tells you that "variances are unequal."
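    These two assumptions correspond to the pooled and Welch forms of the
    t-test; in SciPy, for example, the choice is the equal_var flag of
    ttest_ind (the data below are invented for the sketch):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        group_a = rng.normal(loc=10.0, scale=1.0, size=12)  # hypothetical
        group_b = rng.normal(loc=11.0, scale=3.0, size=12)  # hypothetical

        # Pooled ("common variance") t-test.
        t_pool, p_pool = stats.ttest_ind(group_a, group_b, equal_var=True)

        # Welch ("separate variances") t-test.
        t_welch, p_welch = stats.ttest_ind(group_a, group_b, equal_var=False)

        print(f"pooled: t={t_pool:.3f}, p={p_pool:.4f}")
        print(f"Welch:  t={t_welch:.3f}, p={p_welch:.4f}")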

    The biggest help for robustness of t-testing is the
    willingness to perform a transformation that produces
    a scale that is "interval" in terms of whatever the
    hypotheses are about. (For instance: chemical
    concentrations in biological processes are typically
    compared as "twice as much" or "ten times as much" -
    implying that /those/ processes merit taking the logs
    of the raw concentrations, to produce "equal intervals.")
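    A small sketch of that kind of transformation, assuming the comparisons
    of interest really are ratios (the concentration values are invented):

        import numpy as np
        from scipy import stats

        # Hypothetical concentrations; group differences are multiplicative.
        control = np.array([1.2, 0.8, 2.5, 1.7, 0.9, 3.1])
        treated = np.array([2.9, 1.8, 6.4, 3.5, 2.2, 7.0])

        # Test on the log scale, where "twice as much" is an equal interval.
        t_log, p_log = stats.ttest_ind(np.log(control), np.log(treated))
        print(f"t on log scale = {t_log:.3f}, p = {p_log:.4f}")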


    --
    Rich Ulrich

  • From David Jones@21:1/5 to Rich Ulrich on Wed Jan 8 00:26:34 2020
    Rich Ulrich wrote:

    [Rich Ulrich's reply quoted in full; see the message above.]

    The OP's question, being vague, also encompasses the possibility of
    doing permutations for testing rather than bootstrapping (whether for
    variances or for testing). The same sort of problems outlined for
    bootstrapping still apply, but there are rather fewer
    variations-on-a-theme for permutations when it comes to constructing
    formal tests. In addition, with permutations, it is rather clearer
    what role is being played by the (what should be) explicit assumption
    that "all permutations are equally likely" under the null hypothesis.
    Thus one would not consider using permutations for a non-paired
    two-group case if there were not strong evidence that the variances
    in the two samples were equal. Similarly, the role of "pairs" in a
    permutation test for paired-sample two-group tests is made clearly
    evident by saying that the values in a pair may be swapped or
    not-swapped with equal probability under the null hypothesis.
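    A minimal sketch of the two permutation schemes just described (the
    function names and data are my own, for illustration only):

        import numpy as np

        rng = np.random.default_rng(7)

        def unpaired_perm_test(x, y, n_perm=5000):
            # Shuffle group labels: under H0 every assignment of the pooled
            # values to the two groups is equally likely.
            observed = x.mean() - y.mean()
            pooled = np.concatenate([x, y])
            hits = 0
            for _ in range(n_perm):
                perm = rng.permutation(pooled)
                diff = perm[:x.size].mean() - perm[x.size:].mean()
                hits += abs(diff) >= abs(observed)
            return hits / n_perm

        def paired_perm_test(d, n_perm=5000):
            # Swap or not-swap each pair with equal probability, i.e. flip
            # the sign of each within-pair difference at random.
            observed = d.mean()
            hits = 0
            for _ in range(n_perm):
                signs = rng.choice([-1, 1], size=d.size)
                hits += abs((signs * d).mean()) >= abs(observed)
            return hits / n_perm

        # Hypothetical data for a quick check of both routines.
        x = np.array([5.1, 4.8, 6.2, 5.5, 4.9])
        y = np.array([6.0, 6.4, 5.9, 7.1, 6.6])
        print("unpaired p:", unpaired_perm_test(x, y))
        print("paired p:  ", paired_perm_test(x - y))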
