• Q comparing data of others

    From Cosine@21:1/5 to All on Fri Apr 23 06:08:24 2021
    Hi:

    To test whether the data from two groups show a meaningful difference, we use the difference as the test statistic. For example, we may want to test whether a new method performs better than an existing one.

    The question is: what if we do not have the original data, only some reported statistics? For example, one publication reports statistics for method-1 and another reports statistics for method-2. How do we check whether there is any statistically meaningful difference between the two methods?

    Thank you,

  • From Rich Ulrich@21:1/5 to All on Fri Apr 23 13:42:52 2021
    On Fri, 23 Apr 2021 06:08:24 -0700 (PDT), Cosine <asecant@gmail.com> wrote:

    > Hi:
    >
    > To test whether the data from two groups show a meaningful difference, we use the difference as the test statistic. For example, we may want to test whether a new method performs better than an existing one.
    >
    > The question is: what if we do not have the original data, only some reported statistics? For example, one publication reports statistics for method-1 and another reports statistics for method-2. How do we check whether there is any statistically meaningful difference between the two methods?


    This falls under the topic, "meta-analysis." See https://en.wikipedia.org/wiki/Meta-analysis

    I wish that the article emphasized, at the start, that
    a credible meta-analysis must include both an expert on
    the subject matter and a competent statistician.


    "Meta analysis" most often describes the combining of
    multiple studies, but a competent meta-analysis will begin
    by considering whether the results (even) are similar
    enough to combine as representing one outcome. So,
    contrasts are relevant. Initial steps for combining or
    contrasting are similar.

    For two studies:
    find a common measure of "effect size"; compute its
    standard error in each study; compute the t-test on the
    difference, as in the sketch below.
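
    A minimal sketch in Python, with hypothetical numbers (contrast_effects is an illustrative helper, not a library function); with only reported effect sizes and standard errors from two independent studies, the large-sample version of that t-test is a z-test on the difference:

    import math
    from scipy import stats

    def contrast_effects(e1, se1, e2, se2):
        # Test whether two independent effect-size estimates differ,
        # given effects on a common scale (e.g., standardized mean
        # differences) and their standard errors.
        z = (e1 - e2) / math.sqrt(se1**2 + se2**2)
        p = 2 * stats.norm.sf(abs(z))   # two-sided p-value
        return z, p

    # Hypothetical reported effects from the two publications.
    z, p = contrast_effects(0.45, 0.12, 0.20, 0.15)
    print(f"z = {z:.2f}, p = {p:.4f}")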

    If there is a sizable difference between the outcomes (logically
    or statistically), it is ill-advised to advertise the combined
    (weighted-average) estimate as a useful summary.

    --
    Rich Ulrich

  • From Jeff Miller@21:1/5 to Cosine on Fri Apr 23 17:21:43 2021
    On Saturday, April 24, 2021 at 1:08:32 AM UTC+12, Cosine wrote:
    > Hi:
    >
    > To test whether the data from two groups show a meaningful difference, we use the difference as the test statistic. For example, we may want to test whether a new method performs better than an existing one.
    >
    > The question is: what if we do not have the original data, only some reported statistics? For example, one publication reports statistics for method-1 and another reports statistics for method-2. How do we check whether there is any statistically meaningful difference between the two methods?

    If you are willing to treat the two datasets as if they came from a single study, you might be able to test directly, depending on what stats have been reported. To compare with a 2-sample t-test, for example, you would need only the 2 sample sizes, 2 observed means, and 2 observed s.d.'s. (And if the s.d. isn't given in one report, you could recover it if the report included the standard error of the mean or a confidence interval for the mean.) Many tests are based on some relatively simple but "sufficient" summary stats, and you can often compute these tests using only the summary info that is reported.
