• Q: assembling a sample set from existing smaller sets

    From Cosine@21:1/5 to All on Mon Jul 5 10:42:33 2021
    Hi:

    How do we properly assemble a dataset for testing the performance of a new screening method from a set of small datasets?

    To have enough power, we need a sufficiently large dataset, which might not be possible to collect in practice. Some papers resolve this by combining several similar but small datasets.

    The critical problem is: what rules ensure that the combined dataset is appropriate? Are there books that address this type of problem?
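
    To make the power point concrete, here is a minimal sketch in plain Python (all counts below are made up) of how the confidence interval around an estimated sensitivity narrows when several small datasets are naively pooled:

      import math

      def sensitivity_ci(tp, fn, z=1.96):
          # Wald 95% confidence interval for sensitivity = TP / (TP + FN)
          n = tp + fn
          p = tp / n
          half = z * math.sqrt(p * (1 - p) / n)
          return p, max(0.0, p - half), min(1.0, p + half)

      # Hypothetical small studies: (true positives, false negatives)
      studies = [(18, 6), (22, 8), (15, 5), (30, 10)]

      for tp, fn in studies:
          p, lo, hi = sensitivity_ci(tp, fn)
          print(f"single study n={tp + fn:3d}: sens={p:.2f}  95% CI [{lo:.2f}, {hi:.2f}]")

      # Naive pooling of the counts; only defensible if the studies are truly comparable
      tp_all = sum(tp for tp, _ in studies)
      fn_all = sum(fn for _, fn in studies)
      p, lo, hi = sensitivity_ci(tp_all, fn_all)
      print(f"pooled       n={tp_all + fn_all:3d}: sens={p:.2f}  95% CI [{lo:.2f}, {hi:.2f}]")

    Whether that naive pooling is legitimate is exactly the question being asked here.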

  • From David Duffy@21:1/5 to Cosine on Tue Jul 6 03:30:10 2021
    Cosine <asecant@gmail.com> wrote:
    > How do we properly assemble a dataset for testing the performance of a new screening method from a set of small datasets?
    > To have enough power, we need a sufficiently large dataset, which might not be possible to collect in practice. Some papers resolve this by combining several similar but small datasets.
    > The critical problem is: what rules ensure that the combined dataset is appropriate? Are there books that address this type of problem?

    One approach
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2790297/

    This is not just a statistical question but a domain-specific one (the decision
    to exclude a dataset rests not only on common-sense checks of study quality but
    also on knowledge of the underlying science of the test). Check out http://prisma-statement.org/
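
    One statistical complement to those domain checks is to quantify between-study heterogeneity before pooling. Below is a minimal sketch of DerSimonian-Laird random-effects pooling of logit sensitivities in plain Python; the counts are hypothetical, and a full diagnostic-accuracy meta-analysis such as the one linked above would model sensitivity and specificity jointly.

      import math

      def logit(p):
          return math.log(p / (1 - p))

      def inv_logit(x):
          return 1 / (1 + math.exp(-x))

      def dersimonian_laird(effects, variances):
          # Random-effects pooling of per-study effects (DerSimonian-Laird)
          w = [1 / v for v in variances]
          fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
          q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
          c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance
          w_star = [1 / (v + tau2) for v in variances]
          pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
          se = math.sqrt(1 / sum(w_star))
          return pooled, se, tau2, q

      # Hypothetical per-study counts: (true positives, false negatives)
      studies = [(18, 6), (22, 8), (15, 5), (30, 10)]
      effects = [logit(tp / (tp + fn)) for tp, fn in studies]
      variances = [1 / tp + 1 / fn for tp, fn in studies]   # approx. var of logit(sens)

      pooled, se, tau2, q = dersimonian_laird(effects, variances)
      print(f"pooled sensitivity {inv_logit(pooled):.2f}, "
            f"95% CI [{inv_logit(pooled - 1.96 * se):.2f}, {inv_logit(pooled + 1.96 * se):.2f}]")
      print(f"between-study variance tau^2 = {tau2:.3f}, Cochran's Q = {q:.2f}")

    A large tau^2 or Q relative to the number of studies is a warning that the small datasets may not be measuring the same thing and should not simply be concatenated.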

  • From Rich Ulrich@21:1/5 to davidD@qimr.edu.au on Wed Jul 7 11:52:03 2021
    On Tue, 6 Jul 2021 03:30:10 +0000 (UTC), David Duffy
    <davidD@qimr.edu.au> wrote:

    > Cosine <asecant@gmail.com> wrote:
    >> How do we properly assemble a dataset for testing the performance of a new screening method from a set of small datasets?
    >> To have enough power, we need a sufficiently large dataset, which might not be possible to collect in practice. Some papers resolve this by combining several similar but small datasets.
    >> The critical problem is: what rules ensure that the combined dataset is appropriate? Are there books that address this type of problem?

    > One approach
    > https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2790297/

    > This is not just a statistical question but a domain-specific one (the decision
    > to exclude a dataset rests not only on common-sense checks of study quality but
    > also on knowledge of the underlying science of the test). Check out
    > http://prisma-statement.org/

    Thanks for the reference.

    "Welcome to the Preferred Reporting Items for Systematic Reviews
    and Meta-Analyses (PRISMA) website!"

    I haven't spent much time reading, but that looks like an
    excellent resource.

    --
    Rich Ulrich

  • From David Duffy@21:1/5 to Rich Ulrich on Wed Jul 7 23:37:14 2021
    Rich Ulrich <rich.ulrich@comcast.net> wrote:
    >> http://prisma-statement.org/
    > "Welcome to the Preferred Reporting Items for Systematic Reviews
    > and Meta-Analyses (PRISMA) website!"

    The Cochrane Collaboration is the other go-to resource. I just noticed they have a handbook on how to do "Cochrane Reviews of Diagnostic Test Accuracy",

    https://methods.cochrane.org/sdt/handbook-dta-reviews

    and software notes,

    https://methods.cochrane.org/sdt/software-meta-analysis-dta-studies
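
    For orientation, the input those tools expect is one 2x2 table (TP, FP, FN, TN) per study. A small sketch in plain Python, with made-up counts, of turning such tables into per-study sensitivity and specificity with Wilson intervals:

      import math

      # One 2x2 table per study: (name, TP, FP, FN, TN); counts are hypothetical
      rows = [
          ("study A", 18, 4, 6, 52),
          ("study B", 22, 7, 8, 63),
          ("study C", 15, 3, 5, 40),
      ]

      def wilson(k, n, z=1.96):
          # Wilson score interval for a binomial proportion k / n
          p = k / n
          denom = 1 + z * z / n
          centre = (p + z * z / (2 * n)) / denom
          half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
          return centre - half, centre + half

      for name, tp, fp, fn, tn in rows:
          sens, spec = tp / (tp + fn), tn / (tn + fp)
          s_lo, s_hi = wilson(tp, tp + fn)
          c_lo, c_hi = wilson(tn, tn + fp)
          print(f"{name}: sens {sens:.2f} [{s_lo:.2f}, {s_hi:.2f}], "
                f"spec {spec:.2f} [{c_lo:.2f}, {c_hi:.2f}]")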
