How do we properly assemble a dataset for testing the performance of a new screening method when only a set of small datasets is available?
To have enough statistical power, the dataset needs to be sufficiently large, which may not be achievable in practice. Some papers address this by combining several similar but small datasets.
The critical question is: what rules ensure that the combined dataset is appropriate? Are there books that cover this type of problem?
Cosine <asecant@gmail.com> wrote:
> How do we properly assemble a dataset for testing the performance of a new method of screening by a set of small datasets?
> To have enough power, we need to have a dataset that is large enough. This might not be possible in practice. Some papers resolve this issue by combining a set of similar but small datasets.
> The critical problem here is: what are the rules to make sure the dataset combined is appropriate? Are there books that illustrate this type of problem?
One approach is meta-analysis:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2790297/
This is not just a statistical question but a domain-specific one: excluding a dataset rests not only on common-sense checks of study quality but also on knowledge of the underlying science of the test. Check out the PRISMA site:
http://prisma-statement.org/ ("Welcome to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) website!")
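Once the studies to combine have passed those quality checks, a common way to pool them is inverse-variance weighting, as in a fixed-effect meta-analysis, with Cochran's Q and I^2 as a rough check on whether pooling is defensible. A minimal sketch in Python (the effect estimates and variances below are hypothetical, purely for illustration):

```python
import math

def fixed_effect_pool(estimates, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its SE."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    return pooled, math.sqrt(1.0 / total)

def heterogeneity(estimates, variances):
    """Cochran's Q and I^2: a rough check that the studies agree well
    enough to be pooled (a high I^2 argues against a fixed-effect model)."""
    pooled, _ = fixed_effect_pool(estimates, variances)
    q = sum((e - pooled) ** 2 / v for e, v in zip(estimates, variances))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical effect estimates and variances from three small studies
est = [0.30, 0.45, 0.25]
var = [0.04, 0.09, 0.02]

pooled, se = fixed_effect_pool(est, var)
q, i2 = heterogeneity(est, var)
print(f"pooled effect = {pooled:.3f} (SE {se:.3f}), Q = {q:.3f}, I^2 = {i2:.0%}")
```

I^2 estimates the fraction of variation due to between-study heterogeneity rather than chance; when it is high (roughly above 50%), a random-effects model, or not pooling at all, is usually the safer choice.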