• Q what do we PROVE by showing statistical significance of a clinical trial

    From Cosine@21:1/5 to All on Thu Sep 24 11:48:25 2020
    Hi:

    What do we prove by showing the statistical significance of the results of a clinical trial? Say we have a new drug, a new device, or a new procedure, and we design a clinical trial to show that there is a statistically significant difference between
    the treatment results of the group using the new stuff and those of the control group. If this trial succeeds, we claim that the new stuff is effective.

    But how reliable is this claim? We have no doubt that if we cut off a person's head, that person will die. Yet even when new stuff passes the trial described above, in practice we still find that using it does not save
    everyone.

    So, what did we prove by conducting that kind of trial?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Duffy@21:1/5 to Cosine on Fri Sep 25 03:08:58 2020
    Cosine <asecant@gmail.com> wrote:
    Hi:

    What do we prove by showing the statistical significance of the
    results of a clinical trial? Say, we have a new drug, a new device, or
    a new procedure, and then we design a clinical trial to show that there
    is a statistically significant difference between the treatment results
    of the group using the new stuff and those of the control group. If this
    trial succeeds, we claim that the new stuff is effective.

    But how reliable is this claim? We have no doubt that if we cut off a person's head, that person will die. Yet even when new stuff passes the trial described above, in practice we still find that using it does not save everyone.

    So, what did we prove by conducting that kind of trial?


    You need to read some textbooks. Consider my head chopping off
    experiment:

    N     Prop_died   95% CI
    5       100%      57%-100%
    10      100%      72%-100%
    20      100%      83%-100%

    When should the Data Monitoring Committee suggest we stop the trial?
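    The lower bounds above are consistent with Wilson score intervals for a binomial proportion (when everyone dies, the Wilson lower limit reduces to n/(n + z²)). A minimal sketch, assuming that interval:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    phat = successes / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# All subjects died (successes == n), as in the table above.
for n in (5, 10, 20):
    lo, hi = wilson_ci(n, n)
    print(f"N={n:>2}  died=100%  95% CI: {lo:.1%}-{hi:.0%}")
```

    The lower bounds come out near the quoted 57%, 72%, and 83%, differing only in how they are rounded.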

  • From Bruce Weaver@21:1/5 to David Duffy on Fri Sep 25 06:43:35 2020
    On Thursday, September 24, 2020 at 11:09:03 PM UTC-4, David Duffy wrote:


    You need to read some textbooks. Consider my head chopping off
    experiment:

    N     Prop_died   95% CI
    5       100%      57%-100%
    10      100%      72%-100%
    20      100%      83%-100%

    When should the Data Monitoring Committee suggest we stop the trial?

    David's experiment reminds me of this systematic review published in BMJ:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/

  • From Rich Ulrich@21:1/5 to bweaver@lakeheadu.ca on Fri Sep 25 14:55:37 2020
    On Fri, 25 Sep 2020 06:43:35 -0700 (PDT), Bruce Weaver
    <bweaver@lakeheadu.ca> wrote:

    On Thursday, September 24, 2020 at 11:09:03 PM UTC-4, David Duffy wrote:


    You need to read some textbooks. Consider my head chopping off
    experiment:

    N     Prop_died   95% CI
    5       100%      57%-100%
    10      100%      72%-100%
    20      100%      83%-100%

    When should the Data Monitoring Committee suggest we stop the trial?

    David's experiment reminds me of this systematic review published in BMJ:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/

    I point out that this article has the implicit endorsement of NIH,
    appearing on the nih.gov website.

    --
    Rich Ulrich

  • From Rich Ulrich@21:1/5 to davidD@qimr.edu.au on Fri Sep 25 15:43:30 2020
    On Fri, 25 Sep 2020 03:08:58 +0000 (UTC), David Duffy
    <davidD@qimr.edu.au> wrote:

    Cosine <asecant@gmail.com> wrote:
    Hi:

    What do we prove by showing the statistical significance of the
    results of a clinical trial?

    There is a modern trend /away from/ "statistical significance."

    Perhaps it is better to speak of the (often 95%) confidence
    interval, which presents the same information more transparently. See
    David Duffy's example, below.

    Herman Rubin, a wise gentleman who used to post here,
    regularly returned to "decision theory" as the underlying
    guide. I liked that reminder. If you want, decisions can
    explicitly include costs and benefits, and Bayesian methods
    sometimes make use of "informative" prior information (instead
    of the technically uninformative prior taken in the math).
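    As a sketch of what an "informative" prior does (the numbers here are made up, not from any trial): in the conjugate Beta-Binomial model, the prior acts like pseudo-observations that pull the posterior toward it.

```python
def beta_posterior(a, b, successes, n):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior.

    The prior behaves like (a + b) pseudo-observations, a of them successes.
    """
    return a + successes, b + (n - successes)

# Hypothetical data: 8 responders out of 10 patients.
# Technically uninformative prior: Beta(1, 1) is flat over [0, 1].
flat = beta_posterior(1, 1, successes=8, n=10)        # Beta(9, 3)

# Informative prior: earlier evidence of ~50% response, worth 20 patients.
informed = beta_posterior(10, 10, successes=8, n=10)  # Beta(18, 12)

# The informative prior shrinks the estimate toward 50%.
for name, (a, b) in [("flat", flat), ("informed", informed)]:
    print(f"{name}: posterior mean = {a / (a + b):.2f}")
```

    The flat-prior posterior mean is 8+1 over 10+2, i.e. 0.75, while the informative prior pulls it down to 0.60.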


    Say, we have a new drug, a new device, or
    a new procedure, and then we design a clinical trial to show that there
    is a statistically significant difference between the treatment results
    of the group using the new stuff and those of the control group. If this
    trial succeeds, we claim that the new stuff is effective.

    But how reliable is this claim? We have no doubt that if we cut off a
    person's head, that person will die. Yet even when new stuff passes the
    trial described above, in practice we still find that using it does not
    save everyone.

    So, what did we prove by conducting that kind of trial?

    When we look at the CI, we should also apply the notion of
    "effect size." We might decide not to use a "better" method when
    it is expensive and when its improvement, as shown in trials
    with huge Ns, is very small.
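    A quick sketch of that point, using made-up numbers: a two-proportion z-test on a hypothetical trial with 100,000 patients per arm flags a half-percentage-point improvement as "significant", even though the effect is clinically tiny.

```python
import math

# Hypothetical huge-N trial: response rates 50.0% (control) vs 50.5%
# (treatment), 100,000 patients per arm. Pooled two-proportion z-test.
n = 100_000
p1, p2 = 0.500, 0.505
pooled = (p1 + p2) / 2
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p2 - p1) / se

print(f"z = {z:.2f}")                      # exceeds 1.96, so p < 0.05
print(f"risk difference = {p2 - p1:.3f}")  # an absolute gain of 0.005
print(f"number needed to treat = {1 / (p2 - p1):.0f}")
```

    Statistically significant, yet roughly 200 patients must be treated for one extra responder; whether that justifies an expensive treatment is a cost-benefit question, not a significance question.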



    You need to read some textbooks. Consider my head chopping off
    experiment:

    N Prop_died 95%_CI
    5 100% 57%-100%
    10 100% 72%-100%
    20 100% 83%-100%

    When should the Data Monitoring Committee suggest we stop the trial?

    One might reach these quandaries by ignoring costs and
    benefits from the start.

    --
    Rich Ulrich
