• Q influence of statistics on scientific discoveries

    From Cosine@21:1/5 to All on Fri Dec 10 13:25:00 2021
    Hi:

    Is there literature discussing how statistical theory affects the construction or refutation of hypotheses, theories, or laws in the natural sciences, e.g., physics, chemistry, and biology, and in
    engineering, e.g., electrical engineering, computer engineering, civil engineering?

    For example, a famous or important hypothesis, theory, or law that could not be proved or disproved until someone used statistical theory to resolve the question.

  • From Rich Ulrich@21:1/5 to All on Sun Dec 12 15:37:30 2021
    On Fri, 10 Dec 2021 13:25:00 -0800 (PST), Cosine <asecant@gmail.com>
    wrote:

    > Hi:
    >
    > Is there literature discussing how statistical theory affects the
    > construction or refutation of hypotheses, theories, or laws in the
    > natural sciences, e.g., physics, chemistry, and biology, and in
    > engineering, e.g., electrical engineering, computer engineering,
    > civil engineering?

    Well, I don't know what you mean by "statistical theories."

    Are you asking whether "statisticians" are engaged in those
    areas? There are surely a lot of high-powered mathematicians
    in physics and chemistry these days. Sometimes the same
    distributions have different names, depending on the specialty,
    but some insights bleed over.

    The interiors of nuclear reactors and atom bombs are modeled
    by the statistical distribution of the capture cross-section. The
    Cauchy distribution, also called the "Lorentzian distribution",
    has no defined mean and infinite variance, so this model resists
    the usual moment-based analysis.
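
    A quick illustration of that pathology, as a minimal sketch in
    Python with numpy (not from the original post): running sample
    means of Cauchy draws never settle down, because the law of
    large numbers needs a finite mean.

    import numpy as np

    rng = np.random.default_rng(0)

    # Sample means of standard Cauchy draws do not converge as n grows,
    # since the Cauchy distribution has no mean; Gaussian means do.
    for n in (100, 10_000, 1_000_000):
        print(f"n={n:>9}:",
              f"Cauchy mean={rng.standard_cauchy(n).mean():10.3f},",
              f"Normal mean={rng.standard_normal(n).mean():8.4f}")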

    MRI and CT image reconstruction are, at heart, statistical deconvolution problems.
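
    Not the actual scanner pipeline, but a toy sketch in the same
    spirit (Python/numpy; the signal, blur kernel, and noise level
    are invented): recover sharp features from a blurred, noisy
    measurement with Wiener deconvolution.

    import numpy as np

    rng = np.random.default_rng(1)

    # True signal: a few sharp spikes.
    n = 256
    x = np.zeros(n)
    x[[60, 61, 180]] = [1.0, 0.8, 1.5]

    # Blur with a Gaussian point-spread function, then add noise.
    t = np.arange(n) - n // 2
    psf = np.exp(-t**2 / (2 * 3.0**2))
    psf /= psf.sum()
    H = np.fft.fft(np.fft.ifftshift(psf))     # kernel centered at 0
    y = np.real(np.fft.ifft(H * np.fft.fft(x)))
    y += 0.01 * rng.standard_normal(n)

    # Wiener deconvolution: invert the blur in the frequency domain,
    # regularized by an assumed noise-to-signal ratio.
    nsr = 1e-3
    G = np.conj(H) / (np.abs(H)**2 + nsr)
    x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))

    print("largest recovered peaks at:", sorted(np.argsort(x_hat)[-3:]))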

    I was asked (forty years ago) by a young physicist how they
    should handle the sparse data they had on the weight/power
    of new nuclear particles -- different instruments have different
    limits, and measurements made near the maximum power were
    prone to greater error. I might have added a little to what he
    already knew, since his focus had never been on inference.
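
    One standard way to combine measurements whose errors differ
    (a generic sketch, not whatever was actually done back then;
    the numbers are invented) is inverse-variance weighting, so
    that the noisier instruments count for less.

    import numpy as np

    # Hypothetical measurements of one quantity from three instruments;
    # the last, operating near its limit, is much noisier.
    values = np.array([10.2, 9.8, 11.5])
    sigmas = np.array([0.3, 0.4, 1.5])

    # Inverse-variance weighted mean and its standard error.
    w = 1.0 / sigmas**2
    mean = np.sum(w * values) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    print(f"combined estimate: {mean:.2f} +/- {se:.2f}")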

    Those guys have caught up on the rules of inference. New particles
    or phenomena, these days, are often identified after counting the
    number of "events detected" over months or years, and if p < 1 in
    10 thousand (or some such), Something New can be declared.
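
    In its simplest counting-experiment form (a sketch; the background
    rate and observed count are invented), the p-value is the chance
    that background alone produces at least the observed number of
    events.

    from scipy.stats import poisson

    background = 100.0   # expected background events over the run
    observed = 145       # events actually detected

    # One-sided p-value: P(X >= observed) under a Poisson background.
    p = poisson.sf(observed - 1, background)
    print(f"p = {p:.2e}")   # small p suggests Something New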

    Astronomical events and studies also resemble (to me, at least)
    methods in deconvolution, and they apply significance testing.

    What happens with a large number of gas molecules follows
    the ideal gas law, PV = nRT, for an ideal gas. Applied statistics
    gives the distribution of particle velocities; exceptions to the
    law show up as systematic departures. Statistical studies of those
    data lead to further models.
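
    A toy check of that statistical picture (a sketch; units and the
    particle count are arbitrary): velocity components drawn from the
    Maxwell-Boltzmann distribution recover the temperature through the
    mean kinetic energy, <E> = (3/2) kT.

    import numpy as np

    rng = np.random.default_rng(2)

    # Maxwell-Boltzmann: each velocity component is Gaussian with
    # variance kT/m. Units chosen so m = 1; kT is arbitrary.
    kT, m, n = 2.5, 1.0, 1_000_000
    v = rng.normal(0.0, np.sqrt(kT / m), size=(n, 3))

    # Mean kinetic energy per particle should be (3/2) kT.
    mean_ke = (0.5 * m * np.sum(v**2, axis=1)).mean()
    print(f"estimated kT: {2 * mean_ke / 3:.4f}  (true: {kT})")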


    > For example, a famous or important hypothesis, theory, or law
    > that could not be proved or disproved until someone used
    > statistical theory to resolve the question.


    Astronomers and particle physicists are using statistical inference
    to conclude that old theories are NOT sufficient, and the door
    is opened to new suggestions. Sometimes there is a new idea that
    some of them jump on. Our best, old theories are the ones that
    have not yet been supplanted.

    Statistics can show that some OLD idea is not enough; we can't
    /prove/ that the new explanation has to be forever 'right.'

    --
    Rich Ulrich
