• More on math constants for Fortran standard

    From David Duffy@21:1/5 to All on Fri Apr 1 05:18:57 2022
    There was a recent thread on what value of pi people would like to
    have available, say in a module. I noticed GSL has such constants,
    inherited from LIBC, i.e. "the constants from the Unix98 standard":
    M_E, M_LOG2E, M_LOG10E, M_LN2, M_LN10, M_PI, M_PI_2 [pi/2], M_PI_4,
    M_1_PI [1/pi], M_2_PI, M_2_SQRTPI, M_SQRT2, M_SQRT1_2 [1/sqrt(2)]
    which are macros, with the long double versions being M_El etc., and
    single precision M_Ef (so 39 numbers and counting). Something like

    use math_constants
    real (kind=8), parameter :: pi = m_constant("M_PI", kind=8)

    too ugly? I imagine m_constant() has a lookup table, but could run something like the spigot algorithm, and generate as many digits as requested ;)

    Cheers, David Duffy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ron Shepard@21:1/5 to David Duffy on Fri Apr 1 10:55:32 2022
    On 4/1/22 12:18 AM, David Duffy wrote:
    > There was a recent thread on what value of pi people would like to
    > have available, say in a module. I noticed GSL has such constants,
    > inherited from LIBC, i.e. "the constants from the Unix98 standard":
    > M_E, M_LOG2E, M_LOG10E, M_LN2, M_LN10, M_PI, M_PI_2 [pi/2], M_PI_4,
    > M_1_PI [1/pi], M_2_PI, M_2_SQRTPI, M_SQRT2, M_SQRT1_2 [1/sqrt(2)]
    > which are macros, with the long double versions being M_El etc., and
    > single precision M_Ef (so 39 numbers and counting). Something like
    >
    > use math_constants
    > real (kind=8), parameter :: pi = m_constant("M_PI", kind=8)
    >
    > too ugly? I imagine m_constant() has a lookup table, but could run
    > something like the spigot algorithm, and generate as many digits as
    > requested ;)

    In addition to settling on which constants are available to the
    programmer, there are a couple of things that should be considered in
    the user interface.

    1) It should be difficult for the programmer to assign a low-precision
    value to a higher-precision parameter (or variable). This is a common
    error in legacy codes, e.g. assigning a single-precision PI to a
    double-precision parameter and expecting it to be accurate to full
    precision. If a programmer wants to do this on purpose, then it would
    be alright to make him introduce low-precision variables and do the
    conversions manually. Such a mistake should be difficult to make, not
    easy.

    2) It should be difficult to use a high-precision constant within an
    expression by mistake. Mixed-kind expressions in Fortran are required
    to be evaluated in the higher precision, so if this is done by
    mistake, it would result in unintended computational inefficiencies.
    If the programmer intends to evaluate those types of mixed-kind
    expressions, then he should be required to introduce variables of the
    appropriate kind to use in the expressions or to otherwise make the
    intention clear. Such a mistake should be difficult to make, not easy.

    3) The constants should be specified somehow so that their values do
    not depend on such details as the current rounding mode. This means
    that the constants might differ in the last few bits from an
    expression evaluated at run time that, in exact arithmetic, would
    return that same value. This convention would allow a programmer to
    test the floating point values for equality for values that were
    extracted separately. Other floating point comparisons should continue
    to take into account the finite precision of floating point
    arithmetic.

    4) I'm assuming that these constants are all mathematically based, so
    that their actual values do not change over time. If physical
    constants were to be included, ones that get refined over time with
    higher precision, then that also presents programming issues. If one
    section of code uses one value for Planck's constant, and another
    section uses a different value, then chaos can ensue. It might be
    useful to have such constants in the language, but issues such as the
    physical units and the dates of their specification would somehow need
    to be included. To give one example, I memorized Avogadro's constant
    in high school as 6.023*10^23. It was an experimentally determined
    value with error bars. The current definition is the exact integer
    6.02214076*10^23, with no error bars. Note that the 1960-era number
    that I memorized was off in the fourth decimal.

    $.02 -Ron Shepard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to Ron Shepard on Fri Apr 1 09:51:41 2022
    On Friday, April 1, 2022 at 8:55:37 AM UTC-7, Ron Shepard wrote:

    (snip)

    > 1) It should be difficult for the programmer to assign a
    > low-precision value to a higher precision parameter (or variable).
    > This is a common error in legacy codes, e.g. assigning a
    > single-precision PI to a double precision parameter and expecting it
    > is accurate to full precision. If a programmer wants to do this on
    > purpose, then it would be alright to make him introduce
    > low-precision variables and to do the conversions manually. Such a
    > mistake should be difficult to make, not easy.

    Interesting idea, but note that it disallows popular constants
    like 0.1. Unless you restrict it to constants close to pi.


    > 2) It should be difficult to use a high-precision constant within an
    > expression by mistake. The mixed-kind expressions in fortran require
    > such expressions to be evaluated in the higher precision, so if this
    > is done by mistake, it would result in unintended computational
    > inefficiencies. If the programmer intends to evaluate those types of
    > mixed-kind expressions, then he should be required to introduce
    > variables of the appropriate kind to use in the expressions or to
    > otherwise make the intention clear. Such a mistake should be
    > difficult to make, not easy.

    Java requires a cast for narrowing conversions, which are in the
    direction of:

    double --> float --> long --> int --> short --> byte.

    That catches some constant mistakes, but not all.

    Note also that Java, like C, has floating point constants
    that default to double, with a trailing f for float.

    Java also has some useful constants in the java.lang.Math
    class, such as Math.PI and Math.E. Those are double,
    with no float version given.


    > 3) The constants should be specified somehow so that their values do
    > not depend on such details as the current rounding mode. This means
    > that the constants might differ in the last few bits from an
    > expression evaluated at run time that, in exact arithmetic, would
    > return that same value. This convention would allow a programmer to
    > test the floating point values for equality for values that were
    > extracted separately. Other floating point comparisons should
    > continue to take into account the finite precision of floating point
    > arithmetic.

    As far as I know, rounding mode is a run-time setting.

    Though in the case of the x87 temporary-real format, values are
    converted to double or float at run time, using the current rounding
    mode.

    And x87 does include some useful constants.
    Besides 0, 1, and pi, it has log2(10), log2(e),
    ln(2), and log10(2).

    > 4) I'm assuming that these constants are all mathematically based,
    > so that their actual values do not change over time. If physical
    > constants were to be included, ones that get refined over time with
    > higher precision, then that also presents programming issues. If one
    > section of code uses one value for Planck's constant, and another
    > section uses a different value, then chaos can ensue. It might be
    > useful to have such constants in the language, but issues such as
    > the physical units and the dates of their specification would
    > somehow need to be included. To give one example, I memorized
    > Avogadro's constant in high school as 6.023*10^23. It was an
    > experimentally determined value with error bars. The current
    > definition is the exact integer 6.02214076*10^23, with no error
    > bars. Note that the 1960-era number that I memorized was off in the
    > fourth decimal.

    Many of the useful physical constants have been exact since the 2019
    SI redefinition.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robin Vowels@21:1/5 to All on Fri Apr 1 19:27:13 2022
    On Saturday, April 2, 2022 at 3:51:43 AM UTC+11, gah4 wrote:
    On Friday, April 1, 2022 at 8:55:37 AM UTC-7, Ron Shepard wrote:

    (snip)
    >> 1) It should be difficult for the programmer to assign a
    >> low-precision value to a higher precision parameter (or variable).
    >> This is a common error in legacy codes, e.g. assigning a
    >> single-precision PI to a double precision parameter and expecting
    >> it is accurate to full precision. If a programmer wants to do this
    >> on purpose, then it would be alright to make him introduce
    >> low-precision variables and to do the conversions manually. Such a
    >> mistake should be difficult to make, not easy.
    .
    > Interesting idea, but note that it disallows popular constants
    > like 0.1.
    .
    0.1 is not a popular constant. And anyway, after 30 years,
    it should be well-known by now that it is written 0.1d0
    or some comparable way.
    .
    > Unless you restrict it to constants close to pi.
    .
    >> 2) It should be difficult to use a high-precision constant within
    >> an expression by mistake. The mixed-kind expressions in fortran
    >> require such expressions to be evaluated in the higher precision,
    >> so if this is done by mistake, it would result in unintended
    >> computational inefficiencies. If the programmer intends to evaluate
    >> those types of mixed-kind expressions, then he should be required
    >> to introduce variables of the appropriate kind to use in the
    >> expressions or to otherwise make the intention clear. Such a
    >> mistake should be difficult to make, not easy.
    > Java requires a cast for narrowing conversions, which are in the
    > direction of:
    >
    > double --> float --> long --> int --> short --> byte.
    >
    > That catches some constant mistakes, but not all.
    >
    > Note also that Java, like C, has floating point constants
    > that default to double, with a trailing f for float.
    >
    > Java also has some useful constants in the java.lang.Math
    > class, such as Math.PI and Math.E. Those are double,
    > with no float version given.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to All on Fri Apr 1 23:15:51 2022
    On Friday, April 1, 2022 at 7:27:15 PM UTC-7, Robin Vowels wrote:

    (snip, I wrote)
    >> Interesting idea, but note that it disallows popular constants
    >> like 0.1.
    >
    > 0.1 is not a popular constant. And anyway, after 30 years,
    > it should be well-known by now that it is written 0.1d0
    > or some comparable way.

    Maybe I missed the question. I thought he wanted to catch:

    PI = 3.141

    that is, approximations to pi accurate to much less than the
    precision of the variable. That would also apply to:

    PI = 3.14159D0

    again, much less than the desired precision.

    One could have a warning for such bad approximations to pi.
    Maybe also for 2pi and pi/2. Maybe for sqrt(2) and sqrt(3),
    or log(10.0).

    Catch the mistakes of all those lazy programmers who
    don't look up constants to enough digits.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robin Vowels@21:1/5 to All on Sat Apr 2 02:57:32 2022
    On Saturday, April 2, 2022 at 5:15:53 PM UTC+11, gah4 wrote:
    On Friday, April 1, 2022 at 7:27:15 PM UTC-7, Robin Vowels wrote:

    > (snip, I wrote)
    >>> Interesting idea, but note that it disallows popular constants
    >>> like 0.1.
    >> 0.1 is not a popular constant. And anyway, after 30 years,
    >> it should be well-known by now that it is written 0.1d0
    >> or some comparable way.
    .
    > Maybe I missed the question. I thought he wanted to catch:
    >
    > PI = 3.141
    .
    Certainly it is trivial for a compiler to warn when an assignment
    to a high-precision variable is made from a low-precision constant.
    .
    > that is, approximations to pi accurate to much less than the
    > precision of the variable. That would also apply to:
    >
    > PI = 3.14159D0
    >
    > again, much less than the desired precision.
    .
    This one is a different kettle of fish.
    It is an assignment of a double-precision constant
    to a double-precision variable, which is OK.
    .
    > One could have a warning for such bad approximations to pi.
    .
    But how is a compiler going to know that the user wants
    something of greater precision? It could well be that that
    is what the user wants.
    .
    > Maybe also for 2pi and pi/2. Maybe for sqrt(2) and sqrt(3),
    > or log(10.0).
    >
    > Catch the mistakes of all those lazy programmers who
    > don't look up constants to enough digits.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Phillip Helbig (undress to reply@21:1/5 to nospam@nowhere.org on Sat Apr 2 17:24:03 2022
    In article <VfF1K.493064$7F2.260003@fx12.iad>, Ron Shepard
    <nospam@nowhere.org> writes:

    > 4) I'm assuming that these constants are all mathematically based,
    > so that their actual values do not change over time. If physical
    > constants were to be included, ones that get refined over time with
    > higher precision, then that also presents programming issues. If one
    > section of code uses one value for Planck's constant, and another
    > section uses a different value, then chaos can ensue. It might be
    > useful to have such constants in the language, but issues such as
    > the physical units and the dates of their specification would
    > somehow need to be included. To give one example, I memorized
    > Avogadro's constant in high school as 6.023*10^23. It was an
    > experimentally determined value with error bars. The current
    > definition is the exact integer 6.02214076*10^23, with no error
    > bars. Note that the 1960-era number that I memorized was off in the
    > fourth decimal.

    Note that, with time, more and more constants of nature have become
    defined in the various revisions of the SI, so not just the speed of
    light and Avogadro's number but also, now, Planck's constant.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Collins@21:1/5 to David Duffy on Sun Apr 3 05:00:06 2022
    On Friday, April 1, 2022 at 6:19:04 AM UTC+1, David Duffy wrote:
    > There was a recent thread on what value of pi people would like to
    > have available, say in a module. <snip>

    Would it be possible to have new intrinsic functions of the form:

    pi(kind)

    which would provide the value of pi for the appropriate real kind?
    The advantage of this approach is that the compiler can evaluate the
    intrinsic and insert the required value. There should be no
    performance penalty. pi is not, of course, the only example, but it is
    probably the most common.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Ron Shepard on Sun Apr 3 12:31:02 2022
    Ron Shepard <nospam@nowhere.org> schrieb:

    > In addition to settling on which constants are available to the
    > programmer, there are a couple of things that should be considered
    > in the user interface.
    >
    > 1) It should be difficult for the programmer to assign a
    > low-precision value to a higher precision parameter (or variable).
    $ cat a.f
          PROGRAM MAIN
          DOUBLE PRECISION PI, ONE
          PI = 3.141592
          ONE = 1.
          END
    $ gfortran -Wconversion -Wconversion-extra a.f
    a.f:3:11:

        3 |       PI = 3.141592
          |           1
    Warning: Conversion from 'REAL(4)' to 'REAL(8)' at (1) [-Wconversion-extra]
    a.f:4:12:

        4 |       ONE = 1.
          |            1
    Warning: Conversion from 'REAL(4)' to 'REAL(8)' at (1) [-Wconversion-extra]

    This also shows a problem with the approach: the assignment to ONE
    is flagged even though it is OK, because 1. has the same value in
    both precisions. However, if you are willing to go through legacy
    code with a fine-toothed comb, that is possible.

    > This is a common error in legacy codes, e.g. assigning a
    > single-precision PI to a double precision parameter and expecting it
    > is accurate to full precision. If a programmer wants to do this on
    > purpose, then it would be alright to make him introduce
    > low-precision variables and to do the conversions manually. Such a
    > mistake should be difficult to make, not easy.
    >
    > 2) It should be difficult to use a high-precision constant within an
    > expression by mistake. The mixed-kind expressions in fortran require
    > such expressions to be evaluated in the higher precision, so if this
    > is done by mistake, it would result in unintended computational
    > inefficiencies. If the programmer intends to evaluate those types of
    > mixed-kind expressions, then he should be required to introduce
    > variables of the appropriate kind to use in the expressions or to
    > otherwise make the intention clear. Such a mistake should be
    > difficult to make, not easy.

    $ cat b.f
          PROGRAM MAIN
          DOUBLE PRECISION A, B
          B = 1.D0
          A = 3.14159 * B
          PRINT *,A
          END
    $ gfortran -Wconversion -Wconversion-extra b.f
    b.f:4:17:

        4 |       A = 3.14159 * B
          |                 1
    Warning: Conversion from 'REAL(4)' to 'REAL(8)' at (1) [-Wconversion-extra]

    > 3) The constants should be specified somehow so that their values do
    > not depend on such details as the current rounding mode. This means
    > that the constants might differ in the last few bits from an
    > expression evaluated at run time that, in exact arithmetic, would
    > return that same value. This convention would allow a programmer to
    > test the floating point values for equality for values that were
    > extracted separately. Other floating point comparisons should
    > continue to take into account the finite precision of floating point
    > arithmetic.

    I think the approach of comparing against a suitable epsilon is better.

    > 4) I'm assuming that these constants are all mathematically based,
    > so that their actual values do not change over time. If physical
    > constants were to be included, ones that get refined over time with
    > higher precision, then that also presents programming issues. If one
    > section of code uses one value for Planck's constant, and another
    > section uses a different value, then chaos can ensue. It might be
    > useful to have such constants in the language, but issues such as
    > the physical units and the dates of their specification would
    > somehow need to be included. To give one example, I memorized
    > Avogadro's constant in high school as 6.023*10^23. It was an
    > experimentally determined value with error bars. The current
    > definition is the exact integer 6.02214076*10^23, with no error
    > bars. Note that the 1960-era number that I memorized was off in the
    > fourth decimal.

    So they did change it, did they? I memorized the same number as
    you did.

    There is, by the way, another potential error that you do not
    mention:

    $ cat c.f
          PROGRAM MAIN
          PRINT *,3.1415926535897932
          END
    $ gfortran -Wconversion -Wconversion-extra c.f
    c.f:2:33:

        2 |       PRINT *,3.1415926535897932
          |                                 1
    Warning: Non-significant digits in 'REAL(4)' number at (1), maybe
    incorrect KIND [-Wconversion-extra]

    where the warning also is likely to generate false positives, but
    sometimes may be useful.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ron Shepard@21:1/5 to All on Sun Apr 3 10:43:29 2022
    On 4/2/22 12:24 PM, Phillip Helbig (undress to reply) wrote:
    In article <VfF1K.493064$7F2.260003@fx12.iad>, Ron Shepard
    <nospam@nowhere.org> writes:

    >> 4) I'm assuming that these constants are all mathematically based,
    >> so that their actual values do not change over time. If physical
    >> constants were to be included, ones that get refined over time with
    >> higher precision, then that also presents programming issues. If
    >> one section of code uses one value for Planck's constant, and
    >> another section uses a different value, then chaos can ensue. It
    >> might be useful to have such constants in the language, but issues
    >> such as the physical units and the dates of their specification
    >> would somehow need to be included. To give one example, I memorized
    >> Avogadro's constant in high school as 6.023*10^23. It was an
    >> experimentally determined value with error bars. The current
    >> definition is the exact integer 6.02214076*10^23, with no error
    >> bars. Note that the 1960-era number that I memorized was off in the
    >> fourth decimal.

    > Note that, with time, more and more constants of nature have become
    > defined in the various revisions of the SI, so not just the speed of
    > light and Avogadro's number but also, now, Planck's constant.

    There have been exchanges about what is measured and what is defined.
    The speed of light used to be a measured quantity, based on the
    definitions of the second and of the length of a meter. Now, the
    length of a meter is the experimentally determined quantity, and the
    second and the speed of light are the defined quantities. Planck's
    constant is like that too. It was previously a measured quantity; now
    it is a defined quantity. It isn't necessarily clear that our current
    definitions, and their associated values, will remain forever; they
    could change again in the future as different types of measurements
    become more refined (or even possible). There is also the odd feature
    that a particular constant can be defined, with no error bars, in one
    set of units, but it becomes an experimentally determined value, with
    error bars, in another set of units.

    $.02 -Ron Shepard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to Ron Shepard on Sun Apr 3 20:56:23 2022
    On Sunday, April 3, 2022 at 8:43:33 AM UTC-7, Ron Shepard wrote:

    (snip)

    > There have been exchanges about what is measured and what is
    > defined. The speed of light used to be a measured quantity, based on
    > the definitions of the second and of the length of a meter. Now, the
    > length of a meter is the experimentally determined quantity, and the
    > second and the speed of light are the defined quantities. Planck's
    > constant is like that too. It was previously a measured quantity;
    > now it is a defined quantity. It isn't necessarily clear that our
    > current definitions, and their associated values, will remain
    > forever; they could change again in the future as different types of
    > measurements become more refined (or even possible). There is also
    > the odd feature that a particular constant can be defined, with no
    > error bars, in one set of units, but it becomes an experimentally
    > determined value, with error bars, in another set of units.

    Funny thing, though.

    When they changed the definitions, they didn't change which are base
    units and which are derived units.

    Since "derived unit" is supposed to mean that its value is derived
    from the definitions of other units, that should have changed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)