There was a recent thread on what value of pi people would like to have available, say in a module. I noticed GSL has such constants inherited from libc, i.e. "the constants from the Unix98 standard":
M_E, M_LOG2E, M_LOG10E, M_LN2, M_LN10, M_PI, M_PI_2 [pi/2], M_PI_4,
M_1_PI [1/pi], M_2_PI, M_2_SQRTPI, M_SQRT2, M_SQRT1_2 [1/sqrt(2)]
which are macros; the long double versions are M_El etc., and the single-precision versions M_Ef (so 39 numbers and counting). Something like
use math_constants
real (kind=8), parameter :: pi = m_constant("M_PI", kind=8)
too ugly? I imagine m_constant() would have a lookup table, but it could also run something like the spigot algorithm and generate as many digits as requested ;)
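For comparison, here is a minimal sketch of what such a module might look like today without the hypothetical m_constant() lookup, using ordinary kind-parameterized parameters (the module and constant names are my own invention):

module math_constants
   use, intrinsic :: iso_fortran_env, only: real32, real64
   implicit none
   ! Double-precision master values, written out to full precision.
   real(real64), parameter :: M_PI_d    = 3.14159265358979323846_real64
   real(real64), parameter :: M_E_d     = 2.71828182845904523536_real64
   real(real64), parameter :: M_SQRT2_d = 1.41421356237309504880_real64
   ! Single-precision versions derived by explicit, visible conversion.
   real(real32), parameter :: M_PI_s    = real(M_PI_d, real32)
   real(real32), parameter :: M_E_s     = real(M_E_d, real32)
end module math_constants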
1) It should be difficult for the programmer to assign a low-precision value to a higher-precision parameter (or variable). This is a common error in legacy codes, e.g. assigning a single-precision PI to a double-precision parameter and expecting it to be accurate to full precision. If a programmer wants to do this on purpose, then it would be all right to make him introduce low-precision variables and do the conversions manually. Such a mistake should be difficult to make, not easy.
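A concrete instance of the mistake, as a minimal sketch in standard Fortran (names invented here):

program precision_pitfall
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   ! Wrong: the literal is default (single) precision, so only about
   ! seven significant digits survive the widening to real64.
   real(real64), parameter :: pi_bad  = 3.141592653589793
   ! Right: the kind suffix makes the literal itself double precision.
   real(real64), parameter :: pi_good = 3.141592653589793_real64
   print *, pi_bad    ! about 3.1415927410
   print *, pi_good   ! 3.1415926535897931
end program precision_pitfall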
2) It should be difficult to use a high-precision constant within an expression by mistake. Mixed-kind expressions in Fortran are evaluated in the higher precision, so if this is done by mistake it results in unintended computational inefficiencies. If the programmer intends to evaluate those kinds of mixed-kind expressions, then he should be required to introduce variables of the appropriate kind to use in the expressions, or to otherwise make the intention clear. Such a mistake should be difficult to make, not easy.
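To illustrate the promotion rule in question, a small sketch (assuming a double-precision pi from some constants module):

program mixed_kind
   use, intrinsic :: iso_fortran_env, only: real32, real64
   implicit none
   real(real64), parameter :: pi_d = 3.14159265358979323846_real64
   real(real32) :: r, area
   r = 2.0_real32
   ! r is promoted to real64, the arithmetic runs in double precision,
   ! and the result is demoted back to real32 on assignment.
   area = pi_d * r**2
   print *, area
   ! Keeping everything in one kind makes the intent (and cost) explicit:
   area = real(pi_d, real32) * r**2
   print *, area
end program mixed_kind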
3) The constants should be specified in such a way that their values do not depend on details like the current rounding mode. This means that a constant might differ in the last few bits from an expression evaluated at run time that, in exact arithmetic, would return the same value. This convention would allow a programmer to test floating point values for equality when those values were obtained separately. Other floating point comparisons should continue to take into account the finite precision of floating point arithmetic.
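A small check makes the point concrete; the second comparison may print T or F depending on how the run-time expression rounds, which is exactly why separately specified constants should be bit-identical:

program compare_pi
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   real(real64), parameter :: pi_lit = 3.14159265358979323846_real64
   real(real64) :: pi_run
   pi_run = 4.0_real64 * atan(1.0_real64)
   ! Two copies of the same decimal literal always compare equal.
   print *, pi_lit == 3.14159265358979323846_real64
   ! A value computed at run time may differ in the last bit or two.
   print *, pi_lit == pi_run
end program compare_pi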
4) I'm assuming that these constants are all mathematically based, so that their actual values do not change over time. If physical constants were to be included, ones that get refined over time with higher precision, then that also presents programming issues. If one section of code uses one value for Planck's constant, and another section uses a different value, then chaos can ensue. It might be useful to have such constants in the language, but issues such as the physical units and the dates of their specification would somehow need to be included. To give one example, I memorized Avogadro's constant in high school as 6.023*10^23. It was an experimentally determined value with error bars. The current definition is the exact value 6.02214076*10^23, with no error bars. Note that the 1960-era number that I memorized was off in the fourth significant digit.
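One sketch of the units-and-dates bookkeeping this alludes to; the type and field names are invented purely for illustration:

module physical_constants
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   type :: tagged_constant
      real(real64)      :: value
      character(len=16) :: units
      integer           :: year    ! year of the defining revision
      logical           :: exact   ! defined exactly vs. measured
   end type tagged_constant
   ! Avogadro's number: measured in 1960-era tables, exact since the
   ! 2019 SI redefinition.
   type(tagged_constant), parameter :: avogadro = &
      tagged_constant(6.02214076e23_real64, "1/mol", 2019, .true.)
end module physical_constants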
On Friday, April 1, 2022 at 8:55:37 AM UTC-7, Ron Shepard wrote:
(snip)
1) It should be difficult for the programmer to assign a low-precision value to a higher precision parameter (or variable). (snip)
Interesting idea, but note that it disallows popular constants like 0.1. Unless you restrict it to constants close to pi.
2) It should be difficult to use a high-precision constant within an expression by mistake. (snip)
Java requires a cast for narrowing conversions, which are in the direction of:
double --> float --> long --> int --> short --> byte.
That catches some constant mistakes, but not all. Note also that Java, like C, has floating point constants that default to double, with a trailing f for float. Java also has some useful constants in the java.lang.Math class, such as Math.PI and Math.E. Those are double, with no float version given.
Interesting idea, but note that it disallows popular constants like 0.1.
0.1 is not a popular constant. And anyway, after 30 years, it should be well-known by now that it is written 0.1d0 or some comparable way.
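Shown concretely (0.1 is not exactly representable in binary, and the default-real literal carries only single precision):

program tenth
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   real(real64) :: a, b
   a = 0.1      ! single-precision literal widened: about 0.100000001490
   b = 0.1d0    ! double-precision literal: about 0.10000000000000000555
   print *, a
   print *, b
end program tenth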
On Friday, April 1, 2022 at 7:27:15 PM UTC-7, Robin Vowels wrote:
(snip, I wrote: "Interesting idea, but note that it disallows popular constants like 0.1.")
0.1 is not a popular constant. And anyway, after 30 years, it should be well-known by now that it is written 0.1d0 or some comparable way.
Maybe I missed the question. I thought he wanted to catch:
PI = 3.141
that is, approximations to pi much less accurate than the precision of the variable. That would also apply to:
PI = 3.14159D0
again, much less than the desired precision. One could have a warning for such bad approximations to pi, maybe also for 2pi and pi/2, and maybe for sqrt(2) and sqrt(3), or log(10.0). Catch the mistakes of all those lazy programmers who don't look up constants to enough digits.
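A hypothetical sketch of such a warning, written as ordinary Fortran rather than as the compile-time check a compiler would actually perform (the thresholds are arbitrary):

program approx_check
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   real(real64), parameter :: pi_ref = 3.14159265358979323846_real64
   call warn_if_sloppy(3.141_real64, "PI = 3.141")
   call warn_if_sloppy(3.14159d0, "PI = 3.14159D0")
   call warn_if_sloppy(pi_ref, "full-precision PI")
contains
   subroutine warn_if_sloppy(x, label)
      real(real64), intent(in) :: x
      character(len=*), intent(in) :: label
      ! Flag values that are near pi but far less accurate than the
      ! precision of the kind they are stored in.
      if (abs(x - pi_ref) > 10.0_real64*epsilon(x)*pi_ref .and. &
          abs(x - pi_ref) < 0.01_real64) then
         print *, "warning: poor approximation to pi: ", label
      end if
   end subroutine warn_if_sloppy
end program approx_check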
4) I'm assuming that these constants are all mathematically based, so that their actual values do not change over time. (snip)
There was a recent thread on what value of pi people would like to have available, say in a module. <snip>
In addition to settling on which constants are available to the programmer, there are a couple of things that should be considered in the user interface.
(snip: points 1-4, quoted in full above)
In article <VfF1K.493064$7F2.260003@fx12.iad>, Ron Shepard <nospam@nowhere.org> writes:
4) I'm assuming that these constants are all mathematically based, so that their actual values do not change over time. If physical constants were to be included, ones that get refined over time with higher precision, then that also presents programming issues. (snip)
Note that, with time, more and more constants of nature have become defined quantities in the various revisions of the SI: not just the speed of light and Avogadro's number but also, now, Planck's constant.
There have been exchanges about what is measured and what is defined. The speed of light used to be a measured quantity, based on the definitions of the second and of the length of a meter. Now the length of a meter is the experimentally determined quantity, and the second and the speed of light are the defined quantities. Planck's constant is like that too: it was previously a measured quantity, and now it is a defined quantity. It isn't necessarily clear that our current definitions, and their associated values, will remain forever; they could change again in the future as different types of measurements become more refined (or even possible). There is also the odd feature that a particular constant can be defined, with no error bars, in one set of units, but become an experimentally determined value, with error bars, in another set of units.
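For reference, the constants fixed exactly by the 2019 SI revision could be collected in a module like this (the names are my own choice, and each stored value is of course the nearest real64 to the exact decimal):

module si_defining_constants
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   ! Values fixed exactly by the 2019 redefinition of the SI.
   real(real64), parameter :: c   = 299792458.0_real64     ! m/s
   real(real64), parameter :: h   = 6.62607015e-34_real64  ! J s
   real(real64), parameter :: e   = 1.602176634e-19_real64 ! C
   real(real64), parameter :: k_B = 1.380649e-23_real64    ! J/K
   real(real64), parameter :: N_A = 6.02214076e23_real64   ! 1/mol
end module si_defining_constants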