Conversion between the textual decimal representation and the internal
binary representation is not easy. I would be especially surprised if any
conversion routine used all 722 digits in the process.
Are there cases where changing the 722nd digit will change the value?
On Thursday, October 13, 2022 at 10:40:30 AM UTC-7, gah4 wrote:
Conversion between the textual decimal representation and the internal
binary representation is not easy. I would be especially surprised if any
conversion routine used all 722 digits in the process.
128-bit IEEE is even more fun, but I don't know of a Fortran compiler that has a limit on digits that would preclude exact conversions.
The following program gives an unexpected result when compiled and
executed with gfortran on Windows:
program test
!
use :: iso_fortran_env
!
implicit none
!
real(real64), parameter :: SUB1 = real(z'1', real64)
real(real64), parameter :: SUB2 = real(z'2', real64)
!
character(len=1000) :: buff
real(real64) :: got
!
! Inexact representation of the smallest subnormal number:
buff = '5e-324'
read (buff, *) got
print *, 'Expected:', SUB1, ', got:', got, ',', SUB1 == got
!
! Inexact representation of the second smallest subnormal number.
buff = '9e-324'
read (buff, *) got
print *, 'Expected:', SUB2, ', got:', got, ',', SUB2 == got
!
! Exact representation of the number which is exactly halfway
! between the smallest and the second smallest subnormal number
! 2**(-1074) + 2**(-1075). This number should be rounded to the
! second smallest subnormal number according to the default IEEE
! 754 rounding rules:
buff = '7.41098468761869816264853189302332058547589703921&
&4871466383785237510132609053131277979497545424539&
&8856969484704316857659638998506553390969459816219&
&4016172817189451069785467106791768725751773473155&
&5330779540854980960845750095811137303474765809687&
&1009590975442271004757307809711118935784838675653&
&9987835030152280559340465937397917907387238682993&
&9581848166016912201945649993128979841136206248449&
&8678713572180352209017023903285791732520220528974&
&0208029068540216066123755499834026713000358124864&
&7904138574340187552090159017259254714629617513415&
&9774938718574737870961645638908718119841271673056&
&0170454930047052695901657637768849082679869725733&
&6652176556794107250876433756084600398490497214911&
&7463085539556354188641513168478436313080237596295&
&773983001708984375e-324'
read (buff, *) got
print *, 'Expected:', SUB2, ', got:', got, ',', SUB2 == got
!
end program test
I tried the program on Windows 7 with gfortran 11.2.0 and on
Windows 10 with gfortran 12.2.0. Both are from MSYS2/MinGW64
and the results are identical:
Expected: 4.9406564584124654E-324 , got: 4.9406564584124654E-324, T
Expected: 9.8813129168249309E-324 , got: 9.8813129168249309E-324, T
Expected: 9.8813129168249309E-324 , got: 2.2250738585072014E-308, F
The third print should give the same result as the second one.
The result looks like the smallest normal double precision number,
which is not quite correct.
On Windows 10 with the Intel ifort 64-Bit compiler version 2021.5.0
Build 20211109_000000:
Expected: 4.940656458412465E-324 , got: 4.940656458412465E-324, T
Expected: 9.881312916824931E-324 , got: 9.881312916824931E-324, T
Expected: 9.881312916824931E-324 , got: 9.881312916824931E-324, T
Can someone please test the program on Linux to verify that this
is not a Windows specific problem. Is this a known issue?
Regards
--
Thomas
On Friday, October 14, 2022 at 1:42:40 AM UTC+13, Thomas Schnurrenberger wrote:
(quoted program and results snipped)
On my Linux Ubuntu system, after putting in this line after the declarations:
print *, 'version = ',compiler_version()
I got these results with gfortran and ifort.
version = GCC version 12.1.0
Expected: 4.9406564584124654E-324 , got: 4.9406564584124654E-324 , T
Expected: 9.8813129168249309E-324 , got: 9.8813129168249309E-324 , T
Expected: 9.8813129168249309E-324 , got: 9.8813129168249309E-324 , T
version = Intel(R) Fortran Intel(R) 64 Compiler Classic for applications
running on Intel(R) 64, Version 2021.6.0 Build 20220226_000000
Expected: 4.940656458412465E-324 , got: 4.940656458412465E-324 , T
Expected: 9.881312916824931E-324 , got: 9.881312916824931E-324 , T
Expected: 9.881312916824931E-324 , got: 9.881312916824931E-324 , T
On Thursday, October 13, 2022 at 5:42:40 AM UTC-7, Thomas Schnurrenberger wrote:
Expected: 4.9406564584124654E-324 , got: 4.9406564584124654E-324, T
Expected: 9.8813129168249309E-324 , got: 9.8813129168249309E-324, T
Expected: 9.8813129168249309E-324 , got: 2.2250738585072014E-308, F
The third print should give the same result as the second one.
The result looks like the smallest normal double precision number
which is not quiet correct.
I checked your decimal numbers, and they are accurate. But the results *are* correct -- when the
default "round to nearest" mode is in effect, and the exact value is exactly between two representable
numbers, IEEE-754 rounds to the *even* one -- i.e., the one whose least-significant bit is zero. And
in this case, that's z'2', not z'1'.
This is the output of the third print:
Expected: 9.8813129168249309E-324 , got: 2.2250738585072014E-308, F
The result is in my opinion not correct, and it is not a "simple"
rounding error. If you change the last digit of the input number from
5 to 4, gfortran returns the correct result, which is z'1'. If you
change the last digit from 5 to 6, gfortran also returns the correct
result, which is z'2'. It is only this exact number which gives an
incorrect result. What do you expect as the result when reading the
large input number?
On Thu, 13 Oct 2022 14:42:37 +0200, Thomas Schnurrenberger wrote:
The following program gives an unexpected result when compiled and
executed with gfortran on Windows:
program test
!
use :: iso_fortran_env
!
implicit none
!
real(real64), parameter :: SUB1 = real(z'1', real64)
real(real64), parameter :: SUB2 = real(z'2', real64)
!
Why do you think that there is a problem? REAL64 from
ISO_FORTRAN_ENV does not mean IEEE 754 binary64 type.
It means the type occupies 64 bits. That's it. In fact,
the type may not support subnormals at all.
If you want IEEE binary64, then use the IEEE 754 facilities
available in modern Fortran.
On 14.10.2022 00:43, Steven G. Kargl wrote:
On Thu, 13 Oct 2022 14:42:37 +0200, Thomas Schnurrenberger wrote:
(quoted program snipped)
Why do you think that there is a problem? REAL64 from
ISO_FORTRAN_ENV does not mean IEEE 754 binary64 type.
It means the type occupies 64 bits. That's it. In fact,
the type may not support subnormals at all.
If you want IEEE binary64, then use the IEEE 754 facilities
available in modern Fortran.
You are of course right. The following modified program is an
attempt to use the IEEE 754 facilities correctly:
Results from running the program on Windows 10
Compiler version = GCC version 12.2.0...
-----...
1: Expected: 4.9406564584124654E-324, got: 4.9406564584124654E-324, T
2: Expected: 9.8813129168249309E-324, got: 9.8813129168249309E-324, T
3: Expected: 9.8813129168249309E-324, got: 2.2250738585072014E-308, F
Compiler version = Intel(R) Fortran Intel(R) 64 Compiler Classic for
applications running on Intel(R) 64, Version 2021.7.0 Build 20220726_000000
-----
1: Expected: 4.940656458412465E-324, got: 4.940656458412465E-324, T
2: Expected: 9.881312916824931E-324, got: 9.881312916824931E-324, T
3: Expected: 9.881312916824931E-324, got: 9.881312916824931E-324, T
I would expect ifort and gfortran to give the same results. I'm very
grateful for any hint if I am doing something wrong.
On Thursday, October 13, 2022 at 5:42:40 AM UTC-7, Thomas Schnurrenberger wrote:
The following program gives an unexpected result when compiled and
executed with gfortran on Windows:
Reading the Wikipedia article:
https://en.wikipedia.org/wiki/IEEE_754#Character_representation
suggests that the standard requires one be able to take an internal binary value,
convert it to printable decimal with sufficient digits, and convert back with proper
rounding to the original binary value.
It does not seem to require 722-digit values to convert exactly.
But maybe that page doesn't have everything.
Is there a place where it says that 722 digits are required?
...
This eventually gets reduced to a library function from the OS. If you
look into gcc/libgfortran/libgfortran.h, you find
#ifdef __MINGW32__
extern float __strtof (const char *, char **);
#define gfc_strtof __strtof
extern double __strtod (const char *, char **);
#define gfc_strtod __strtod
extern long double __strtold (const char *, char **);
#define gfc_strtold __strtold
#else
#define gfc_strtof strtof
#define gfc_strtod strtod
#define gfc_strtold strtold
#endif
On 17.10.2022 19:48, Steven G. Kargl wrote:
(quoted post snipped)
That was the crucial hint, thank you very much. A short test with a
C program showed indeed that the __strtod() function is responsible for
the faulty result. After some investigation on the internet, I'm coming
to the following conclusion:
The __strto(x) functions are used by the MinGW/MinGW64 projects as
replacements for faulty implementations in the older Microsoft C runtime
libraries. Newer Microsoft C runtime libraries, such as the "Universal
CRT", have correctly working implementations of these functions and need
no replacement. It shouldn't be necessary to link directly to the
replacement functions, because the MinGW/MinGW64 CRTs decide themselves
whether the replacement is needed.
I would suggest that the gfortran runtime library not link directly to
the replacement functions. That means the preprocessor definitions for
__MINGW32__ can/should be removed.
I am not an expert in this area, so I would be very grateful if my
conclusions could be verified by someone more familiar with it. Thanks
again for your help.
Regards