The cause MIGHT be the incorrect value in <float.h> of LDBL_MIN_EXP:
the M68K system has -16382, whereas test output from every other
system in our farm that supports an 80-bit IEEE 754 format has -16381.
This is correct. In the m68881 extended float format, denormals have an
exponent bias of 0x3fff, whereas in the i387 extended float format, the
bias is 0x3ffe. That means that a normalized number in m68881 extended
format can have a biased exponent of zero.
...
From the cited M68000 Family Programmer's Reference Manual, on page 1-23, in Table 1-6, Extended-Precision Real Format Summary, the biased
...
The biasing constant is 127 for the single-precision format, 1023 for
the double-precision format, and 16,383 for the double
extended-precision format.
...
...
Total memory format width (bits)      32     64      80     128
Exponent bias                       +127  +1023  +16383  +16383
...
The Intel IA-64 Application Developer's Architecture Guide, May 1999,
(Order Number: 245188-001) on page 5-1, in Table 5-1 has
...
Total memory format width (bits)      32     64      80     128
Exponent bias                       +127  +1023  +16383  +16383
Here is the output from Debian 11 on M68k (identical with both gcc-9
and gcc-10):
k = -16380 x = 0x8.0000000000000000p-16383 = 1.344841257244837403e-4931 = 0x0003_80000000_00000000
k = -16381 x = 0x8.0000000000000000p-16384 = 6.724206286224187013e-4932 = 0x0002_80000000_00000000
k = -16382 x = 0x8.0000000000000000p-16385 = 3.362103143112093506e-4932 = 0x0001_80000000_00000000
---------- begin subnormals ----------
k = -16383 x = 0x4.0000000000000000p-16386 = 8.405257857780233766e-4933 = 0x0000_40000000_00000000
k = -16384 x = 0x2.0000000000000000p-16386 = 4.202628928890116883e-4933 = 0x0000_20000000_00000000
k = -16385 x = 0x1.0000000000000000p-16386 = 2.101314464445058441e-4933 = 0x0000_10000000_00000000
Here's the output of your program from a Mac IIci running Debian SID
(using gcc version 9.2.1):
-----
$ cat /proc/cpuinfo
CPU: 68030
MMU: 68030
FPU: 68882
Clocking: 23.1MHz
BogoMips: 5.78
Calibration: 28928 loops
$ cc bug-float80.c
$ ./a.out
Addressing is big-endian
sizeof(long double) = 12
LDBL_MANT_DIG = 64
LDBL_MIN_EXP = -16382
LDBL_MIN = 0x8.0000000000000000p-16386 = 1.681051571556046753e-4932
k = -16381 x = 0xd.eadbeefcafefeed0p-16385 = 5.848974526544159967e-4932 = 0x0001_deadbeef_cafefeed
k = -16376 x = 0x8.0000000000000000p-16379 = 2.151746011591739844e-4930 = 0x0007_80000000_00000000
k = -16377 x = 0x8.0000000000000000p-16380 = 1.075873005795869922e-4930 = 0x0006_80000000_00000000
k = -16378 x = 0x8.0000000000000000p-16381 = 5.379365028979349610e-4931 = 0x0005_80000000_00000000
k = -16379 x = 0x8.0000000000000000p-16382 = 2.689682514489674805e-4931 = 0x0004_80000000_00000000
k = -16380 x = 0x8.0000000000000000p-16383 = 1.344841257244837403e-4931 = 0x0003_80000000_00000000
k = -16381 x = 0x8.0000000000000000p-16384 = 6.724206286224187013e-4932 = 0x0002_80000000_00000000
k = -16382 x = 0x8.0000000000000000p-16385 = 3.362103143112093506e-4932 = 0x0001_80000000_00000000
---------- begin subnormals ----------
k = -16383 x = 0x8.0000000000000000p-16386 = 1.681051571556046753e-4932 = 0x0000_80000000_00000000
k = -16384 x = 0x4.0000000000000000p-16386 = 8.405257857780233766e-4933 = 0x0000_40000000_00000000
k = -16385 x = 0x2.0000000000000000p-16386 = 4.202628928890116883e-4933 = 0x0000_20000000_00000000
k = -16386 x = 0x1.0000000000000000p-16386 = 2.101314464445058441e-4933 = 0x0000_10000000_00000000
k = -16387 x = 0x0.8000000000000000p-16386 = 1.050657232222529221e-4933 = 0x0000_08000000_00000000
k = -16388 x = 0x0.4000000000000000p-16386 = 5.253286161112646104e-4934 = 0x0000_04000000_00000000
k = -16389 x = 0x0.2000000000000000p-16386 = 2.626643080556323052e-4934 = 0x0000_02000000_00000000
...
At present, I have no other operating system than Debian 11 on M68K.
Web searches indicate that OpenBSD 5.1 ran on that CPU, but its
package archives have been deleted. NetBSD 9.2 has an ISO image
for M68K, but I have not yet successfully created a VM for it.
Suggestions for other O/Ses to try are welcome.
NetBSD runs on m68k systems; see http://www.netbsd.org. You could also
try an earlier version of Debian (3.0 or 4.0) on m68k. And you might
want to compare the musl libc to glibc; see https://wiki.musl-libc.org/functional-differences-from-glibc.html
$ cat /proc/cpuinfo
CPU: 68030
MMU: 68030
FPU: 68882
I wonder if that hardware should be expected to give the same result as
68040 hardware (?) Both QEMU and Aranym emulate the latter:
CPU: 68040
MMU: 68040
FPU: 68040
On 7/22/21 5:57 PM, Brad Boyer wrote:
On Thu, Jul 22, 2021 at 07:32:49PM +1000, Finn Thain wrote:
$ cat /proc/cpuinfo
CPU: 68030
MMU: 68030
FPU: 68882
I wonder if that hardware should be expected to give the same result as
68040 hardware (?) Both QEMU and Aranym emulate the latter:
CPU: 68040
MMU: 68040
FPU: 68040
The m68k PRM does document some minor differences between the 68881/68882 and the built-in FPU in the 68040 (other than the obvious unimplemented instructions in the 68040), but I don't think any of it would rise to
this level. They're almost entirely compatible. My first guess would be
an emulation bug. This is the sort of thing that would likely be easy to get wrong.
My apologies for not having any of my 68040 systems available for a test
on the real hardware. I'm not even sure if any of them still work.
Brad Boyer
flar@allandria.com
Attached are three results of running bug-float80.c on m68k hardware:
1) 68040, Centris 650, Debian SID, gcc 9.2.1
2) 68040, Centris 650, NetBSD 9.1, gcc 7.5.0
3) 68030, Mac SE/30, NetBSD 9.1, gcc 7.5.0
The bug-float80.c program doesn't compile in its current form on A/UX;
not only does stdint.h not exist there, but both Apple's C compiler and
an early gcc (2.7.2) reported syntax errors.
On Fri, 23 Jul 2021, Stan Johnson wrote:
On 7/22/21 5:57 PM, Brad Boyer wrote:
On Thu, Jul 22, 2021 at 07:32:49PM +1000, Finn Thain wrote:
$ cat /proc/cpuinfo
CPU: 68030
MMU: 68030
FPU: 68882
I wonder if that hardware should be expected to give the same result
68040 hardware (?) Both QEMU and Aranym emulate the latter:
CPU: 68040
MMU: 68040
FPU: 68040
The m68k PRM does document some minor differences between the 68881/68882
and the built-in FPU in the 68040 (other than the obvious unimplemented
instructions in the 68040), but I don't think any of it would rise to
this level. They're almost entirely compatible. My first guess would be
an emulation bug. This is the sort of thing that would likely be easy to
get wrong.
My apologies for not having any of my 68040 systems available for a test
on the real hardware. I'm not even sure if any of them still work.
Brad Boyer
flar@allandria.com
Attached are three results of running bug-float80.c on m68k hardware:
1) 68040, Centris 650, Debian SID, gcc 9.2.1
It appears that your Motorola 68040 result agrees with your Motorola 68882 result, as Brad predicted.
2) 68040, Centris 650, NetBSD 9.1, gcc 7.5.0
3) 68030, Mac SE/30, NetBSD 9.1, gcc 7.5.0
The NetBSD test results are in agreement, but they differ from Linux. I wonder why?
The bug-float80.c program doesn't compile in its current form on A/UX;
not only does stdint.h not exist there, but both Apple's C compiler and
an early gcc (2.7.2) reported syntax errors.
The program can probably be ported to System V Release 2 without too much pain. You'll have to drop stdint.h. You may need to include limits.h. And
you may need to build it with "gcc -D__m68k__ bug-float80.c".
Debian/m68k 3 "woody" has gcc 2.95.4, and it fails like this:
sh-2.05a# cc -D__m68k__ bug-float80.c
bug-float80.c: In function `main':
bug-float80.c:45: hexadecimal floating constant has no exponent
bug-float80.c:45: missing white space after number `0x0.deadbee'
bug-float80.c:45: parse error before `cafefeedp'
So I think you'll want to start with a patch like this:
--- a/bug-float80.c 2021-07-22 19:08:30.000000000 +1000
+++ b/bug-float80.c 2021-07-24 23:40:27.000000000 +1000
@@ -13,9 +13,10 @@
***********************************************************************/
#include <float.h>
-#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
+#include <string.h>
+#include <limits.h>
#if defined(__m68k__)
typedef long double __float80;
@@ -28,7 +29,7 @@
{
__float80 x;
int k, big_endian;
- union { long double v; uint16_t i[6]; uint32_t j[3]; } u;
+ union { long double v; unsigned short i[6]; unsigned long j[3]; } u;
u.v = 0x1p0;
big_endian = (u.i[4] == 0);
@@ -42,7 +43,7 @@
PRINTF("LDBL_MIN = %0.16La = %0.19Lg\n", (long double)LDBL_MIN, (long double)LDBL_MIN);
PRINTF("\n");
- x = 0x0.deadbeefcafefeedp-16381L;
+ memcpy(&x, "\x00\x01\x00\x00\xDE\xAD\xBE\xEF\xCA\xFE\xFE\xED", sizeof(x));
u.v = x;
k = -16381;
BTW, limits.h in A/UX 3.0.1 has this:
#if !defined(DBL_MIN)
/* Minimum normalised double */
#ifndef __STDC__
# define DBL_MIN (2.2250738585072018e-308)
#else
# define DBL_MIN (2.2250738585072014e-308)
#endif
#endif
However, the LDBL_* definitions seem to be independent of other macros.
The results from your bug-float80.c program demonstrate two discrepancies: results from NetBSD and Linux (running on Motorola processors) are inconsistent, as are results from Aranym and QEMU (with Linux guests). Do
you know of any official bug reports about these two issues?
On Jul 21 2021, Nelson H. F. Beebe wrote:
The Intel IA-64 Application Developer's Architecture Guide, May 1999, (Order Number: 245188-001) on page 5-1, in Table 5-1 has
...
Total memory format width (bits) 32 64 80 128
Exponent bias +127 +1023 +16383 +16383
Those are the biases for normalized numbers. What does it say about
denormalized numbers? Note that the i387 format does not allow for a
biased exponent of zero when the explicit integer bit is one, unlike the
m68881 format.
Here, the number has been divided by 4 instead of 2. The printf output
is correct. This is a QEMU bug in the multiplication with subnormals.
See the discussion in the MPFR mailing-list:
https://sympa.inria.fr/sympa/arc/mpfr/2022-12/msg00036.html
I run a large farm of physical and virtual machines that we use for
software testing. We have multiple versions of most of the major
operating systems, covering the major CPU families of the past 30
years, including M68K.
In testing some numerical software on Debian 11 on M68k (emulated by
QEMU 4.2.1), I discovered that 80-bit subnormals are printed
incorrectly: they are exactly HALF their correct values.
A test program is provided below, and a snippet of its identical and
correct output on x86_64 and IA-64 (Itanium) physical hardware looks
like this around the transition from tiny normal numbers to subnormal numbers:
k = -16380 x = 0x8.0000000000000000p-16383 = 1.344841257244837403e-4931 = 0x0003_80000000_00000000
k = -16381 x = 0x8.0000000000000000p-16384 = 6.724206286224187013e-4932 = 0x0002_80000000_00000000
k = -16382 x = 0x8.0000000000000000p-16385 = 3.362103143112093506e-4932 = 0x0001_80000000_00000000
---------- begin subnormals ----------
k = -16383 x = 0x4.0000000000000000p-16385 = 1.681051571556046753e-4932 = 0x0000_40000000_00000000
k = -16384 x = 0x2.0000000000000000p-16385 = 8.405257857780233766e-4933 = 0x0000_20000000_00000000
k = -16385 x = 0x1.0000000000000000p-16385 = 4.202628928890116883e-4933 = 0x0000_10000000_00000000
Here is the output from Debian 11 on M68k (identical with both gcc-9
and gcc-10):
k = -16380 x = 0x8.0000000000000000p-16383 = 1.344841257244837403e-4931 = 0x0003_80000000_00000000
k = -16381 x = 0x8.0000000000000000p-16384 = 6.724206286224187013e-4932 = 0x0002_80000000_00000000
k = -16382 x = 0x8.0000000000000000p-16385 = 3.362103143112093506e-4932 = 0x0001_80000000_00000000
---------- begin subnormals ----------
k = -16383 x = 0x4.0000000000000000p-16386 = 8.405257857780233766e-4933 = 0x0000_40000000_00000000