<div dir="auto"><span style="white-space:pre-wrap">Bests,</span></div><div dir="auto"><span style="white-space:pre-wrap">Giulio</span></div><div dir="auto"><br></div></div>[1] <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=981030#39"target="_blank" rel="noreferrer">https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=981030#39</a><div dir="auto"><div dir="auto">[2] <a href="https://gcc.gnu.org/onlinedocs/gcc-10.3.0/gcc/x86-Options.html#x86-Options" target="_blank" rel="noreferrer">
On Thu, Nov 25, 2021 at 01:13:20PM +0100, Giulio Paci wrote:
> The double values refer to timing information. The specific format,
> known as CTM, stores times in seconds with decimal fractions (e.g.
> "30.66" seconds) from the beginning of the stream.
> The failing tool reads this information into double variables.

Sounds like it just shouldn't read this data into a float type but use
some fixed-point data type instead.
On Wed, Nov 24, 2021 at 06:38:07PM +0100, Giulio Paci wrote:
> Dear mentors,
> while updating the SCTK package I enabled the execution of the test
> suite, which was previously disabled. The tests work fine on the
> x86_64 architecture, but a couple of them fail on i386.
> After investigation [1] I found out that the tests fail because they
> rely on the assumptions that, when a and b have the same double value:
> 1) "a < b" is false;
> 2) "a - b" is 0.0.

What do they actually test, and why do they use these assumptions?

> Moreover I am still wondering if the compiler behavior is correct in
> this case and why it is so unstable.

It's correct when you don't care about the amount of precision, and it's
unstable for the reasons described in gcc(1) for the options you
mention [2].
Giulio Paci wrote:
> 3) what is the most appropriate solution.

As I understand it, floating point values should not be compared
without some kind of accuracy/precision factor. Zero idea about the
best reference for how to do it correctly, but here is a random one: