• Re: contradiction about the INFINITY macro

    From Keith Thompson@21:1/5 to Vincent Lefevre on Wed Sep 29 19:05:38 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In ISO C99:TC3 to C17, 7.12p4:

    The macro INFINITY expands to a constant expression of type float
    representing positive or unsigned infinity, if available; else to a
    positive constant of type float that overflows at translation time.

    Consider the "else" case. It is said that INFINITY expands to a
    constant and that it overflows, so that it is not in the range of representable values of float.

    But in 6.4.4p2:

    Each constant shall have a type and the value of a constant shall
    be in the range of representable values for its type.

    which would imply that INFINITY expands to a value in the range of representable values of float, contradicted by 7.12p4.

    Same issue in the current C2x draft N2596 (7.12p7 and 6.4.4p2).

    6.4.4p2 is a constraint. It doesn't make it impossible to write code
    that violates that constraint.

    If I understand correctly, it means that if an infinite value is not
    available, then a program that refers to the INFINITY macro (in a
    context where it's treated as a floating-point expression) violates that constraint, resulting in a required diagnostic.

    In fact I wrote the previous paragraph before I read the footnote on the definition of INFINITY (N1570 7.12p4, footnote 229):

    In this case, using INFINITY will violate the constraint in 6.4.4
    and thus require a diagnostic.

    There is no contradiction.

    (I wonder if it would have been more useful to require that INFINITY not
    be defined unless it can be defined as an actual infinity, but I haven't
    given it a lot of thought.)
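
    To make the scenario concrete, here is a minimal sketch (hypothetical;
    an implementation without infinities might define the macro along these
    lines, and any use of it then draws the required diagnostic):

        /* hypothetical <math.h> fragment on a target without infinities */
        #define INFINITY 9e99999f  /* overflows float at translation time */

        float f = INFINITY;  /* violates the constraint in 6.4.4,
                                so a diagnostic is required */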

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Vincent Lefevre@21:1/5 to All on Thu Sep 30 01:47:23 2021
    In ISO C99:TC3 to C17, 7.12p4:

    The macro INFINITY expands to a constant expression of type float
    representing positive or unsigned infinity, if available; else to a
    positive constant of type float that overflows at translation time.

    Consider the "else" case. It is said that INFINITY expands to a
    constant and that it overflows, so that it is not in the range of
    representable values of float.

    But in 6.4.4p2:

    Each constant shall have a type and the value of a constant shall
    be in the range of representable values for its type.

    which would imply that INFINITY expands to a value in the range of representable values of float, contradicted by 7.12p4.

    Same issue in the current C2x draft N2596 (7.12p7 and 6.4.4p2).

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Ben Bacarisse@21:1/5 to Vincent Lefevre on Thu Sep 30 03:20:23 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In ISO C99:TC3 to C17, 7.12p4:

    The macro INFINITY expands to a constant expression of type float
    representing positive or unsigned infinity, if available; else to a
    positive constant of type float that overflows at translation time.

    Consider the "else" case. It is said that INFINITY expands to a
    constant and that it overflows, so that it is not in the range of representable values of float.

    But in 6.4.4p2:

    Each constant shall have a type and the value of a constant shall
    be in the range of representable values for its type.

    which would imply that INFINITY expands to a value in the range of representable values of float, contradicted by 7.12p4.

    Right. But there is a footnote that clarifies matters:

    "In this case, using INFINITY will violate the constraint in 6.4.4 and
    thus require a diagnostic."

    so any program using INFINITY on such an implementation has undefined
    behaviour, because the definition is intended to violate 6.4.4. I agree
    that there should be a better way to specify it, but expanding to a
    constant that violates the constraints on such constants is clumsy but
    reasonably clear.

    --
    Ben.

  • From Vincent Lefevre@21:1/5 to Keith Thompson on Thu Sep 30 11:24:08 2021
    In article <87pmsqizrh.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In ISO C99:TC3 to C17, 7.12p4:

    The macro INFINITY expands to a constant expression of type float
    representing positive or unsigned infinity, if available; else to a
    positive constant of type float that overflows at translation time.

    Consider the "else" case. It is said that INFINITY expands to a
    constant and that it overflows, so that it is not in the range of representable values of float.

    But in 6.4.4p2:

    Each constant shall have a type and the value of a constant shall
    be in the range of representable values for its type.

    which would imply that INFINITY expands to a value in the range of representable values of float, contradicted by 7.12p4.

    Same issue in the current C2x draft N2596 (7.12p7 and 6.4.4p2).

    6.4.4p2 is a constraint. It doesn't make it impossible to write code
    that violates that constraint.

    Yes, but the issue here is that the standard mandates the implementation
    to violate a constraint, which is rather different from the case where a
    user writes buggy code.

    If I understand correctly, it means that if an infinite value is not available, then a program that refers to the INFINITY macro (in a
    context where it's treated as a floating-point expression) violates that constraint, resulting in a required diagnostic.

    I think the consequence is more than a diagnostic (which may yield a compilation failure in practice, BTW): AFAIK, the standard does not
    give a particular definition for "overflows at translation time",
    which would make it undefined behavior as usual for overflows.

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Keith Thompson@21:1/5 to Vincent Lefevre on Thu Sep 30 08:38:24 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <87pmsqizrh.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In ISO C99:TC3 to C17, 7.12p4:

    The macro INFINITY expands to a constant expression of type float
    representing positive or unsigned infinity, if available; else to a
    positive constant of type float that overflows at translation time.

    Consider the "else" case. It is said that INFINITY expands to a
    constant and that it overflows, so that it is not in the range of
    representable values of float.

    But in 6.4.4p2:

    Each constant shall have a type and the value of a constant shall
    be in the range of representable values for its type.

    which would imply that INFINITY expands to a value in the range of
    representable values of float, contradicted by 7.12p4.

    Same issue in the current C2x draft N2596 (7.12p7 and 6.4.4p2).

    6.4.4p2 is a constraint. It doesn't make it impossible to write code
    that violates that constraint.

    Yes, but the issue here is that the standard mandates the implementation
    to violate a constraint, which is rather different from the case where a
    user writes buggy code.

    No, it doesn't force the implementation to violate a constraint. It
    says that a *program* that uses the INFINITY macro violates a constraint
    (if the implementation doesn't support infinities).

    Constraints apply to programs, not to implementations.

    It means that if a program assumes that INFINITY is meaningful, and it's compiled for a target system where it isn't, a diagnostic is guaranteed.
    And again, it might have made more sense to say that INFINITY is not
    defined for such implementations (as is done for the NAN macro), but
    perhaps there was existing practice.

    Here's what the C99 Rationale says:

    What is INFINITY on machines that do not support infinity? It should
    be defined along the lines of: #define INFINITY 9e99999f, where
    there are enough 9s in the exponent so that the value is too large
    to represent as a float, hence, violates the constraint of 6.4.4
    Constants. In addition, the number classification macro FP_INFINITE
    should not be defined. That allows an application to test for the
    existence of FP_INFINITE as a safe way to determine if infinity is
    supported; this is the feature test macro for support for infinity.

    The problem with this is that the standard itself doesn't say that
    FP_INFINITE is defined conditionally.
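
    In code, the feature test the Rationale envisions would be something
    like this (a sketch; as just noted, the standard does not actually make
    FP_INFINITE conditional, so in practice the #else branch is never taken):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
        #ifdef FP_INFINITE
            puts("FP_INFINITE defined: assume infinity is supported");
        #else
            puts("FP_INFINITE not defined: no infinity");
        #endif
            return 0;
        }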

    If I understand correctly, it means that if an infinite value is not
    available, then a program that refers to the INFINITY macro (in a
    context where it's treated as a floating-point expression) violates that
    constraint, resulting in a required diagnostic.

    I think the consequence is more than a diagnostic (which may yield a compilation failure in practice, BTW): AFAIK, the standard does not
    give a particular definition for "overflows at translation time",
    which would make it undefined behavior as usual for overflows.

    The meaning seems clear enough to me. It's a floating constant whose mathematical value is outside the range of float, such as 9e99999f.
    And yes, evaluating it (if the mandatory diagnostic does not cause
    compilation to fail) causes undefined behavior. I think the intent is
    that a portable program should check whether infinities are supported
    before trying to evaluate INFINITY, but that intent is not well
    reflected in the standard.

    I don't think I have access to an implementation that doesn't support infinities, so I don't know how this is handled in practice. Given the
    near universal adoption of IEEE floating-point, it's probably reasonably
    safe to assume that infinities are supported unless your program needs
    to be painfully portable.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Vincent Lefevre@21:1/5 to Keith Thompson on Fri Oct 1 09:05:38 2021
    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    No, it doesn't force the implementation to violate a constraint. It
    says that a *program* that uses the INFINITY macro violates a constraint
    (if the implementation doesn't support infinities).

    Then this means that the C standard defines something the user must
    not use (except when __STDC_IEC_559__ is defined, since in that case
    INFINITY is guaranteed to expand to a true infinity).

    Constraints apply to programs, not to implementations.

    This is related, as programs will be transformed by an implementation.

    It means that if a program assumes that INFINITY is meaningful, and it's compiled for a target system where it isn't, a diagnostic is guaranteed.
    And again, it might have made more sense to say that INFINITY is not
    defined for such implementations (as is done for the NAN macro), but
    perhaps there was existing practice.

    Yes, currently there is no fallback (short of things like
    autoconf tests).

    Shouldn't the standard be changed to make INFINITY conditionally
    defined (if not required to expand to a true infinity)?
    This should not break existing programs.

    Here's what the C99 Rationale says:

    What is INFINITY on machines that do not support infinity? It should
    be defined along the lines of: #define INFINITY 9e99999f, where
    there are enough 9s in the exponent so that the value is too large
    to represent as a float, hence, violates the constraint of 6.4.4
    Constants. In addition, the number classification macro FP_INFINITE
    should not be defined. That allows an application to test for the
    existence of FP_INFINITE as a safe way to determine if infinity is
    supported; this is the feature test macro for support for infinity.

    The problem with this is that the standard itself doesn't say that FP_INFINITE is defined conditionally.

    Even if FP_INFINITE could be defined conditionally, this would not
    imply that INFINITY is usable, since for instance, long double may
    have an infinity but not float.

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Keith Thompson@21:1/5 to Vincent Lefevre on Fri Oct 1 12:20:06 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    [...]
    Shouldn't the standard be changed to make INFINITY conditionally
    defined (if not required to expand to a true infinity)?
    This should not break existing programs.

    I agree. NAN is conditionally defined "if and only if the
    implementation supports quiet NaNs for the float type". I have no
    idea why the same wasn't done for INFINITY -- unless, as I mentioned
    upthread, there was existing practice. INFINITY was introduced
    in C99. Perhaps there were pre-C99 implementations that defined
    INFINITY as an extension in the way that's now specified in the
    standard. That's just speculation, and I still think making it
    conditional would have made more sense.

    [...]

    Even if FP_INFINITE could be defined conditionally, this would not
    imply that INFINITY is usable, since for instance, long double may
    have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I think the implication is that it assumes either all floating types have NaNs
    and/or infinities, or none do. That might be a valid assumption. (The
    Alpha supports both VAX and IEEE floating-point, and I don't think VAX
    FP supports infinities or NaNs, but I don't think an implementation
    would use, for example, VAX FP for float and IEEE for double.)

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Jakob Bohm@21:1/5 to Vincent Lefevre on Fri Oct 1 22:55:07 2021
    On 2021-10-01 11:05, Vincent Lefevre wrote:
    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    No, it doesn't force the implementation to violate a constraint. It
    says that a *program* that uses the INFINITY macro violates a constraint
    (if the implementation doesn't support infinities).

    Then this means that the C standard defines something the user must
    not use (except when __STDC_IEC_559__ is defined, since in that case
    INFINITY is guaranteed to expand to a true infinity).

    Constraints apply to programs, not to implementations.

    This is related, as programs will be transformed by an implementation.

    It means that if a program assumes that INFINITY is meaningful, and it's
    compiled for a target system where it isn't, a diagnostic is guaranteed.
    And again, it might have made more sense to say that INFINITY is not
    defined for such implementations (as is done for the NAN macro), but
    perhaps there was existing practice.

    Yes, currently there is no fallback (short of things like
    autoconf tests).

    Shouldn't the standard be changed to make INFINITY conditionally
    defined (if not required to expand to a true infinity)?
    This should not break existing programs.

    The fallback is to test for defined(FP_INFINITE); see below.


    Here's what the C99 Rationale says:

    What is INFINITY on machines that do not support infinity? It should
    be defined along the lines of: #define INFINITY 9e99999f, where
    there are enough 9s in the exponent so that the value is too large
    to represent as a float, hence, violates the constraint of 6.4.4
    Constants. In addition, the number classification macro FP_INFINITE
    should not be defined. That allows an application to test for the
    existence of FP_INFINITE as a safe way to determine if infinity is
    supported; this is the feature test macro for support for infinity.

    The problem with this is that the standard itself doesn't say that
    FP_INFINITE is defined conditionally.

    Even if FP_INFINITE could be defined conditionally, this would not
    imply that INFINITY is usable, since for instance, long double may
    have an infinity but not float.


    I don't know if there is a set of similar macros for double and long
    double types buried somewhere in the standard.


    Enjoy

    Jakob
    --
    Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
    Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
    This public discussion message is non-binding and may contain errors.
    WiseMo - Remote Service Management for PCs, Phones and Embedded

  • From Keith Thompson@21:1/5 to Jakob Bohm on Fri Oct 1 14:26:09 2021
    Jakob Bohm <jb-usenet@wisemo.com.invalid> writes:
    On 2021-10-01 11:05, Vincent Lefevre wrote:
    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    No, it doesn't force the implementation to violate a constraint. It
    says that a *program* that uses the INFINITY macro violates a constraint
    (if the implementation doesn't support infinities).
    Then this means that the C standard defines something the user must
    not use (except when __STDC_IEC_559__ is defined, since in that case
    INFINITY is guaranteed to expand to a true infinity).

    Constraints apply to programs, not to implementations.
    This is related, as programs will be transformed by an
    implementation.

    It means that if a program assumes that INFINITY is meaningful, and it's
    compiled for a target system where it isn't, a diagnostic is guaranteed.
    And again, it might have made more sense to say that INFINITY is not
    defined for such implementations (as is done for the NAN macro), but
    perhaps there was existing practice.
    Yes, currently there is no fallback (short of things like
    autoconf tests).
    Shouldn't the standard be changed to make INFINITY conditionally
    defined (if not required to expand to a true infinity)?
    This should not break existing programs.

    The fallback is to test for defined(FP_INFINITE); see below.


    Here's what the C99 Rationale says:

    What is INFINITY on machines that do not support infinity? It should
    be defined along the lines of: #define INFINITY 9e99999f, where
    there are enough 9s in the exponent so that the value is too large
    to represent as a float, hence, violates the constraint of 6.4.4
    Constants. In addition, the number classification macro FP_INFINITE
    should not be defined. That allows an application to test for the
    existence of FP_INFINITE as a safe way to determine if infinity is
    supported; this is the feature test macro for support for infinity.

    The problem with this is that the standard itself doesn't say that
    FP_INFINITE is defined conditionally.
    Even if FP_INFINITE could be defined conditionally, this would not
    imply that INFINITY is usable, since for instance, long double may
    have an infinity but not float.


    I don't know if there is a set of similar macros for double and long
    double types buried somewhere in the standard.

    The INFINITY and NAN macros are defined only for float. I think the
    assumption is that either all floating-point types support infinities,
    or none do, and likewise for NaNs. On the other hand, fpclassify() is a
    macro that can be applied to an expression of any floating-point type.

    The problem, as I said, is that the standard doesn't say that
    FP_INFINITE is conditionally defined. Since they're specified in the
    same section that explicitly says that NAN is conditionally defined, I
    think the only reasonable reading of the standard's wording is that
    FP_INFINITE is defined whether infinities are supported or not.
    If they're not, it just means that fpclassify() will never return
    FP_INFINITE and isinf() always returns 0.
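
    A runtime probe along those lines might look like this (a sketch; note
    that it only reports on HUGE_VAL itself, which is not required to be an
    infinity even where infinities exist):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* If infinities are unsupported, isinf() always returns 0. */
            if (isinf(HUGE_VAL))
                puts("HUGE_VAL is an infinity");
            else
                puts("HUGE_VAL is finite");
            return 0;
        }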

    The author(s) of the Rationale obviously *thought* that FP_INFINITE is conditionally defined. They were mistaken. The Rationale itself makes
    it clear that the Standard, not the Rationale, is what defines the
    language.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Vincent Lefevre@21:1/5 to Keith Thompson on Mon Oct 4 09:26:19 2021
    In article <877dewimc9.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Even if FP_INFINITE could be defined conditionally, this would not
    imply that INFINITY is usable, since for instance, long double may
    have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I think the implication is that it assumes either all floating types have NaNs
    and/or infinities, or none do. That might be a valid assumption.

    But the standard doesn't say that explicitly. It even just says
    "if and only if the implementation supports quiet NaNs for the
    float type". If the intent were to have NaN support for all the
    FP types or none, why doesn't it say "... for the floating types"
    instead of "... for the float type"?

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Keith Thompson@21:1/5 to Vincent Lefevre on Mon Oct 4 10:34:18 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <877dewimc9.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Even if FP_INFINITE could be defined conditionally, this would not
    imply that INFINITY is usable, since for instance, long double may
    have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I think the
    implication is that it assumes either all floating types have NaNs
    and/or infinities, or none do. That might be a valid assumption.

    But the standard doesn't say that explicitly. It even just says
    "if and only if the implementation supports quiet NaNs for the
    float type". If the intent were to have NaN support for all the
    FP types or none, why doesn't it say "... for the floating types"
    instead of "... for the float type"?

    Since the NAN macro is of type float (if it's defined), it only makes
    sense to define it that way. Presumably if an implementation had
    NaN for float but not for double, it would define NAN.

    IMHO it would have been better if the assumption that all floating
    types behave similarly had been stated explicitly, and perhaps if there
    were three NAN macros for the three floating-point types.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Geoff Clare@21:1/5 to Keith Thompson on Tue Oct 5 13:53:06 2021
    Keith Thompson wrote:

    Presumably if an implementation had
    NaN for float but not for double, it would define NAN.

    IMHO it would have been better if the assumption that all floating
    types behave similarly had been stated explicitly, and perhaps if there
    were three NAN macros for the three floating-point types.

    NaN support for each floating type can be queried, and a NaN
    obtained if supported, by calling nanf(), nan(), and nanl().
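
    For example (a sketch, assuming C99; per 7.12.11.2, nan("") returns a
    quiet NaN if quiet NaNs are supported for double, and 0.0 otherwise):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double d = nan("");
            if (d != d)  /* only a NaN compares unequal to itself */
                puts("double has quiet NaNs");
            else
                puts("no quiet NaN for double");
            return 0;
        }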

    --
    Geoff Clare <netnews@gclare.org.uk>

  • From Vincent Lefevre@21:1/5 to Geoff Clare on Wed Oct 6 00:12:38 2021
    In article <ieut2i-07m.ln1@ID-313840.user.individual.net>,
    Geoff Clare <geoff@clare.see-my-signature.invalid> wrote:

    NaN support for each floating type can be queried, and a NaN
    obtained if supported, by calling nanf(), nan(), and nanl().

    But they are not required to be constant expressions, while
    the NAN macro expands to a constant expression.
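
    The difference shows up in static initializers, for instance (sketch):

        #include <math.h>

        static float f = NAN;  /* OK: NAN is a constant expression */
        /* static double d = nan("");  -- invalid: a function call is not
           permitted in the initializer of an object with static storage
           duration */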

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Tim Rentsch@21:1/5 to Keith Thompson on Thu Oct 7 07:05:35 2021
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <877dewimc9.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Even if FP_INFINITE could be defined conditionally, this would
    not imply that INFINITY is usable, since for instance, long
    double may have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I
    think the implication is that it assumes either all floating types
    have NaNs and/or infinities, or none do. That might be a valid
    assumption.

    But the standard doesn't say that explicitly. It even just says
    "if and only if the implementation supports quiet NaNs for the
    float type". If the intent were to have NaN support for all the
    FP types or none, why doesn't it say "... for the floating types"
    instead of "... for the float type"?

    Since the NAN macro is of type float (if it's defined), it only makes
    sense to define it that way. Presumably if an implementation had
    NaN for float but not for double, it would define NAN.

    If float has a NaN then so do double and long double, because of
    6.2.5 paragraph 10. Similarly for infinity (or infinities).

  • From Keith Thompson@21:1/5 to Tim Rentsch on Thu Oct 7 07:51:03 2021
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <877dewimc9.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Even if FP_INFINITE could be defined conditionally, this would
    not imply that INFINITY is usable, since for instance, long
    double may have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I
    think the implication is that it assumes either all floating types
    have NaNs and/or infinities, or none do. That might be a valid
    assumption.

    But the standard doesn't say that explicitly. It even just says
    "if and only if the implementation supports quiet NaNs for the
    float type". If the intent were to have NaN support for all the
    FP types or none, why doesn't it say "... for the floating types"
    instead of "... for the float type"?

    Since the NAN macro is of type float (if it's defined), it only makes
    sense to define it that way. Presumably if an implementation had
    NaN for float but not for double, it would define NAN.

    If float has a NaN then so do double and long double, because of
    6.2.5 paragraph 10. Similarly for infinity (or infinities).

    Agreed. 6.2.5p10 says:

    There are three real floating types, designated as float, double,
    and long double. The set of values of the type float is a subset of
    the set of values of the type double; the set of values of the type
    double is a subset of the set of values of the type long double.

    (No need to make everyone look it up.)

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Tim Rentsch@21:1/5 to Vincent Lefevre on Fri Oct 8 00:02:31 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <87pmsqizrh.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    [.. use of the INFINITY macro in an implementation that does not
    have a float value for infinity..]

    If I understand correctly, it means that if an infinite value is
    not available, then a program that refers to the INFINITY macro (in
    a context where it's treated as a floating-point expression)
    violates that constraint, resulting in a required diagnostic.

    I think the consequence is more than a diagnostic (which may yield
    a compilation failure in practice, BTW): AFAIK, the standard does
    not give a particular definition for "overflows at translation
    time", which would make it undefined behavior as usual for
    overflows.

    The compound phrase does not need defining because its relevant
    constituent elements are defined in the C standard. The word
    "overflows" is defined for floating point types in 7.12.1
    paragraph 5

    A floating result overflows if the magnitude of the
    mathematical result is finite but so large that the
    mathematical result cannot be represented without
    extraordinary roundoff error in an object of the specified
    type. [...]

    The word "translation" is defined in detail in section 5.1.1.
    The compound phrase "overflows at translation time" is simply a
    combination of these defined terms under normal rules of English
    usage.

    Moreover, the C standard is quite clear that violating a
    constraint must evoke a diagnostic even if there is also
    undefined behavior. Section 5.1.1.3 paragraph 1 says this:

    A conforming implementation shall produce at least one
    diagnostic message (identified in an implementation-defined
    manner) if a preprocessing translation unit or translation
    unit contains a violation of any syntax rule or constraint,
    even if the behavior is also explicitly specified as
    undefined or implementation-defined. [...]

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

  • From Tim Rentsch@21:1/5 to Vincent Lefevre on Fri Oct 8 08:30:22 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    [...]

    It means that if a program assumes that INFINITY is meaningful, and
    it's compiled for a target system where it isn't, a diagnostic is
    guaranteed. And again, it might have made more sense to say that
    INFINITY is not defined for such implementations (as is done for
    the NAN macro), but perhaps there was existing practice.

    Yes, currently there is no fallback (short of things like
    autoconf tests).

    Shouldn't the standard be changed to make INFINITY conditionally
    defined (if not required to expand to a true infinity)? [...]

    To me it seems better for INFINITY to be defined as it is rather
    than being conditionally defined. If what is needed is really an
    infinite value, just write INFINITY and the code either works or
    compiling it gives a diagnostic. If what is needed is just a very
    large value, write HUGE_VAL (or HUGE_VALF or HUGE_VALL, depending)
    and the code works whether infinite floating-point values are
    supported or not. If it's important that infinite values be
    supported but we don't want to risk a compilation failure, use
    HUGE_VAL combined with an assertion

    assert( HUGE_VAL == HUGE_VAL/2 );

    Alternatively, use INFINITY only in one small .c file, and give the
    other source files a make dependency on a successful compilation
    (with, of course, a -pedantic-errors option) of that .c file. I
    don't see that having INFINITY be conditionally defined buys
    anything, except to more or less force use of #if/#else/#endif
    blocks in the preprocessor. I don't mind using the preprocessor
    when there is a good reason to do so, but here I don't see one.
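
    A self-contained version of that check (sketch):

        #include <assert.h>
        #include <math.h>

        int main(void)
        {
            /* Holds only if HUGE_VAL is an infinity: inf/2 == inf,
               whereas x/2 != x for any finite x large enough. */
            assert(HUGE_VAL == HUGE_VAL / 2);
            return 0;
        }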

  • From Keith Thompson@21:1/5 to Keith Thompson on Fri Oct 8 11:41:22 2021
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <877dewimc9.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Even if FP_INFINITE could be defined conditionally, this would
    not imply that INFINITY is usable, since for instance, long
    double may have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I
    think the implication is that it assumes either all floating types
    have NaNs and/or infinities, or none do. That might be a valid
    assumption.

    But the standard doesn't say that explicitly. It even just says
    "if and only if the implementation supports quiet NaNs for the
    float type". If the intent were to have NaN support for all the
    FP types or none, why doesn't it say "... for the floating types"
    instead of "... for the float type"?

    Since the NAN macro is of type float (if it's defined), it only makes
    sense to define it that way. Presumably if an implementation had
    NaN for float but not for double, it would define NAN.

    If float has a NaN then so do double and long double, because of
    6.2.5 paragraph 10. Similarly for infinity (or infinities).

    Agreed. 6.2.5p10 says:

    There are three real floating types, designated as float, double,
    and long double. The set of values of the type float is a subset of
    the set of values of the type double; the set of values of the type
    double is a subset of the set of values of the type long double.

    (No need to make everyone look it up.)

    I just noticed that this leaves open the possibility, for example, that
    double supports infinity but float doesn't.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Keith Thompson@21:1/5 to Tim Rentsch on Fri Oct 8 11:40:09 2021
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    [...]
    Shouldn't the standard be changed to make INFINITY conditionally
    defined (if not required to expand to a true infinity)? [...]

    To me it seems better for INFINITY to be defined as it is rather
    than being conditionally defined. If what is needed is really an
    infinite value, just write INFINITY and the code either works or
    compiling it gives a diagnostic. If what is needed is just a very
    large value, write HUGE_VAL (or HUGE_VALF or HUGE_VALL, depending)
    and the code works whether infinite floating-point values are
    supported or not. If it's important that infinite values be
    supported but we don't want to risk a compilation failure, use
    HUGE_VAL combined with an assertion

    assert( HUGE_VAL == HUGE_VAL/2 );

    Alternatively, use INFINITY only in one small .c file, and give the
    other source files a make dependency on a successful compilation
    (with, of course, a -pedantic-errors option) of that .c file. I
    don't see that having INFINITY be conditionally defined buys
    anything, except to more or less force use of #if/#else/#endif
    blocks in the preprocessor. I don't mind using the preprocessor
    when there is a good reason to do so, but here I don't see one.

    I don't see how that's better than conditionally defining INFINITY.

    If you really need an infinite value, just write INFINITY and the code
    either works or compiling it gives a *clearer* diagnostic for the
    undeclared identifier.

    If you need to test whether infinities are supported, #ifdef INFINITY is
    a lot clearer than assert( HUGE_VAL == HUGE_VAL/2 ).
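
    That is, under the proposed change one could write (a sketch; it
    assumes INFINITY becomes conditionally defined, which the standard
    does not currently say; upper_bound is just an illustrative name):

        #include <float.h>
        #include <math.h>

        #ifdef INFINITY
        static const float upper_bound = INFINITY;
        #else
        static const float upper_bound = FLT_MAX;  /* finite fallback */
        #endif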

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Vincent Lefevre@21:1/5 to Keith Thompson on Sat Oct 9 19:49:27 2021
    In article <874k9r7419.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <877dewimc9.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Even if FP_INFINITE could be defined conditionally, this would
    not imply that INFINITY is usable, since for instance, long
    double may have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I
    think the implication is that it assumes either all floating types
    have NaNs and/or infinities, or none do. That might be a valid
    assumption.

    But the standard doesn't say that explicitly. It even just says
    "if and only if the implementation supports quiet NaNs for the
    float type". If the intent were to have NaN support for all the
    FP types or none, why doesn't it say "... for the floating types"
    instead of "... for the float type"?

    Since the NAN macro is of type float (if it's defined), it only makes
    sense to define it that way. Presumably if an implementation had
    NaN for float but not for double, it would define NAN.

    If float has a NaN then so do double and long double, because of
    6.2.5 paragraph 10. Similarly for infinity (or infinities).

    Agreed. 6.2.5p10 says:

    There are three real floating types, designated as float, double,
    and long double. The set of values of the type float is a subset of
    the set of values of the type double; the set of values of the type
    double is a subset of the set of values of the type long double.

    (No need to make everyone look it up.)

    I just noticed that this leaves open the possibility, for example, that
    double supports infinity but float doesn't.

    This is what I had said above:

    "[...] for instance, long double may have an infinity but not float."

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Vincent Lefevre@21:1/5 to Tim Rentsch on Sat Oct 9 20:05:38 2021
    In article <86sfxbpm9d.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    To me it seems better for INFINITY to be defined as it is rather
    than being conditionally defined. If what is needed is really an
    infinite value, just write INFINITY and the code either works or
    compiling it gives a diagnostic.

    A diagnostic and undefined behavior. So this is not better than
    the case where INFINITY would be conditionally defined. At least,
    with a conditionally defined macro, one can test it and have a
    fallback, e.g. a more complex algorithm.

    If what is needed is just a very large value, write HUGE_VAL (or
    HUGE_VALF or HUGE_VALL, depending) and the code works whether
    infinite floating-point values are supported or not.

    This will not work if the main code requires infinity with a possible
    fallback. As a workaround, one could test HUGE_VAL as you said, but
    there are still potential issues. For instance, the standard does not
    guarantee that HUGE_VAL is the largest possible double value, and
    this can break algorithms based on comparisons / sorting.

    See note 232:

    "HUGE_VAL, HUGE_VALF, and HUGE_VALL can be positive infinities in
    an implementation that supports infinities."

    with just "can be" instead of "shall be".
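
    For instance (a sketch of code that breaks if HUGE_VAL is not the
    largest double): a sentinel-based minimum search assumes HUGE_VAL
    compares greater than or equal to every element, which the standard
    does not guarantee:

        #include <math.h>

        /* Returns the minimum of a[0..n-1]; wrong if some element
           exceeds a finite HUGE_VAL smaller than DBL_MAX. */
        double min_of(const double *a, int n)
        {
            double m = HUGE_VAL;  /* sentinel, assumed >= all elements */
            for (int i = 0; i < n; i++)
                if (a[i] < m)
                    m = a[i];
            return m;
        }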

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Vincent Lefevre@21:1/5 to Tim Rentsch on Sat Oct 9 20:17:09 2021
    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

  • From Keith Thompson@21:1/5 to Vincent Lefevre on Sat Oct 9 14:28:11 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <874k9r7419.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <877dewimc9.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <87lf3ehy4v.fsf@nosuchdomain.example.com>,
    Even if FP_INFINITE could be defined conditionally, this would
    not imply that INFINITY is usable, since for instance, long
    double may have an infinity but not float.

    The standard only defines INFINITY and NAN for type float. I
    think the implication is that it assumes either all floating types
    have NaNs and/or infinities, or none do. That might be a valid
    assumption.

    But the standard doesn't say that explicitly. It even just says
    "if and only if the implementation supports quiet NaNs for the
    float type". If the intent were to have NaN support for all the
    FP types or none, why doesn't it say "... for the floating types"
    instead of "... for the float type"?

    Since the NAN macro is of type float (if it's defined), it only makes
    sense to define it that way. Presumably if an implementation had
    NaN for float but not for double, it would define NAN.

    If float has a NaN then so do double and long double, because of
    6.2.5 paragraph 10. Similarly for infinity (or infinities).

    Agreed. 6.2.5p10 says:

    There are three real floating types, designated as float, double,
    and long double. The set of values of the type float is a subset of
    the set of values of the type double; the set of values of the type
    double is a subset of the set of values of the type long double.

    (No need to make everyone look it up.)

    I just noticed that this leaves open the possibility, for example, that
    double supports infinity but float doesn't.

    This is what I had said above:

    "[...] for instance, long double may have an infinity but not float."

    Yes. My initial assumption was that the three floating-point types
    could independently have or not have infinities. Tim cited 6.2.5p10,
    which implies that if float has infinities, then the wider types do
    also. (I wonder if the authors had infinities and NaNs in mind when
    they wrote that, but the implication is still there.)

    If long double has infinities, then float and double may or may not.
    If float has infinities, then double and long double must.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From James Kuyper@21:1/5 to Vincent Lefevre on Mon Oct 11 12:40:02 2021
    On 10/9/21 4:17 PM, Vincent Lefevre wrote:
    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    "For decimal floating constants, and also for hexadecimal floating
    constants when FLT_RADIX is not a power of 2, the result is either
    the nearest representable value, or the larger or smaller representable
    value immediately adjacent to the nearest representable value, chosen in
    an implementation-defined manner.
    For hexadecimal floating constants when FLT_RADIX is a power of 2, the
    result is correctly rounded." (6.4.4.2p3)

    In the case of overflow, for a type that cannot represent infinity,
    there is only one "nearest representable value", which is DBL_MAX.

  • From Keith Thompson@21:1/5 to James Kuyper on Mon Oct 11 12:39:50 2021
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    On 10/9/21 4:17 PM, Vincent Lefevre wrote:
    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    "For decimal floating constants, and also for hexadecimal floating
    constants when FLT_RADIX is not a power of 2, the result is either
    the nearest representable value, or the larger or smaller representable
    value immediately adjacent to the nearest representable value, chosen in
    an implementation-defined manner.
    For hexadecimal floating constants when FLT_RADIX is a power of 2, the
    result is correctly rounded." (6.4.4.2p3)

    In the case of overflow, for a type that cannot represent infinity,
    there is only one "nearest representable value", which is DBL_MAX.

    But does that apply when a constraint is violated?

    6.4.4p2, a constraint, says:

    Each constant shall have a type and the value of a constant shall be
    in the range of representable values for its type.

    A "constraint", aside from triggering a required diagnostic, is a
    "restriction, either syntactic or semantic, by which the exposition of
    language elements is to be interpreted", which is IMHO a bit vague.

    My mental model is that if a program violates a constraint and the implementation still accepts it (i.e., the required diagnostic is a
    non-fatal warning) the program's behavior is undefined -- but the
    standard doesn't say that. Of course if the implementation rejects the program, it has no behavior.

    For what it's worth, given this:

    double too_big = 1e1000;

    gcc, clang, and tcc all print a warning and set too_big to infinity.
    That's obviously valid if the behavior is undefined. I think it's also
    valid if the behavior is defined; the nearest representable value is
    DBL_MAX, and the larger representable value immediately adjacent to
    DBL_MAX is infinity.

    It doesn't seem to me to be particularly useful to say that a program
    can be rejected, but its behavior is defined if the implementation
    chooses not to reject it.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From James Kuyper@21:1/5 to Keith Thompson on Mon Oct 11 21:04:32 2021
    On 10/11/21 3:39 PM, Keith Thompson wrote:
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    ...
    "For decimal floating constants, and also for hexadecimal floating
    constants when FLT_RADIX is not a power of 2, the result is either
    the nearest representable value, or the larger or smaller representable
    value immediately adjacent to the nearest representable value, chosen in
    an implementation-defined manner.
    For hexadecimal floating constants when FLT_RADIX is a power of 2, the
    result is correctly rounded." (6.4.4.2p3)

    In the case of overflow, for a type that cannot represent infinity,
    there is only one "nearest representable value", which is DBL_MAX.

    But does that apply when a constraint is violated?

    6.4.4p2, a constraint, says:

    Each constant shall have a type and the value of a constant shall be
    in the range of representable values for its type.

    A "constraint", aside from triggering a required diagnostic, is a "restriction, either syntactic or semantic, by which the exposition of language elements is to be interpreted", which is IMHO a bit vague.

    I can agree with the "a bit vague" description. I have previously said
    "I've never understood what it is that the part of that definition after
    the second comma was intended to convey."

    "If a ‘‘shall’’ or ‘‘shall not’’ requirement that appears outside of a
    constraint or runtime-constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard
    by the words ‘‘undefined behavior’’ or by the omission of any explicit definition of behavior." (4p2)

    There's no mention in there of a constraint violation automatically
    having undefined behavior. Most constraint violations do qualify as
    undefined behavior due to the "omission of any explicit definition of
    behavior" when the constraint is violated. But this isn't an example:
    6.4.4.2p3 provides a perfectly applicable definition for the behavior.

  • From Keith Thompson@21:1/5 to James Kuyper on Mon Oct 11 18:33:24 2021
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    On 10/11/21 3:39 PM, Keith Thompson wrote:
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    ...
    "For decimal floating constants, and also for hexadecimal floating
    constants when FLT_RADIX is not a power of 2, the result is either
    the nearest representable value, or the larger or smaller representable
    value immediately adjacent to the nearest representable value, chosen in
    an implementation-defined manner.
    For hexadecimal floating constants when FLT_RADIX is a power of 2, the
    result is correctly rounded." (6.4.4.2p3)

    In the case of overflow, for a type that cannot represent infinity,
    there is only one "nearest representable value", which is DBL_MAX.

    But does that apply when a constraint is violated?

    6.4.4p2, a constraint, says:

    Each constant shall have a type and the value of a constant shall be
    in the range of representable values for its type.

    A "constraint", aside from triggering a required diagnostic, is a
    "restriction, either syntactic or semantic, by which the exposition of
    language elements is to be interpreted", which is IMHO a bit vague.

    I can agree with the "a bit vague" description. I have previously said
    "I've never understood what it is that the part of that definition after
    the second comma was intended to convey."

    "If a ‘‘shall’’ or ‘‘shall not’’ requirement that appears outside of a
    constraint or runtime-constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this International Standard
    by the words ‘‘undefined behavior’’ or by the omission of any explicit
    definition of behavior." (4p2)

    There's no mention in there of a constraint violation automatically
    having undefined behavior. Most constraint violations do qualify as
    undefined behavior due to the "omission of any explicit definition of
    behavior" when the constraint is violated. But this isn't an example:
    6.4.4.2p3 provides a perfectly applicable definition for the behavior.

    I don't disagree.

    On the other hand, one possible interpretation of the phrase "a
    restriction ... by which the exposition of language elements is to be interpreted" could be that if the constraint is violated, there is no meaningful interpretation. Or to put it another way, that the semantic description applies only if all constraints are satisfied.

    I've searched for the word "constraint" in the C89 and C99 Rationale
    documents. They were not helpful.

    I am admittedly trying to read into the standard what I think it
    *should* say. A rule that constraint violations cause undefined
    behavior would, if nothing else, make the standard a bit simpler.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

  • From Vincent Lefevre@21:1/5 to James Kuyper on Tue Oct 26 10:01:03 2021
    In article <sk1pd2$5e3$3@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/9/21 4:17 PM, Vincent Lefevre wrote:
    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    "For decimal floating constants, and also for hexadecimal floating
    constants when FLT_RADIX is not a power of 2, the result is either
    the nearest representable value, or the larger or smaller representable
    value immediately adjacent to the nearest representable value, chosen in
    an implementation-defined manner.
    For hexadecimal floating constants when FLT_RADIX is a power of 2, the
    result is correctly rounded." (6.4.4.2p3)

    In the case of overflow, for a type that cannot represent infinity,
    there is only one "nearest representable value", which is DBL_MAX.

    OK, but I was asking "where is the result of an overflow defined by
    the standard?" I don't see the word "overflow" in the above spec.

    Note that if the value is DBL_MAX, then it is in the range of
    representable values for its type, and the constraint is not
    violated.

    Note also that in case of overflow, "the nearest representable value"
    is not defined. IEEE 754 defines it as infinity. But what if Annex F
    is not supported and there is an infinity? Should the value still be
    the infinity or DBL_MAX (which is really the nearest, as the distance
    is finite, while the distance to the infinity is infinite)?

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Tue Oct 26 12:53:31 2021
    On 10/26/21 6:01 AM, Vincent Lefevre wrote:
    In article <sk1pd2$5e3$3@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/9/21 4:17 PM, Vincent Lefevre wrote:
    ...
    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    "For decimal floating constants, and also for hexadecimal floating
    constants when FLT_RADIX is not a power of 2, the result is either
    the nearest representable value, or the larger or smaller representable
    value immediately adjacent to the nearest representable value, chosen in
    an implementation-defined manner.
    For hexadecimal floating constants when FLT_RADIX is a power of 2, the
    result is correctly rounded." (6.4.4.2p3)

    In the case of overflow, for a type that cannot represent infinity,
    there is only one "nearest representable value", which is DBL_MAX.

    OK, but I was asking "where is the result of an overflow defined by
    the standard?" I don't see the word "overflow" in the above spec.

    Overflow occurs when a floating constant is created whose value is
    greater than DBL_MAX or less than -DBL_MAX. Despite the fact that the
    above description does not explicitly mention the word "overflow", it's perfectly clear what that description means when overflow occurs. If the constant is greater than DBL_MAX, the "nearest representable value" is
    always DBL_MAX. The next smaller representable value is
    nextafter(DBL_MAX, 0). If infinity is representable, the "larger ... representable value" is infinity; otherwise, there is no "larger
    representable value", and one of the other two must be chosen.

    Note also that in case of overflow, "the nearest representable value"
    is not defined.

    No definition by the standard is needed; the conventional mathematical definitions of "nearest" are sufficient. If infinity is representable,
    DBL_MAX is always nearer to any finite value than infinity is.
    Regardless of whether infinity is representable, any finite value
    greater than DBL_MAX is closer to DBL_MAX than it is to any other
    representable value.
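
    To make the three candidates concrete, a minimal sketch (assuming an
    IEEE 754 double, where infinities are representable):

        #include <float.h>
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* The three values 6.4.4.2p3 permits for a constant whose
               mathematical value exceeds DBL_MAX: */
            printf("nearest:      %a\n", DBL_MAX);
            printf("next smaller: %a\n", nextafter(DBL_MAX, 0));
            printf("next larger:  %a\n", (double)INFINITY);
            return 0;
        }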

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Thu Oct 28 09:38:21 2021
    In article <sl9bqb$hf5$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/26/21 6:01 AM, Vincent Lefevre wrote:
    OK, but I was asking "where is the result of an overflow defined by
    the standard?" I don't see the word "overflow" in the above spec.

    Overflow occurs when a floating constant is created whose value is
    greater than DBL_MAX or less than -DBL_MAX. Despite the fact that the
    above description does not explicitly mention the word "overflow", it's perfectly clear what that description means when overflow occurs.

    Why "perfectly clear"??? This is even inconsistent with 7.12.1p5
    of N2596, which says:

    A floating result overflows if the magnitude (absolute value)
    of the mathematical result is finite but so large that the
    mathematical result cannot be represented without extraordinary
    roundoff error in an object of the specified type.

    If you have a mathematical value (exact value) much larger than
    DBL_MAX and that rounds to DBL_MAX (e.g. with round-toward-zero),
    there should be an overflow, despite the fact that the FP result
    is not greater than DBL_MAX (since it is equal to DBL_MAX).

    Moreover, with the above definition, it is DBL_NORM_MAX that is
    more likely taken into account, not DBL_MAX. But this is probably
    not what is expected with floating-point constants.

    Note also that in case of overflow, "the nearest representable value"
    is not defined.

    No definition by the standard is needed; the conventional mathematical definitions of "nearest" are sufficient. If infinity is representable, DBL_MAX is always nearer to any finite value than infinity is.
    Regardless of whether infinity is representable, any finite value
    greater than DBL_MAX is closer to DBL_MAX than it is to any other representable value.

    The issue is that this may easily be confused with the result
    obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
    (where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
    despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).
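
    A one-line check of that behavior (assuming IEEE 754 doubles and the
    default round-to-nearest mode):

        #include <float.h>
        #include <stdio.h>

        int main(void)
        {
            volatile double x = DBL_MAX;  /* volatile keeps the multiply at run time */
            printf("%g\n", 2.0 * x);      /* prints "inf", not DBL_MAX */
            return 0;
        }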

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Thu Oct 28 11:23:41 2021
    On 10/28/21 5:38 AM, Vincent Lefevre wrote:
    In article <sl9bqb$hf5$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/26/21 6:01 AM, Vincent Lefevre wrote:
    OK, but I was asking "where is the result of an overflow defined by
    the standard?" I don't see the word "overflow" in the above spec.

    Overflow occurs when a floating constant is created whose value is
    greater than DBL_MAX or less than -DBL_MAX. Despite the fact that the
    above description does not explicitly mention the word "overflow", it's
    perfectly clear what that description means when overflow occurs.

    Why "perfectly clear"??? This is even inconsistent with 7.12.1p5
    of N2596, which says:

    7.12.1p5 describes the math library, not the handling of floating point constants. While the C standard does recommend that "The
    translation-time conversion of floating constants should match the execution-time conversion of character strings by library functions,
    such as strtod, given matching inputs suitable for both conversions,
    the same result format, and default execution-time rounding."
    (6.4.4.2p11), it does not actually require such a match. Therefore, if
    there is any inconsistency it would not be problematic.

    A floating result overflows if the magnitude (absolute value)
    of the mathematical result is finite but so large that the
    mathematical result cannot be represented without extraordinary
    roundoff error in an object of the specified type.

    7.12.1p5 goes on to say that "If a floating result overflows and default rounding is in effect, then the function returns the value of the macro HUGE_VAL ...".
    As cited above, the standard recommends, but does not require, the use
    of default execution-time rounding mode for floating point constants.
    HUGE_VAL is only required to be positive (7.12p6) - it could be as small
    as DBL_MIN. However, on implementations that support infinities, it is
    allowed to be a positive infinity (footnote 245), and when
    __STDC_IEC_559__ is pre#defined by the implementation, it's required to
    be positive infinity (F10p2). Even if it isn't positive infinity, it is
    allowed to be DBL_MAX. DBL_MAX and positive infinity are two of the
    three options allowed by 6.4.4.2p4 for constants larger than DBL_MAX, in
    which case there's no conflict.
    If HUGE_VAL is not one of those three values, then 6.4.4.2p4 still
    applies, but 7.12.1p5 need not apply, since a match to the behavior of
    strtod() is only recommended, not required.
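
    The strtod() side of this is easy to observe, since its overflow
    behavior is fully specified (a sketch assuming a hosted C99
    implementation):

        #include <errno.h>
        #include <math.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            errno = 0;
            double d = strtod("1e999", NULL);
            /* On overflow, strtod returns (plus or minus) HUGE_VAL and
               stores ERANGE in errno. */
            printf("%g, HUGE_VAL? %d, ERANGE? %d\n",
                   d, d == HUGE_VAL, errno == ERANGE);
            return 0;
        }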

    If you have a mathematical value (exact value) much larger than
    DBL_MAX and that rounds to DBL_MAX (e.g. with round-toward-zero),
    there should be an overflow, despite the fact that the FP result
    is not greater than DBL_MAX (since it is equal to DBL_MAX).

    Agreed. As a result, the overflow exception should be signaled. However,
    the C standard mandates that "Floating constants are converted to
    internal format as if at translation-time. The conversion of a floating constant shall not raise an exceptional condition or a floating-point
    exception at execution time." (6.4.4.2p8). If an implementation chooses
    to do the conversion at translation-time, the exception would be raised
    only within the compiler, which has no obligation to do anything with
    it. The implementation could generate a diagnostic, but such a constant
    is not, in itself, justification for rejecting the program.

    Therefore, if an implementation chooses to defer actual conversion until run-time, it's required to produce the same results, which means it must
    clear that overflow exception before turning control over to the user code.

    Moreover, with the above definition, it is DBL_NORM_MAX that is
    more likely taken into account, not DBL_MAX.

    According to 5.2.4.2.2p19, DBL_MAX is the maximum representable finite
    floating point value, while DBL_NORM_MAX is the maximum normalized
    number. 6.4.4.2p4 refers only to representable values, saying nothing
    about normalization. Neither 7.12.5p1 nor 7.12p6 say anything to require
    that the value be normalized. Therefore, as far as I can see, DBL_MAX is
    the relevant value.

    Note also that in case of overflow, "the nearest representable value"
    is not defined.

    No definition by the standard is needed; the conventional mathematical
    definitions of "nearest" are sufficient. If infinity is representable,
    DBL_MAX is always nearer to any finite value than infinity is.
    Regardless of whether infinity is representable, any finite value
    greater than DBL_MAX is closer to DBL_MAX than it is to any other
    representable value.

    The issue is that this may easily be confused with the result
    obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
    (where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
    despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).

    Yes, and DBL_MAX and +Inf are two of the three values permitted by
    6.4.4.2p4, so I don't see any conflict there. As far as I can see, the
    value required by IEEE 754 is always one of the three values permitted
    by 6.4.4.2p4, so there's never a conflict. Are you aware of any?

    For hexadecimal floating point constants on systems with FLT_RADIX a
    power of 2, 6.4.4.2p4 only allows one value - the one that is correctly
    rounded - but that's precisely the same value that IEEE 754 requires.
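
    A small illustration of that correct-rounding requirement (assuming a
    binary IEEE 754 float and round-to-nearest-even translation-time
    rounding):

        #include <stdio.h>

        int main(void)
        {
            /* 1 + 2^-24 lies exactly halfway between the adjacent floats
               1.0f and 1.0f + 2^-23; ties-to-even picks 1.0f. */
            float f = 0x1.000001p0f;
            printf("%a (== 1.0f? %d)\n", f, f == 1.0f);
            return 0;
        }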

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Fri Oct 29 12:12:02 2021
    In article <slef9t$98j$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/28/21 5:38 AM, Vincent Lefevre wrote:
    In article <sl9bqb$hf5$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/26/21 6:01 AM, Vincent Lefevre wrote:
    OK, but I was asking "where is the result of an overflow defined by
    the standard?" I don't see the word "overflow" in the above spec.

    Overflow occurs when a floating constant is created whose value is
    greater than DBL_MAX or less than -DBL_MAX. Despite the fact that the
    above description does not explicitly mention the word "overflow", it's
    perfectly clear what that description means when overflow occurs.

    Why "perfectly clear"??? This is even inconsistent with 7.12.1p5
    of N2596, which says:

    7.12.1p5 describes the math library, not the handling of floating point constants. While the C standard does recommend that "The
    translation-time conversion of floating constants should match the execution-time conversion of character strings by library functions,
    such as strtod, given matching inputs suitable for both conversions,
    the same result format, and default execution-time rounding."
    (6.4.4.2p11), it does not actually require such a match. Therefore, if
    there is any inconsistency it would not be problematic.

    Yes, but this means that any implicit use of overflow is not
    perfectly clear.

    A floating result overflows if the magnitude (absolute value)
    of the mathematical result is finite but so large that the
    mathematical result cannot be represented without extraordinary
    roundoff error in an object of the specified type.

    7.12.1p5 goes on to say that "If a floating result overflows and default rounding is in effect, then the function returns the value of the macro HUGE_VAL ...".
    As cited above, the standard recommends, but does not require, the use
    of default execution-time rounding mode for floating point constants. HUGE_VAL is only required to be positive (7.12p6) - it could be as small
    as DBL_MIN.

    Note that C2x (in particular, the current draft N2731) requires that nextup(HUGE_VAL) be HUGE_VAL, probably assuming that HUGE_VAL is the
    maximum value. I've just sent a mail to the CFP list about that.
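
    For what it's worth, a quick check of that requirement (assuming a
    libm that already ships TS 18661-1's nextup, e.g. glibc >= 2.24):

        #define __STDC_WANT_IEC_60559_BFP_EXT__ 1
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            /* nextup(x) is the least value greater than x, except that
               nextup(+Inf) == +Inf; so nextup(HUGE_VAL) == HUGE_VAL only
               if HUGE_VAL is already the top of the type. */
            printf("%d\n", nextup(HUGE_VAL) == HUGE_VAL);
            return 0;
        }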

    Moreover, with the above definition, it is DBL_NORM_MAX that is
    more likely taken into account, not DBL_MAX.

    According to 5.2.4.2.2p19, DBL_MAX is the maximum representable finite floating point value, while DBL_NORM_MAX is the maximum normalized
    number. 6.4.4.2p4 refers only to representable values, saying nothing
    about normalization. Neither 7.12.5p1 nor 7.12p6 say anything to require
    that the value be normalized. Therefore, as far as I can see, DBL_MAX is
    the relevant value.

    But DBL_NORM_MAX is the relevant value for the general definition
    of "overflow" (on double). So in 7.12p4, "overflows" is not used
    correctly, at least not with the usual meaning.

    More than that, with the IEEE 754 overflow definition, you have
    numbers larger than DBL_MAX (up to those within 1 ulp) that do not
    overflow.

    Note also that in case of overflow, "the nearest representable value"
    is not defined.

    No definition by the standard is needed; the conventional mathematical
    definitions of "nearest" are sufficient. If infinity is representable,
    DBL_MAX is always nearer to any finite value than infinity is.
    Regardless of whether infinity is representable, any finite value
    greater than DBL_MAX is closer to DBL_MAX than it is to any other
    representable value.

    The issue is that this may easily be confused with the result
    obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
    (where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
    despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).

    Yes, and DBL_MAX and +Inf are two of the three values permitted by
    6.4.4.2p4, so I don't see any conflict there.

    My point is that this definition of "nearest" does not match the
    definition of IEEE 754's FE_TONEAREST. I'm not saying that there
    is a conflict, just that the text is ambiguous. If one follows
    the IEEE 754 definition, there are only two possible values
    (DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Sat Oct 30 02:08:20 2021
    On 10/29/21 8:12 AM, Vincent Lefevre wrote:
    In article <slef9t$98j$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/28/21 5:38 AM, Vincent Lefevre wrote:
    In article <sl9bqb$hf5$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/26/21 6:01 AM, Vincent Lefevre wrote:
    ...
    7.12.1p5 describes the math library, not the handling of floating point
    constants. While the C standard does recommend that "The
    translation-time conversion of floating constants should match the
    execution-time conversion of character strings by library functions,
    such as strtod, given matching inputs suitable for both conversions,
    the same result format, and default execution-time rounding."
    (6.4.4.2p11), it does not actually require such a match. Therefore, if
    there is any inconsistency it would not be problematic.

    Yes, but this means that any implicit use of overflow is not
    perfectly clear.

    What is unclear about it? It very explicitly allows three different
    values, deliberately failing to specify only one of them as valid, and
    it is perfectly clear what those three values are.

    ...
    7.12.1p5 goes on to say that "If a floating result overflows and default
    rounding is in effect, then the function returns the value of the macro
    HUGE_VAL ...".
    As cited above, the standard recommends, but does not require, the use
    of default execution-time rounding mode for floating point constants.
    HUGE_VAL is only required to be positive (7.12p6) - it could be as small
    as DBL_MIN.

    Note that C2x (in particular, the current draft N2731) requires that nextup(HUGE_VAL) be HUGE_VAL, probably assuming that HUGE_VAL is the
    maximum value. I've just sent a mail to the CFP list about that.

    I've just downloaded N2731.pdf. Yes, that is an improvement over the
    previous specification, and strengthens my argument: the value that is
    required by 7.12.1p5 for strtod() in the event of overflow is now always
    one of two or three values permitted by 6.4.4.2p4 for overflowing floating-point constants, regardless of whether the floating point
    format supports infinities or IEEE 754.

    ...
    about normalization. Neither 7.12.5p1 nor 7.12p6 say anything to require
    that the value be normalized. Therefore, as far as I can see, DBL_MAX is
    the relevant value.

    But DBL_NORM_MAX is the relevant value for the general definition
    of "overflow" (on double). So in 7.12p4, "overflows" is not used
    correctly, at least not with the usual meaning.

    What do you consider the "general definition of overflow"? I would have
    thought you were referring to 7.12.1p5, but I see no wording there that distinguishes between normalized and unnormalized values.

    More than that, with the IEEE 754 overflow definition, you have
    numbers larger than DBL_MAX (up to those within 1 ulp) that do not
    overflow.

    I don't see how that's a problem.

    ...
    No definition by the standard is needed; the conventional mathematical definitions of "nearest" are sufficient. If infinity is representable, DBL_MAX is always nearer to any finite value than infinity is.
    Regardless of whether infinity is representable, any finite value
    greater than DBL_MAX is closer to DBL_MAX than it is to any other
    representable value.

    The issue is that this may easily be confused with the result
    obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
    (where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
    despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).

    Yes, and DBL_MAX and +Inf are two of the three values permitted by
    6.4.4.2p4, so I don't see any conflict there.

    My point is that this definition of "nearest" does not match the
    definition of IEEE 754's FE_TONEAREST.

    FE_TONEAREST is not "IEEE 754's". It is a macro defined by the C
    standard, and in the latest draft it's been changed so it now represents
    IEC 60559's "roundTiesToEven" rounding attribute.

    The C standard does not define "nearest", it merely uses it in the
    phrase "nearest representable value", the same exact phrase used for
    exactly the same purpose by IEC 60559 while describing the
    roundTiesToEven rounding attribute. Note that I'm not saying that roundTiesToEven is defined as producing the "nearest representable
    value" - only that the specification starts out from that phrase, and
    then adds complications to it, such as how ties and overflows are handled.

    Section 6.4.4.2p4 uses "nearest representable value" to identify one of
    the three permitted values, and uses that value to determine the other
    two permitted values. It does not define a rounding mode, and was not
    intended to do so. But every IEC 60559 rounding mode selects one of the
    three values permitted by 6.4.4.2p4.

    ... I'm not saying that there
    is a conflict, just that the text is ambiguous. If one follows
    the IEEE 754 definition, there are only two possible values
    (DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).

    Yes, that was deliberate - it was intended to be compatible with IEC
    60559, but also to be sufficiently loose to allow use of non-IEC 60559
    floating point.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Mon Nov 8 02:44:17 2021
    In article <slingl$56v$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/29/21 8:12 AM, Vincent Lefevre wrote:
    In article <slef9t$98j$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/28/21 5:38 AM, Vincent Lefevre wrote:
    In article <sl9bqb$hf5$2@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/26/21 6:01 AM, Vincent Lefevre wrote:
    ...
    7.12.1p5 describes the math library, not the handling of floating point
    constants. While the C standard does recommend that "The
    translation-time conversion of floating constants should match the
    execution-time conversion of character strings by library functions,
    such as strtod, given matching inputs suitable for both conversions,
    the same result format, and default execution-time rounding."
    (6.4.4.2p11), it does not actually require such a match. Therefore, if
    there is any inconsistency it would not be problematic.

    Yes, but this means that any implicit use of overflow is not
    perfectly clear.

    What is unclear about it? It very explicitly allows three different
    values, deliberately failing to specify only one of them as valid, and
    it is perfectly clear what those three values are.

    These rules are not about overflow. They are general rules.

    What is not defined is when a value overflows (there are different definitions). And what is the consequence of the overflow (at runtime,
    there may be traps).

    But DBL_NORM_MAX is the relevant value for the general definition
    of "overflow" (on double). So in 7.12p4, "overflows" is not used
    correctly, at least not with the usual meaning.

    What do you consider the "general definition of overflow"?

    The one given by the standard in 7.12.1p5.

    I would have thought you were referring to 7.12.1p5, but I see no
    wording there that distinguishes between normalized and unnormalized
    values.

    "A floating result overflows if the magnitude of the mathematical
    result is finite but so large that the mathematical result cannot
    be represented without extraordinary roundoff error in an object
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    of the specified type."

    If the exact result is above the maximum normal value, there is
    likely to be an extraordinary roundoff error.

    More than that, with the IEEE 754 overflow definition, you have
    numbers larger than DBL_MAX (up to those within 1 ulp) that do not overflow.

    I don't see how that's a problem.

    Your definition conflicts with IEEE 754.

    Note also that overflow is also used for any floating-point expression
    (not just math functions of the C library). See 7.6.2. And when Annex F
    is supported, the IEEE 754 definition necessarily applies to the
    associated FP types.

    ...
    No definition by the standard is needed; the conventional mathematical definitions of "nearest" are sufficient. If infinity is representable, DBL_MAX is always nearer to any finite value than infinity is.
    Regardless of whether infinity is representable, any finite value
    greater than DBL_MAX is closer to DBL_MAX than it is to any other
    representable value.

    The issue is that this may easily be confused with the result
    obtained in the FE_TONEAREST rounding mode with the IEEE 754 rules
    (where, for instance, 2*DBL_MAX rounds to +Inf, not to DBL_MAX,
    despite the fact that 2*DBL_MAX is closer to DBL_MAX than to +Inf).

    Yes, and DBL_MAX and +Inf are two of the three values permitted by
    6.4.4.2p4, so I don't see any conflict there.

    My point is that this definition of "nearest" does not match the
    definition of IEEE 754's FE_TONEAREST.

    FE_TONEAREST is not "IEEE 754's". It is a macro defined by the C
    standard, and in the latest draft it's been changed so it now represents
    IEC 60559's "roundTiesToEven" rounding attribute.

    If Annex F is supported, FE_TONEAREST corresponds to the IEEE 754-1985 round-to-nearest mode. This is what I mean.

    ... I'm not saying that there
    is a conflict, just that the text is ambiguous. If one follows
    the IEEE 754 definition, there are only two possible values
    (DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).

    Yes, that was deliberate - it was intended to be compatible with IEC
    60559, but also to be sufficiently loose to allow use of non-IEC 60559 floating point.

    But what is allowed is not clear for an IEEE 754 format (this does
    not affect the INFINITY macro, but users could write exact values
    larger than DBL_MAX + 1 ulp, for which nextdown(DBL_MAX) could be
    unexpected as the obtained value).

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Mon Nov 8 01:46:18 2021
    On 11/7/21 9:44 PM, Vincent Lefevre wrote:
    In article <slingl$56v$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/29/21 8:12 AM, Vincent Lefevre wrote:
    ...
    Yes, but this means that any implicit use of overflow is not
    perfectly clear.

    What is unclear about it? It very explicitly allows three different
    values, deliberately failing to specify only one of them as valid, and
    it is perfectly clear what those three values are.

    These rules are not about overflow. They are general rules.

    Yes, and they are sufficiently general that it is perfectly clear how
    they apply to the case when there is overflow.

    What is not defined is when a value overflows (there are different definitions). And what is the consequence of the overflow (at runtime,
    there may be traps).

    We're talking about floating point constants here. The standard clearly specifies that "Floating constants are converted to internal format as
    if at translation-time. The conversion of a floating constant shall not
    raise an exceptional condition or a floating-point exception at
    execution time." Runtime behavior is not the issue, and traps are not
    allowed.

    The standard describes two cases: if infinities are supported (as they necessarily are when IEEE formats are used), INFINITY is required to
    expand to a constant expression that represents positive or unsigned
    infinity. This is not outside the range of representable values - that
    range includes either positive or unsigned infinity, so the constraint
    in 6.4.4p2 is not violated.

    If infinities are not supported (which is therefore necessarily not an
    IEEE format), then INFINITY is required to expand to a constant that
    will overflow. This does violate that constraint, which means that a
    diagnostic message is required.

    That's why it confuses me that you're talking about INFINITY violating
    the constraint in 6.4.4p2 and the requirements of IEEE 754 at the same
    time. If float uses an IEEE 754 floating point format, the way that
    INFINITY is required to be defined doesn't violate that constraint.

    It's normally the case that, when a constraint is violated, the behavior
    is undefined. However, that's not because of anything the standard says
    about constraint violations in general. It's because, in most cases, the behavior is undefined "by the omission of any explicit definition of
    behavior." (4p2). However, this is one of the rare exceptions: there is
    no such omission. There is a general definition of the behavior that
    continues to apply in a perfectly clear fashion even in the event of
    overflow. Therefore, an implementation is required to assign a value to
    such a constant that is one of the two identified by that definition,
    either FLT_MAX or nextdownf(FLT_MAX).
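
    For concreteness, the two permitted values (spelled here with the
    portable C99 nextafterf, since nextdownf is a TS 18661-1 / C2x
    addition):

        #include <float.h>
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            printf("FLT_MAX:            %a\n", FLT_MAX);
            printf("nextdownf(FLT_MAX): %a\n", nextafterf(FLT_MAX, 0));
            return 0;
        }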

    But DBL_NORM_MAX is the relevant value for the general definition
    of "overflow" (on double). So in 7.12p4, "overflows" is not used
    correctly, at least not with the usual meaning.

    What do you consider the "general definition of overflow"?

    The one given by the standard in 7.12.1p5.

    I would have thought you were referring to 7.12.1p5, but I see no
    wording there that distinguishes between normalized and unnormalized
    values.

    "A floating result overflows if the magnitude of the mathematical
    result is finite but so large that the mathematical result cannot
    be represented without extraordinary roundoff error in an object
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    of the specified type."

    If the exact result is above the maximum normal value, there is
    likely to be an extraordinary roundoff error.

    Your comment made me realize that I had no idea how DBL_NORM_MAX could
    possibly be less than DBL_MAX. I did some searching, and discovered
    official text of a committee decision indicating that they are normally
    the same - the only exception known to the committee was systems that implemented a long double as the sum of a pair of doubles, for which
    LDBL_MAX == 2.0L*DBL_MAX, while LDBL_NORM_MAX is just slightly larger
    than DBL_MAX.

    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
    normalized floating point number that is larger than LDBL_NORM_MAX,
    which strikes me as a contradiction.

    In any event, INFINITY is required to expand into an expression of type "float", so if the only known exception involves long double, it's not
    very relevant.

    ...
    ... I'm not saying that there
    is a conflict, just that the text is ambiguous. If one follows
    the IEEE 754 definition, there are only two possible values
    (DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).

    Yes, that was deliberate - it was intended to be compatible with IEC
    60559, but also to be sufficiently loose to allow use of non-IEC 60559
    floating point.

    But what is allowed is not clear for an IEEE 754 format (this does
    not affect the INFINITY macro, but users could write exact values
    larger than DBL_MAX + 1 ulp, for which nextdown(DBL_MAX) could be
    unexpected as the obtained value).

    It's unexpected because that would violate a requirement of IEEE 754,
    but the C standard doesn't require violating that requirement. Section 6.4.4.2p4 of the C standard allows such a constant to have any one of
    the three values (+infinity, FLT_MAX, or nextdownf(FLT_MAX)).
    Therefore, an implementation that wants to conform to both the C
    standard and IEEE 754 must select FLT_MAX. What's unclear or ambiguous
    about that?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Mon Nov 8 10:56:53 2021
    In article <smah3q$a9f$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/7/21 9:44 PM, Vincent Lefevre wrote:
    In article <slingl$56v$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 10/29/21 8:12 AM, Vincent Lefevre wrote:
    ...
    Yes, but this means that any implicit use of overflow is not
    perfectly clear.

    What is unclear about it? It very explicitly allows three different
    values, deliberately failing to specify only one of them as valid, and
    it is perfectly clear what those three values are.

    These rules are not about overflow. They are general rules.

    Yes, and they are sufficiently general that it is perfectly clear how
    they apply to the case when there is overflow.

    I've done some tests, and it is interesting to see that both GCC and
    Clang choose the IEEE 754 definition of overflow on floating-point
    constants, not yours (<sl9bqb$hf5$2@dont-email.me>). For instance,
    the exact value of 0x1.fffffffffffff7p1023 is larger than DBL_MAX,
    but it doesn't trigger an overflow warning with GCC and Clang.

    Note that these warnings are really about overflow, and not about the
    range of floating-point numbers.
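
    The boundary the compilers appear to use (assuming IEEE 754 binary64
    and round-to-nearest-even):

        #include <float.h>

        /* DBL_MAX is 0x1.fffffffffffffp1023; the halfway point to 2^1024
           is 0x1.fffffffffffff8p1023.  Below it, the constant rounds to
           DBL_MAX (no IEEE 754 overflow, no warning); at or above it, it
           rounds to +Inf (overflow, -Woverflow warning). */
        static const double below = 0x1.fffffffffffff7p1023;  /* silent  */
        static const double above = 0x1.fffffffffffff8p1023;  /* warning */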

    What is not defined is when a value overflows (there are different definitions). And what is the consequence of the overflow (at runtime, there may be traps).

    We're talking about floating point constants here. The standard clearly specifies that "Floating constants are converted to internal format as
    if at translation-time. The conversion of a floating constant shall not
    raise an exceptional condition or a floating-point exception at
    execution time." Runtime behavior is not the issue, and traps are not allowed.

    I agree. But the question is whether the compiler may choose to
    stop the compilation.

    There is a confusion in the standard, because 6.4.4p2 says
    "the value of a constant" while "value" is defined by 3.19
    and means the value of the object, whereas I suspect that
    6.4.4p2 intends to mean the *exact* value.

    The standard describes two cases: if infinities are supported (as they necessarily are when IEEE formats are used), INFINITY is required to
    expand to a constant expression that represents positive or unsigned infinity. This is not outside the range of representable values - that
    range includes either positive or unsigned infinity, so the constraint
    in 6.4.4p2 is not violated.

    The range includes all real numbers, but not infinities. No issues
    with INFINITY, but my remark was about the case where a user would
    write a constant like 0x1.0p1024 (or 1.0e999). Such a constant is in
    the range of floating-point numbers (which is the set of real numbers
    in this case), but it overflows with the IEEE 754 meaning, and both
    GCC and Clang emit a warning for this reason.

    Note that if the intent were "exceeds the range", the C standard
    should have said that.

    If infinities are not supported (which is therefore necessarily not an
    IEEE format), then INFINITY is required to expand to a constant that
    will overflow. This does violate that constraint, which means that a diagnostic message is required.

    This point is not clear and does not match what implementations
    consider as overflow.

    It's normally the case that, when a constraint is violated, the behavior
    is undefined. However, that's not because of anything the standard says
    about constraint violations in general. It's because, in most cases, the behavior is undefined "by the omission of any explicit definition of behavior." (4p2). However, this is one of the rare exceptions: there is
    no such omission. There is a general definition of the behavior that continues to apply in a perfectly clear fashion even in the event of overflow. Therefore, an implementation is required to assign a value to
    such a constant that is one of the two identified by that definition,
    either FLT_MAX or nextdownf(FLT_MAX).

    I think that I was initially confused by the meaning of "value"
    in 6.4.4p2, as it seems to imply that a converted value may be
    outside the range of representable values. It seems that it was
    written mainly with integer constants in mind.

    But there's still the fact that "overflow" is not defined (this
    term is used only when there are no infinities, though).

    But DBL_NORM_MAX is the relevant value for the general definition
    of "overflow" (on double). So in 7.12p4, "overflows" is not used
    correctly, at least not with the usual meaning.

    What do you consider the "general definition of overflow"?

    The one given by the standard in 7.12.1p5.

    I would have thought you were referring to 7.12.1p5, but I see no
    wording there that distinguishes between normalized and unnormalized
    values.

    "A floating result overflows if the magnitude of the mathematical
    result is finite but so large that the mathematical result cannot
    be represented without extraordinary roundoff error in an object
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    of the specified type."

    If the exact result is above the maximum normal value, there is
    likely to be an extraordinary roundoff error.

    Your comment made me realize that I had no idea how DBL_NORM_MAX could possibly be less than DBL_MAX. I did some searching, and discovered
    official text of a committee decision indicating that they are normally
    the same - the only exception known to the committee was systems that implemented a long double as the sum of a pair of doubles, for which
    LDBL_MAX == 2.0L*DBL_MAX, while LDBL_NORM_MAX is just slightly larger
    than DBL_MAX.

    The case LDBL_MAX == 2.0L*DBL_MAX is hypothetical, but
    allowed by the C standard. However, the more general fact that there
    may be finite values above the maximum normal floating-point number
    justifies the definition of macros like DBL_NORM_MAX. The intent is
    to say that if a computed value is larger than DBL_NORM_MAX, then
    there may have been a loss of accuracy. In error analysis, this is
    what is meant by "overflow". (Note that one may also have an overflow
    if one gets DBL_NORM_MAX, e.g. when DBL_NORM_MAX = DBL_MAX and
    rounding is toward 0, with an exact value ≥ DBL_MAX + 1 ulp.)

    The point of overflow and underflow exceptions is to signal that a
    conventional error analysis may no longer be valid.
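
    That DBL_NORM_MAX = DBL_MAX, round-toward-zero case can be seen at
    run time (a sketch assuming IEEE 754 doubles; GCC ignores the
    FENV_ACCESS pragma, so compile with -frounding-math there):

        #include <fenv.h>
        #include <float.h>
        #include <stdio.h>

        #pragma STDC FENV_ACCESS ON

        int main(void)
        {
            volatile double x = DBL_MAX;
            fesetround(FE_TOWARDZERO);
            feclearexcept(FE_ALL_EXCEPT);
            volatile double y = 2.0 * x;  /* rounds down to DBL_MAX */
            printf("y == DBL_MAX? %d, overflow raised? %d\n",
                   y == DBL_MAX, !!fetestexcept(FE_OVERFLOW));
            return 0;
        }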

    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a normalized floating point number that is larger than LDBL_NORM_MAX,
    which strikes me as a contradiction.

    Note that there is a requirement on the exponent: e ≤ e_max.

    In any event, INFINITY is required to expand into an expression of type "float", so if the only known exception involves long double, it's not
    very relevant.

    One could imagine a non-IEEE 754 system where float would not be
    a strict FP format. (I'm wondering whether there are attempts to
    replace the FP formats by unums, at least for testing purposes.)

    ...
    ... I'm not saying that there
    is a conflict, just that the text is ambiguous. If one follows
    the IEEE 754 definition, there are only two possible values
    (DBL_MAX and +Inf, thus excluding nextdown(DBL_MAX)).

    Yes, that was deliberate - it was intended to be compatible with IEC
    60559, but also to be sufficiently loose to allow use of non-IEC 60559
    floating point.

    But what is allowed is not clear for an IEEE 754 format (this does
    not affect the INFINITY macro, but users could write exact values
    larger than DBL_MAX + 1 ulp, for which nextdown(DBL_MAX) could be unexpected as the obtained value).

    It's unexpected because that would violate a requirement of IEEE 754,
    but the C standard doesn't require violating that requirement. Section 6.4.4.2p4 of the C standard allows such a constant to have any one of
    the three values (+infinity, FLT_MAX, or nextdownf(FLT_MAX)).
    Therefore, an implementation that wants to conform to both the C
    standard and IEEE 754 must select FLT_MAX. What's unclear or ambiguous
    about that?

    If Annex F is not claimed to be supported[*], this requirement would
    not be violated.

    [*] For instance, for systems that almost support this annex, but
    not completely.

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Mon Nov 8 13:50:00 2021
    On 11/8/21 5:56 AM, Vincent Lefevre wrote:
    In article <smah3q$a9f$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/7/21 9:44 PM, Vincent Lefevre wrote:...
    These rules are not about overflow. They are general rules.

    Yes, and they are sufficiently general that it is perfectly clear how
    they apply to the case when there is overflow.

    I've done some tests, and it is interesting to see that both GCC and
    Clang choose the IEEE 754 definition of overflow on floating-point
    constants, not yours (<sl9bqb$hf5$2@dont-email.me>).

    The only definition for overflow that I discussed is not mine, it
    belongs to the C standard: "A floating result overflows if the magnitude (absolute value) of the mathematical result is finite but so large that
    the mathematical result cannot be represented without extraordinary
    roundoff error in an object of the specified type." (7.12.1p5).

    ... For instance,
    the exact value of 0x1.fffffffffffff7p1023 is larger than DBL_MAX,
    but it doesn't trigger an overflow warning with GCC and Clang.

    No warning is mandated for overflows, so that doesn't contradict
    anything I said.

    I wasn't talking about overflow for its own sake, but only in the
    context of what the standard says about the value of floating point
    constants. What value does that constant have? Is it one of the three
    values permitted by 6.4.4.2p4? Is it, in particular, the value required
    by IEEE 754? If the answers to both questions are yes, it's consistent
    with everything I said.

    What is not defined is when a value overflows (there are different
    definitions). And what is the consequence of the overflow (at runtime,
    there may be traps).

    We're talking about floating point constants here. The standard clearly
    specifies that "Floating constants are converted to internal format as
    if at translation-time. The conversion of a floating constant shall not
    raise an exceptional condition or a floating-point exception at
    execution time." Runtime behavior is not the issue, and traps are not
    allowed.

    I agree. But the question is whether the compiler may choose to
    stop the compilation.

    I don't remember that issue having previously been raised.

    "The implementation shall not successfully translate a preprocessing translation unit containing a #error preprocessing directive unless it
    is part of a group skipped by conditional inclusion." (4p4).

    "The implementation shall be able to translate and execute at least one
    program that contains at least one instance of every one of the
    following limits:" (5.2.4.1p1).

    In all other cases, stopping compilation is neither mandatory nor
    prohibited.

    There is a confusion in the standard, because 6.4.4p2 says
    "the value of a constant" while "value" is defined by 3.19
    and means the value of the object, whereas I suspect that
    6.4.4p2 intends to mean the *exact* value.

    The term "representable value" is used in 23 places in the standard,
    including 6.4.4.2p4. That term would be redundant if the term "value"
    only had meaning when it could be represented. That interpretation would
    render all 23 of those clauses meaningless, including 6.4.4.2p4.

    The standard frequently uses the term "value" to refer to the
    mathematical value of something, which isn't necessarily representable
    in any type, and in particular, need not be representable in the
    particular type relevant to the discussion. This is usually done in the
    context of defining how the requirements imposed by the C standard
    depend upon whether or not the mathematical value is representable or in
    the range of representable values, as is the case in 6.4.4p2.

    I will agree that it would be clearer to either modify the definition of "value" to include such usage, or to define and consistently use some
    other term (such as "mathematical result", which is used for this
    purpose only in the discussions of floating point overflow and underflow).

    Note that IEEE 754 uses the same idea in its description of overflow:
    "...by what would have been the rounded floating point result were the
    exponent range unbounded."

    ...
    The standard describes two cases: if infinities are supported (as they
    necessarily are when IEEE formats are used), INFINITY is required to
    expand to a constant expression that represents positive or unsigned
    infinity. This is not outside the range of representable values - that
    range includes either positive or unsigned infinity, so the constraint
    in 6.4.4p2 is not violated.

    The range includes all real numbers, but not infinities.

    For an implementation that supports infinities (in other words, an implementation where infinities are representable), how do infinities
    fail to qualify as being within the range of representable values? Where
    is that exclusion specified? Such formats correspond to affinely
    extended real number systems, which differ from ordinary real number
    systems by including -infinity and +infinity. IEEE 754 specifies that infinities are to be interpreted in the affine sense.

    ... No issues
    with INFINITY, but my remark was about the case where a user would
    write a constant like 0x1.0p1024 (or 1.0e999). Such a constant is in
    the range of floating-point numbers (which is the set of real numbers
    in this case), but it overflows with the IEEE 754 meaning,

    It also overflows with the C standard's meaning.

    and both GCC and Clang emit a warning for this reason.

    Note that if the intent were "exceeds the range", the C standard
    should have said that.

    I'm sorry - I seem to have lost the thread of your argument. In which
    location in the current standard do you think the current wording would
    need to be changed to "exceeds the range" in order to support my argument?
    Which current phrase would need to be replaced, and why?

    If infinities are not supported (which is therefore necessarily not an
    IEEE format), then INFINITY is required to expand to a constant that
    will overflow. This does violate that constraint, which means that a
    diagnostic message is required.

    This point is not clear and does not match what implementations
    consider as overflow.

    Which implementations did you test on, which don't support infinities,
    in order to justify that conclusion? In my experience, such
    implementations are rare. The only systems I've ever used that didn't
    support infinities, failed to do so because they didn't support floating
    point at all.

    ...
    I think that I was initially confused by the meaning of "value"
    in 6.4.4p2, as it seems to imply that a converted value may be
    outside the range of representable values.

    Correct. This is an example of a context where it is referring to the mathematical value, rather than a necessarily representable value.

    ... It seems that it was
    written mainly with integer constants in mind.

    I think not - constants with a value that cannot be represented occur in integer constants, floating constants, and character constants, which is
    why that paragraph appears in 6.4.4p2. If it were meant only for integer constants, it would have been under 6.4.4.1.

    But there's still the fact that "overflow" is not defined (this
    term is used only when there are no infinities, though).

    7.12.1p5 is not marked as a definition for "overflows", but has the form
    of a definition. There is no restriction within 7.12.1 to
    implementations that don't support infinities.

    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format,
    LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
    normalized floating point number that is larger than LDBL_NORM_MAX,
    which strikes me as a contradiction.

    Note that there is a requirement on the exponent: e ≤ e_max.

    Yes, and DBL_MAX has e==e_max.

    ...
    But what is allowed is not clear for an IEEE 754 format (this does
    not affect the INFINITY macro, but users could write exact values
    larger than DBL_MAX + 1 ulp, for which nextdown(DBL_MAX) could be
    unexpected as the obtained value).

    It's unexpected because that would violate a requirement of IEEE 754,
    but the C standard doesn't require violating that requirement. Section
    6.4.4.2p4 of the C standard allows such a constant to have any one of
    the three values (+infinity, FLT_MAX, or nextdownf(FLT_MAX)).
    Therefore, an implementation that wants to conform to both the C
    standard and IEEE 754 must select FLT_MAX. What's unclear or ambiguous
    about that?

    I had originally intended that paragraph to be about INFINITY, where
    FLT_MAX is the relevant limit, but you were explicitly talking about DBL_MAX+1ulp, so I should have changed all instances of FLT in that
    paragraph to DBL.

    If Annex F is not claimed to be supported[*], this requirement would
    not be violated.

    And if Annex F were claimed to be supported, this requirement would
    still not be violated by giving that constant a value of DBL_MAX. That
    value satisfies all applicable requirements of either standard.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Tue Nov 9 02:48:19 2021
    In article <smbrgo$g4b$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/8/21 5:56 AM, Vincent Lefevre wrote:
    In article <smah3q$a9f$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/7/21 9:44 PM, Vincent Lefevre wrote:...
    These rules are not about overflow. They are general rules.

    Yes, and they are sufficiently general that it is perfectly clear how
    they apply to the case when there is overflow.

    I've done some tests, and it is interesting to see that both GCC and
    Clang choose the IEEE 754 definition of overflow on floating-point constants, not yours (<sl9bqb$hf5$2@dont-email.me>).

    The only definition for overflow that I discussed is not mine, it
    belongs to the C standard: "A floating result overflows if the magnitude (absolute value) of the mathematical result is finite but so large that
    the mathematical result cannot be represented without extraordinary
    roundoff error in an object of the specified type." (7.12.1p5).

    That's in the C standard. But in <sl9bqb$hf5$2@dont-email.me>, you
    said: "Overflow occurs when a floating constant is created whose
    value is greater than DBL_MAX or less than -DBL_MAX."

    So... I don't understand what you consider as an overflow.

    ... For instance, the exact value of 0x1.fffffffffffff7p1023 is
    larger than DBL_MAX, but it doesn't trigger an overflow warning
    with GCC and Clang.

    No warning is mandated for overflows, so that doesn't contradict
    anything I said.

    But that's what is implemented in practice, and in GCC, the
    condition is the same whether infinity is supported or not (see
    the code later).

    I wasn't talking about overflow for its own sake, but only in the
    context of what the standard says about the value of floating point constants. What value does that constant have? Is it one of the three
    values permitted by 6.4.4.2p4? Is it, in particular, the value required
    by IEEE 754? If the answers to both questions are yes, it's consistent
    with everything I said.

    The second answer is not "yes", in case nextdown(DBL_MAX) would be
    returned.

    I agree. But the question is whether the compiler may choose to
    stop the compilation.

    I don't remember that issue having previously been raised.

    "The implementation shall not successfully translate a preprocessing translation unit containing a #error preprocessing directive unless it
    is part of a group skipped by conditional inclusion." (4p4).

    "The implementation shall be able to translate and execute at least one program that contains at least one instance of every one of the
    following limits:" (5.2.4.1p1).

    In all other cases, stopping compilation is neither mandatory nor
    prohibited.

    Well, from this point of view, an implementation is free to regard
    an overflowing constant as not having a defined behavior and stop
    compilation.

    ...
    The standard describes two cases: if infinities are supported (as they
    necessarily are when IEEE formats are used), INFINITY is required to
    expand to a constant expression that represents positive or unsigned
    infinity. This is not outside the range of representable values - that
    range includes either positive or unsigned infinity, so the constraint
    in 6.4.4p2 is not violated.

    The range includes all real numbers, but not infinities.

    For an implementation that supports infinities (in other words, an implementation where infinities are representable), how do infinities
    fail to qualify as being within the range of representable values? Where
    is that exclusion specified?

    5.2.4.2.2p5. Note that it seems that it is intended to exclude
    some representable values from the range. Otherwise such a long
    specification of the range would not be needed.

    That said, it seems that either this specification is incorrect or there are
    several meanings of "range". For instance, 5.2.4.2.2p9 says "Except
    for assignment and cast (which remove all extra range and precision)",
    and here, the intent is to limit the range to the emax exponent of
    the considered type.

    Such formats correspond to affinely extended real number systems,
    which differ from ordinary real number systems by including
    -infinity and +infinity. IEEE 754 specifies that infinities are to
    be interpreted in the affine sense.

    Yes, but I'm not sure that the exclusion of infinities from the range
    has any consequence. For instance, 6.3.1.5 says:

    When a value of real floating type is converted to a real floating type,
    if the value being converted can be represented exactly in the new type,
    it is unchanged. If the value being converted is in the range of values
    that can be represented but cannot be represented exactly, the result is
    either the nearest higher or nearest lower representable value, chosen
    in an implementation-defined manner. If the value being converted is
    outside the range of values that can be represented, the behavior is
    undefined. [...]

    So, if infinity is representable in both types, we are in the first
    case ("can be represented exactly"), and the range is not used.

    and both GCC and Clang emit a warning for this reason.

    Note that if the intent were "exceeds the range", the C standard
    should have said that.

    I'm sorry - I seem to have lost the thread of your argument. In which location in the current standard do you think the current wording would
    need to be changed to "exceeds the range", in order to support my argument?
    Which current phrase would need to be replaced, and why?

    I don't remember exactly, but I think that was 7.12p4 to make it
    consistent with its footnote (which refers to 6.4.4).

    Still, there would be an issue with what 5.2.4.2.2p5 would really
    mean.

    If infinities are not supported (which is therefore necessarily not an
    IEEE format), then INFINITY is required to expand to a constant that
    will overflow. This does violate that constraint, which means that a
    diagnostic message is required.

    This point is not clear and does not match what implementations
    consider as overflow.

    Which implementations did you test on, which don't support infinities,
    in order to justify that conclusion?

    Note that the notion of overflow as defined by 7.12.1p5 (which is
    consistent with the particular case of IEEE 754) exists whether
    infinities are supported or not.

    And for implementations without infinities, see the GCC code
    (gcc/c-family/c-lex.c):

    if (REAL_VALUE_ISINF (real)
        || (const_type != type && REAL_VALUE_ISINF (real_trunc)))
      {
        *overflow = OT_OVERFLOW;
        if (!(flags & CPP_N_USERDEF))
          {
            if (!MODE_HAS_INFINITIES (TYPE_MODE (type)))
              pedwarn (input_location, 0,
                       "floating constant exceeds range of %qT", type);
            else
              warning (OPT_Woverflow,
                       "floating constant exceeds range of %qT", type);
          }
      }

    But there's still the fact that "overflow" is not defined (this
    term is used only when there are no infinities, though).

    7.12.1p5 is not marked as a definition for "overflows", but has the form
    of a definition. There is no restriction within 7.12.1 to
    implementations that don't support infinities.

    I agree. But see the beginning of this message.

    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format,
    LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
    normalized floating point number that is larger than LDBL_NORM_MAX,
    which strikes me as a contradiction.

    Note that there is a requirement on the exponent: e ≤ e_max.

    Yes, and DBL_MAX has e==e_max.

    No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
    have a larger exponent. The C2x draft says:

    maximum representable finite floating-point number; if that number
    is normalized, its value is (1 − b^(−p)) b^(e_max).

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Tue Nov 9 00:50:48 2021
    On 11/8/21 9:48 PM, Vincent Lefevre wrote:
    In article <smbrgo$g4b$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/8/21 5:56 AM, Vincent Lefevre wrote:
    In article <smah3q$a9f$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    ...
    The only definition for overflow that I discussed is not mine, it
    belongs to the C standard: "A floating result overflows if the magnitude
    (absolute value) of the mathematical result is finite but so large that
    the mathematical result cannot be represented without extraordinary
    roundoff error in an object of the specified type." (7.12.1p5).

    That's in the C standard. But in <sl9bqb$hf5$2@dont-email.me>, you
    said: "Overflow occurs when a floating constant is created whose
    value is greater than DBL_MAX or less than -DBL_MAX."

    So... I don't understand what you consider as an overflow.

    At that time, I was unaware of the existence of any floating point
    format where DBL_NORM_MAX < DBL_MAX. I've since acknowledged that such
    things can occur - but only in such obscure formats.

    I wasn't talking about overflow for it's own sake, but only in the
    context of what the standard says about the value of floating point
    constants. What value does that constant have? Is it one of the three
    values permitted by 6.4.4.2p4? Is it, in particular, the value required
    by IEEE 754? If the answers to both questions are yes, it's consistent
    with everything I said.

    The second answer is not "yes", in case nextdown(DBL_MAX) would be
    returned.

    I'm asking what value you observed - was it nextdown(DBL_MAX), DBL_MAX, +infinity, or something else? The first three are permitted by the C
    standard, the second one is mandated by IEEE 754, so I would expect an implementation that claimed conformance to both standards to choose
    DBL_MAX, and NOT nextdown(DBL_MAX). So - which value did you see?

    I agree. But the question is whether the compiler may choose to
    stop the compilation.

    I don't remember that issue having previously been raised.

    "The implementation shall not successfully translate a preprocessing
    translation unit containing a #error preprocessing directive unless it
    is part of a group skipped by conditional inclusion." (4p4).

    "The implementation shall be able to translate and execute at least one
    program that contains at least one instance of every one of the
    following limits:" (5.2.4.1p1).

    In all other cases, stopping compilation is neither mandatory nor
    prohibited.
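
    For instance, a sketch of the one directive for which 4p4 forbids
    successful translation:

      #if 1  /* not part of a group skipped by conditional inclusion */
      #error "4p4: no executable may be produced for this unit"
      #endif

    Any other diagnostic, including the one required by 6.4.4p2, leaves
    the implementation free to continue or to stop.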

    Well, from this point of view, an implementation is free to regard
    an overflowing constant as not having a defined behavior and stop compilation.

    What renders the behavior undefined? On an implementation that doesn't
    support infinities, it's a constraint violation - but constraint
    violations don't necessarily have undefined behavior. They usually have undefined behavior due to "omission of any explicit definition of the behavior", but there is in fact an explicit definition of the behavior
    that continues to apply even when that constraint is violated.
    And on an implementation that does support infinities, it isn't even a constraint violation.
    Whether or not a constraint is violated, as I said above, stopping
    compilation is neither mandatory nor prohibited, just as it is for
    most other programs.

    ...
    For an implementation that supports infinities (in other words, an
    implementation where infinities are representable), how do infinities
    fail to qualify as being within the range of representable values? Where
    is that exclusion specified?

    5.2.4.2.2p5. Note that it seems that it is intended to exclude
    some representable values from the range. Otherwise such a long
    specification of the range would not be needed.

    That clause correctly states that infinities do NOT qualify as floating
    point numbers. However, it also correctly refers to them as values. The relevant clauses refer to the range of representable values, not the
    range of representable floating point numbers. On such an
    implementation, infinities are representable and they are values.

    What are you referring to when you say "such a long specification"?

    That said, either this specification seems incorrect or there are
    several meanings of "range". For instance, 5.2.4.2.2p9 says "Except
    for assignment and cast (which remove all extra range and precision)",
    and here, the intent is to limit the range to the emax exponent of
    the considered type.

    I had to go back to n1570.pdf to find that wording. It was removed from n2310.pdf (2018-11-06). In n2596.pdf (2020-12-11), wording about the
    extra range was placed in footnote 22, referred to by 5.2.4.2.2p4, and
    is still there in the latest draft I have, n2731.pdf (2021-10-18).

    I believe that "extra range" refers to extra representable values that
    are supported by the evaluation format, but not by the format of the
    type itself. The extra range consists entirely of finite values, even if
    the full range is infinite for both formats.

    Such formats correspond to affinely extended real number systems,
    which differ from ordinary real number systems by including
    -infinity and +infinity. IEEE 754 specifies that infinities are to
    be interpreted in the affine sense.

    Yes, but I'm not sure that the exclusion of infinities from the range
    has any consequence. For instance, 6.3.1.5 says:

    When a value of real floating type is converted to a real floating type,
    if the value being converted can be represented exactly in the new type,
    it is unchanged. If the value being converted is in the range of values
    that can be represented but cannot be represented exactly, the result is
    either the nearest higher or nearest lower representable value, chosen
    in an implementation-defined manner. If the value being converted is
    outside the range of values that can be represented, the behavior is
    undefined. [...]

    So, if infinity is representable in both types, we are in the first
    case ("can be represented exactly"), and the range is not used.

    I agree - with respect to conversions between floating point types. What
    does that have to do with the conversion from a decimal string to a
    floating point type, which is described in 6.4.4.2p4? The decimal
    strings allowed by that clause cannot represent infinity - they can
    acquire an infinite value only by rounding, depending upon the default
    rounding mode.
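
    A sketch, assuming IEEE double with round-to-nearest: the constant
    below denotes a finite decimal value larger than DBL_MAX, and only
    the rounding described in 6.4.4.2p4 turns it into an infinity (GCC
    and Clang also diagnose it, as in the code quoted below):

      #include <math.h>
      #include <stdio.h>

      int main (void)
      {
        double d = 1e400;            /* finite mathematical value > DBL_MAX */
        printf ("%d\n", isinf (d));  /* expected: 1 under round-to-nearest  */
        return 0;
      }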

    and both GCC and Clang emit a warning for this reason.

    Note that if the intent were "exceeds the range", the C standard
    should have said that.

    I'm sorry - I seem to have lost the thread of your argument. In which
    location in the current standard do you think the current wording would
    need to be changed to "exceeds the range", in order to support my argument?
    Which current phrase would need to be replaced, and why?

    I don't remember exactly, but I think that was 7.12p4 to make it
    consistent with its footnote (which refers to 6.4.4).

    In the latest draft standard that I have, that wording is now in 7.12p7.
    I've already conceded that "overflows" is not necessarily the same as
    "exceeds the range". However, the only known exception is for a long
    double type, which can't apply to INFINITY, which is what 7.12p7 describes.

    ...
    If infinities are not supported (which is therefore necessarily not an IEEE format), then INFINITY is required to expand to a constant that
    will overflow. This does violate that constraint, which means that a
    diagnostic message is required.

    This point is not clear and does not match what implementations
    consider as overflow.

    Which implementations did you test on, which don't support infinities,
    in order to justify that conclusion?

    Note that the notion of overflow as defined by 7.12.1p5 (which is
    consistent with the particular case of IEEE 754) exists whether
    infinities are supported or not.

    Yes, but INFINITY is only required to overflow, which is what you were
    talking about, on implementations that don't support infinities. So, in
    order to justify saying that it "does not match what implementations
    consider as overflow", you must necessarily be referring to
    implementations that don't support infinities.

    And for implementations without infinities, see the GCC code
    (gcc/c-family/c-lex.c):

    if (REAL_VALUE_ISINF (real)
        || (const_type != type && REAL_VALUE_ISINF (real_trunc)))
      {
        *overflow = OT_OVERFLOW;
        if (!(flags & CPP_N_USERDEF))
          {
            if (!MODE_HAS_INFINITIES (TYPE_MODE (type)))
              pedwarn (input_location, 0,
                       "floating constant exceeds range of %qT", type);
            else
              warning (OPT_Woverflow,
                       "floating constant exceeds range of %qT", type);
          }
      }

    It's actually the behavior of that implementation in modes that do
    support infinities that is most relevant to this discussion - it labels
    an infinite value as exceeding the type's range, even if it does
    support infinities. Apparently they are using "range" to refer to the
    range of finite values - but I would consider the wording to be
    misleading without the qualifier "finite".

    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format,
    LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
    normalized floating point number that is larger than LDBL_NORM_MAX,
    which strikes me as a contradiction.

    Note that there is a requirement on the exponent: e ≤ e_max.

    Yes, and DBL_MAX has e==e_max.

    No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
    have a larger exponent. The C2x draft says:

    maximum representable finite floating-point number; if that number
    is normalized, its value is (1 − b^(−p)) b^(e_max).

    So, what is the value of e for LDBL_MAX in the pair-of-doubles format?
    What is the value of e_max? If LDBL_MAX does not have e==e_max, what is
    the largest representable value in that format that does have e==e_max?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Tue Nov 9 10:12:15 2021
    In article <smd27p$28v$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/8/21 9:48 PM, Vincent Lefevre wrote:
    In article <smbrgo$g4b$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/8/21 5:56 AM, Vincent Lefevre wrote:
    In article <smah3q$a9f$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    ...
    The only definition for overflow that I discussed is not mine, it
    belongs to the C standard: "A floating result overflows if the magnitude (absolute value) of the mathematical result is finite but so large that
    the mathematical result cannot be represented without extraordinary
    roundoff error in an object of the specified type." (7.12.1p5).

    That's in the C standard. But in <sl9bqb$hf5$2@dont-email.me>, you
    said: "Overflow occurs when a floating constant is created whose
    value is greater than DBL_MAX or less than -DBL_MAX."

    So... I don't understand what you consider as an overflow.

    At that time, I was unaware of the existence of any floating point
    format where DBL_NORM_MAX < DBL_MAX. I've since acknowledged that such
    things can occur - but only in such obscure formats.

    Even with IEEE 754 formats, values less than DBL_MAX + 1/2 ulp
    in magnitude do not yield an overflow in round-to-nearest (the
    default rounding mode in IEEE 754).
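
    A sketch of that borderline for IEEE binary64: the hexadecimal
    constant below exceeds DBL_MAX by 7/16 ulp, which is under the
    half-ulp threshold, so round-to-nearest maps it back to DBL_MAX:

      #include <float.h>
      #include <stdio.h>

      int main (void)
      {
        /* DBL_MAX is 0x1.fffffffffffffp+1023; the trailing 7 adds less
           than half an ulp, so no overflow occurs in round-to-nearest. */
        double d = 0x1.fffffffffffff7p+1023;
        printf ("%d\n", d == DBL_MAX);  /* expected: 1 */
        return 0;
      }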

    I wasn't talking about overflow for it's own sake, but only in the
    context of what the standard says about the value of floating point
    constants. What value does that constant have? Is it one of the three
    values permitted by 6.4.4.2p4? Is it, in particular, the value required
    by IEEE 754? If the answers to both questions are yes, it's consistent
    with everything I said.

    The second answer is not "yes", in case nextdown(DBL_MAX) would be returned.

    I'm asking what value you observed - was it nextdown(DBL_MAX), DBL_MAX, +infinity, or something else? The first three are permitted by the C standard, the second one is mandated by IEEE 754, so I would expect an implementation that claimed conformance to both standards to choose
    DBL_MAX, and NOT nextdown(DBL_MAX). So - which value did you see?

    This issue is not what one can observe on a subset of implementations,
    but what is possible. The value nextdown(DBL_MAX) does not make much
    sense when the implementation *knows* that the value is larger than
    DBL_MAX because it exceeds the range (there is a diagnostic to tell
    that to the user because of 6.4.4p2).

    [...]
    What renders the behavior undefined? On an implementation that doesn't support infinities, it's a constraint violation - but constraint
    violations don't necessarily have undefined behavior. They usually have undefined behavior due to "omission of any explicit definition of the behavior", but there is in fact an explicit definition of the behavior
    that continues to apply even when that constraint is violated.
    And on an implementation that does support infinities, it isn't even a constraint violation.

    Actually it is when the mathematical result exceeds the range. 6.5p5
    says: "If an /exceptional condition/ occurs during the evaluation of
    an expression (that is, if the result is not mathematically defined or
    not in the range of representable values for its type), the behavior
    is undefined." So this appears to be an issue when infinity is not
    supported.

    I suppose that when the standard defines something, it assumes the
    case where such an exceptional condition does not occur, unless
    explicitly said otherwise (that's the whole point of 6.5p5). And in
    the definitions concerning floating-point expressions, the standard
    never distinguishes between an exceptional condition or not. For
    instance, for addition, the standard just says "The result of the
    binary + operator is the sum of the operands." (on the real numbers,
    this operation is always mathematically well-defined, so the only
    issue is results that exceed the range, introduced by 6.5p5).
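
    A sketch of such an exceptional condition, assuming an implementation
    without infinities (with IEEE arithmetic the same expression simply
    rounds to +infinity):

      #include <float.h>

      double sum (void)
      {
        volatile double x = DBL_MAX;
        /* The mathematical result 2*DBL_MAX is well defined but not in
           the range of representable values, so 6.5p5 makes this
           evaluation undefined when infinities are not supported.  */
        return x + x;
      }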

    ...
    For an implementation that supports infinities (in other words, an
    implementation where infinities are representable), how do infinities
    fail to qualify as being within the range of representable values? Where is that exclusion specified?

    5.2.4.2.2p5. Note that it seems that it is intended to exclude
    some representable values from the range. Otherwise such a long specification of the range would not be needed.

    That clause correctly states that infinities do NOT qualify as
    floating point numbers.

    Note that there are inconsistencies in the standard about what
    it means by "floating-point numbers". It is sometimes used to
    mean the value of a floating type. For instance, the standard
    says for fabs: "The fabs functions compute the absolute value
    of a floating-point number x." But I really don't think that
    this function is undefined on infinities.

    That's probably why it says "*finite* floating-point number"
    and not just "floating-point number" (if it were clear that
    infinities do not qualify as floating-point numbers, the word
    "finite" would not be necessary).

    However, it also correctly refers to them as values. The relevant
    clauses refer to the range of representable values, not the range of representable floating point numbers. On such an implementation,
    infinities are representable and they are values.

    My point is that it says *real* numbers. And infinities are not
    real numbers.

    What are you referring to when you say "such a long specification"?

    If I understand what you wish (to include all representable values
    in the range), the standard could have said: "The minimum range of representable values for a floating type is the most negative number
    in that type through the most positive number in that type." That's
    simpler and shorter than the current text of 5.2.4.2.2p5.

    So leaving representable values (which are not FP numbers) outside
    the range may be intended.

    That said, either this specification seems incorrect or there are
    several meanings of "range". For instance, 5.2.4.2.2p9 says "Except
    for assignment and cast (which remove all extra range and precision)",
    and here, the intent is to limit the range to the emax exponent of
    the considered type.

    I had to go back to n1570.pdf to find that wording. It was removed from n2310.pdf (2018-11-06). In n2596.pdf (2020-12-11), wording about the
    extra range was placed in footnote 22, referred to by 5.2.4.2.2p4, and
    is still there in the latest draft I have, n2731.pdf (2021-10-18).

    I can still see this text in other similar places of the current
    draft N2731. For instance, 6.5.4p6 about cast operators:
    "[...] then the cast specifies a conversion even if the type of
    the expression is the same as the named type and removes any extra
    range and precision."

    I believe that "extra range" refers to extra representable values that
    are supported by the evaluation format, but not by the format of the
    type itself. The extra range consists entirely of finite values, even if
    the full range is infinite for both formats.

    This is what I believe too. But instead of "extra range and precision",
    the standard should have said values that are not representable exactly
    in the target floating type. Something like that.

    [...]
    I don't remember exactly, but I think that was 7.12p4 to make it
    consistent with its footnote (which refers to 6.4.4).

    In the latest draft standard that I have, that wording is now in 7.12p7.
    I've already conceded that "overflows" is not necessarily the same as "exceeds the range". However, the only known exception is for a long
    double type, which can't apply to INFINITY, which is what 7.12p7 describes.

    A value may overflow, but still be in the range of representable
    values (if infinities are not supported, 5.2.4.2.2p5 just specifies
    a minimum range). And conversely, something like DBL_MAX + a tiny
    number may not be regarded as an overflow, but be outside the range
    (if infinities are not supported).

    Note that the notion of overflow as defined by 7.12.1p5 (which is consistent with the particular case of IEEE 754) exists whether
    infinities are supported or not.

    Yes, but INFINITY is only required to overflow, which is what you were talking about, on implementations that don't support infinities. So, in
    order to justify saying that it "does not match what implementations
    consider as overflow", you must necessarily be referring to
    implementations that don't support infinities.

    I was saying two things:
    * What an implementation regards as an overflow, whether infinities
    are supported or not.
    * With GCC, what happens when infinities are not supported,
    according to its code.

    And for implementations without infinities, see the GCC code
    (gcc/c-family/c-lex.c):

    if (REAL_VALUE_ISINF (real)
        || (const_type != type && REAL_VALUE_ISINF (real_trunc)))
      {
        *overflow = OT_OVERFLOW;
        if (!(flags & CPP_N_USERDEF))
          {
            if (!MODE_HAS_INFINITIES (TYPE_MODE (type)))
              pedwarn (input_location, 0,
                       "floating constant exceeds range of %qT", type);
            else
              warning (OPT_Woverflow,
                       "floating constant exceeds range of %qT", type);
          }
      }

    It's actually the behavior of that implementation in modes that do
    support infinities that is most relevant to this discussion - it labels
    an infinite value as exceeding the type's range, even if it does
    support infinities. Apparently they are using "range" to refer to the
    range of finite values - but I would consider the wording to be
    misleading without the qualifier "finite".

    Yes, the wording in the "else" case (where infinities are supported)
    is incorrect. I had reported a bug:

    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103123

    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format,
    LDBL_MAX is represented by a value with f_1 = 1, and therefore is a
    normalized floating point number that is larger than LDBL_NORM_MAX,
    which strikes me as a contradiction.

    Note that there is a requirement on the exponent: e ≤ e_max.

    Yes, and DBL_MAX has e==e_max.

    No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
    have a larger exponent. The C2x draft says:

    maximum representable finite floating-point number; if that number
    is normalized, its value is (1 − b^(−p)) b^(e_max).

    So, what is the value of e for LDBL_MAX in the pair-of-doubles format?

    It should be DBL_MAX_EXP. What happens with double-double is that
    for the maximum exponent of double, not all precision-p numbers
    are representable (here, p = 106 = 2 * 53 historically, though
    107 could actually be used thanks to the constraint below and the
    limitation on the exponent discussed here).

    The reason is that there is a constraint on the format in order
    to make the double-double algorithms fast enough: if (x1,x2) is
    a valid double-double number, then x1 must be equal to x1 + x2
    rounded to nearest. So LDBL_MAX has the form:

    .111...1110111...111 * 2^(DBL_MAX_EXP)

    where both sequences 111...111 have 53 bits. Values above this
    number would increase the exponent of x1 to DBL_MAX_EXP + 1,
    which is above the maximum exponent for double; thus such values
    are not representable.

    The consequence is that e_max < DBL_MAX_EXP.

    What is the value of e_max?

    DBL_MAX_EXP - 1

    If LDBL_MAX does not have e==e_max,

    (LDBL_MAX has exponent e = e_max + 1.)

    what is the largest representable value in that format that does
    have e==e_max?

    Some value very close to 2^e_max: x1 = 2^e_max and x2 = -DBL_TRUE_MIN.
    Note that it does not fit the floating-point model because it is not representable with a p-bit precision.

    And LDBL_NORM_MAX = (1 − 2^(−p)) 2^(e_max) as specified; it is
    represented by

    x1 = 2^(e_max) = .1 * 2^DBL_MAX_EXP
    x2 = -2^(e_max-p)

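    A numeric sketch of those two values, assuming the historical
    double-double layout (p = 106, e_max = DBL_MAX_EXP - 1); the hex
    constants below just spell out the representations given above:

      #include <float.h>

      /* LDBL_MAX: x1 = DBL_MAX, x2 = the largest double below ulp(x1)/2
         (ulp(DBL_MAX) = 2^971), giving two 53-bit runs of ones separated
         by a single zero bit.  */
      static const double x1 = 0x1.fffffffffffffp+1023;  /* DBL_MAX */
      static const double x2 = 0x1.fffffffffffffp+969;

      /* LDBL_NORM_MAX = (1 - 2^-106) * 2^1023:
         y1 = 2^1023, y2 = -2^(1023 - 106).  */
      static const double y1 = 0x1p+1023;
      static const double y2 = -0x1p+917;
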
    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vincent Lefevre on Tue Nov 9 07:13:02 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    I'm wondering if you have resolved your original uncertainty
    about the behavior of INFINITY in an implementation that does
    not support infinities?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Tue Nov 9 12:51:39 2021
    On 11/9/21 5:12 AM, Vincent Lefevre wrote:
    In article <smd27p$28v$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/8/21 9:48 PM, Vincent Lefevre wrote:
    In article <smbrgo$g4b$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    ...
    I wasn't talking about overflow for it's own sake, but only in the
    context of what the standard says about the value of floating point
    constants. What value does that constant have? Is it one of the three
    values permitted by 6.4.4.2p4? Is it, in particular, the value required by IEEE 754? If the answers to both questions are yes, it's consistent with everything I said.

    The second answer is not "yes", in case nextdown(DBL_MAX) would be
    returned.

    I'm asking what value you observed - was it nextdown(DBL_MAX), DBL_MAX,
    +infinity, or something else? The first three are permitted by the C
    standard, the second one is mandated by IEEE 754, so I would expect an
    implementation that claimed conformance to both standards to choose
    DBL_MAX, and NOT nextdown(DBL_MAX). So - which value did you see?

    This issue is not what one can observe on a subset of implementations,
    but what is possible.

    Why does it matter to you that such implementations are possible? No
    such implementation can qualify as conforming to IEEE 754 - so what? The
    C standard very deliberately does NOT require conformance to IEEE 754,
    and what it requires in areas that are also covered by IEEE 754 is
    deliberately more lenient than what IEEE 754 requires, precisely so C
    can be implemented on platforms where floating point hardware that can't
    meet IEEE 754's accuracy is installed. That's why the __STDC_IEC_*
    macros exist - to allow a program to determine whether an implementation
    claims to conform to some or all of the requirements of IEC 60559
    (==IEEE 754). That's why those macros are described in the section
    titled "Conditional feature macros."

    Two standards do not (as you claim in the Subject: header of this
    thread) contradict each other just because they say different things
    about the same situation. If one standard provides a set containing one
    or more options, and the other standard provides a different set of one
    or more options, the two standards contradict each other only if there's
    no overlap between the two sets of options. So long as there is at least
    one option that meets the requirements of both standards, they don't
    contradict each other.

    People do not create full implementations of C just for the fun of it
    (well, most people don't). In particular, they don't create an
    implementation that conforms to the C standard but not to IEC 60559 by
    accident or laziness. In general, you can safely assume that any such implementation did so because there was some inconvenience associated
    with conforming to IEC 60559 that they wished to avoid. If the C
    standard were changed to mandate conformance with IEC 60559, some of
    those implementations might change to conform with that standard, but
    many (possibly most) such implementations would respond by deciding to
    not bother conforming to that version of the C standard, because
    conforming would be too inconvenient.

    ... The value nextdown(DBL_MAX) does not make much
    sense when the implementation *knows* that the value is larger than
    DBL_MAX because it exceeds the range (there is a diagnostic to tell
    that to the user because of 6.4.4p2).

    You misunderstand the purpose of the specification in 6.4.4.2p4. It was
    not intended that a floating point implementation would generate the
    nearest representable value, and that the implementation of C would then arbitrarily choose one of the other two adjacent representable
    values. The reason was to accommodate floating point implementations
    that couldn't meet the accuracy requirements of IEC 60559. The
    implementation asks the floating point hardware to calculate what the
    value is, the hardware does its best to accurately calculate the value,
    but its best isn't good enough to qualify as conforming to IEC 60559.
    It might take some shortcuts or simplifications that make it faster or
    simpler than an IEC 60559-conforming unit, at the cost of being less accurate. It
    returns a value that, incorrectly, is not greater than DBL_MAX, and the
    wording in 6.4.4.2p4 gives the implementation permission to use that
    incorrect number, so long as it isn't smaller than nextdown(DBL_MAX).

    ...
    Actually it is when the mathematical result exceeds the range. 6.5p5
    says: "If an /exceptional condition/ occurs during the evaluation of
    an expression (that is, if the result is not mathematically defined or
    not in the range of representable values for its type), the behavior
    is undefined." So this appears to be an issue when infinity is not
    supported.

    Conversion of a floating point constant into a floating point value is
    not "evaluation of an expression", and therefore is not covered by
    6.5p5. Such conversions are required to occur "as-if at translation
    time", and exceptional conditions are explicitly prohibited.


    I suppose that when the standard defines something, it assumes the
    case where such an exceptional condition does not occur, unless
    explicitly said otherwise (that's the whole point of 6.5p5). And in
    the definitions concerning floating-point expressions, the standard
    never distinguishes between an exceptional condition or not. For
    instance, for addition, the standard just says "The result of the
    binary + operator is the sum of the operands." (on the real numbers,
    this operation is always mathematically well-defined, so the only
    issue is results that exceed the range, introduced by 6.5p5).

    The standard is FAR more lenient with regard to floating point
    operations than it is for floating point constants:
    "The accuracy of the floating-point operations ( + , - , * , / ) and of
    the library functions in <math.h> and <complex.h> that return
    floating-point results is implementation-defined, as is the accuracy of
    the conversion between floating-point internal representations and
    string representations performed by the library functions in <stdio.h>, <stdlib.h>, and <wchar.h>. The implementation may state that the
    accuracy is unknown." (5.2.4.2.2p8).

    That wording allows an implementation to implement floating point
    arithmetic so inaccurately that it can conclude that the expression
    LDBL_MAX - LDBL_MIN < LDBL_MIN - LDBL_MAX is true. Note: the comparison operators (== != < > <= >=) are not covered by 5.2.4.2.2p8, but the
    subtraction operator is.
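
    A sketch of that pathological (but conforming) possibility:

      #include <float.h>

      /* 5.2.4.2.2p8 places no accuracy floor on the two subtractions, so
         a conforming implementation could evaluate this to 1; any sane
         one yields 0.  The comparison itself is not the inaccurate part;
         the operands are.  */
      int pathological (void)
      {
        return (LDBL_MAX - LDBL_MIN) < (LDBL_MIN - LDBL_MAX);
      }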

    I don't approve of this situation; I can't imagine any good reason for implementing floating point operations as inaccurately as the standard
    allows them to be implemented. The standard should provide some more
    meaningful requirements. They don't have to be very strong - they could
    be weak enough that every known serious floating point implementation
    could meet them, and still be immensely stronger than the current
    requirements. Any platform where floating point isn't actually needed
    should simply be allowed to opt out of supporting floating point
    entirely, rather than being required to support it but allowed to
    implement it that badly. That would be safer for all concerned.

    However, those incredibly loose requirements are what the standard
    actually says.

    ...
    For an implementation that supports infinities (in other words, an
    implementation where infinities are representable), how do infinities
    fail to qualify as being within the range of representable values? Where >>>> is that exclusion specified?

    5.2.4.2.2p5. Note that it seems that it is intended to exclude
    some representable values from the range. Otherwise such a long
    specification of the range would not be needed.

    That clause correctly states that infinities do NOT qualify as
    floating point numbers.

    Note that there are inconsistencies in the standard about what
    it means by "floating-point numbers". It is sometimes used to
    mean the value of a floating type. For instance, the standard
    says for fabs: "The fabs functions compute the absolute value
    of a floating-point number x." But I really don't think that
    this function is undefined on infinities.

    If __STDC_IEC_60559_BFP__ is pre#defined by the implementation, F.10.4.3
    not only allows fabs(±∞), it explicitly mandates that it return +∞.
    (Note: if you see odd symbols on the previous line, they were supposed
    to be infinities.)

    However, it also correctly refers to them as values. The relevant
    clauses refer to the range of representable values, not the range of
    representable floating point numbers. On such an implementation,
    infinities are representable and they are values.

    My point is that it says *real* numbers. And infinities are not
    real numbers.

    In n2731.pdf, 5.2.4.2.2p5 says "An implementation may give zero and
    values that are not floating-point numbers (such as infinities
    and NaNs) a sign or may leave them unsigned. Wherever such values are
    unsigned, any requirement in this document to retrieve the sign shall
    produce an unspecified sign, and any requirement to set the sign shall
    be ignored."
    Nowhere in that clause does it use the term "real".
    Are you perhaps referring to 5.2.4.2.2p7?

    ...
    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format, LDBL_MAX is represented by a value with f_1 = 1, and therefore is a normalized floating point number that is larger than LDBL_NORM_MAX, which strikes me as a contradiction.

    Note that there is a requirement on the exponent: e ≤ e_max.

    Yes, and DBL_MAX has e==e_max.

    No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
    have a larger exponent. The C2x draft says:

    maximum representable finite floating-point number; if that number
    is normalized, its value is (1 − b^(−p)) b^(e_max).

    So, what is the value of e for LDBL_MAX in the pair-of-doubles format?

    It should be DBL_MAX_EXP. What happens with double-double is that
    for the maximum exponent of double, not all precision-p numbers
    are representable (here, p = 106 = 2 * 53 historically, though
    107 could actually be used thanks to the constraint below and the
    limitation on the exponent discussed here).

    The reason is that there is a constraint on the format in order
    to make the double-double algorithms fast enough: if (x1,x2) is
    a valid double-double number, then x1 must be equal to x1 + x2
    rounded to nearest. So LDBL_MAX has the form:

    .111...1110111...111 * 2^(DBL_MAX_EXP)

    where both sequences 111...111 have 53 bits. Values above this
    number would increase the exponent of x1 to DBL_MAX_EXP + 1,
    which is above the maximum exponent for double; thus such values
    are not representable.

    The consequence is that e_max < DBL_MAX_EXP.

    What is the value of e_max?

    DBL_MAX_EXP - 1

    If LDBL_MAX does not have e==e_max,

    (LDBL_MAX has exponent e = e_max + 1.)

    That doesn't work. 5.2.4.2.2p2 and p3 both specify that floating point
    numbers must have e_min <= e && e <= e_max. LDBL_MAX is defined as the
    "maximum finite floating point number". A value for which e > e_max
    can't qualify as a floating point number, and therefore in particular
    can't qualify as the maximum finite floating point number. An
    implementation that uses the sum-of-pair-of-doubles floating point
    format has two options: increase e_max high enough to include the value
    you specify for LDBL_MAX, or decrease LDBL_MAX to a value low enough to
    have e<=e_max.

    Key point: most items in 5.2.4.2.2 have two parts: a description, and an expression involving the parameters of the floating point format. For
    formats that are a good fit to the C standard's floating point model,
    those formulas give the exactly correct result. For other formats, the description is what specifies what the result must be; the formula
    should be treated only as an example that might not apply.

    Those formulas were written on an implicit assumption that becomes
    obvious only when you try to apply them to a format that violates the assumption: every base_b digit from f_1 to f_p can freely be set to any
    value from 0 to b-1. In particular, the formula for LDBL_MAX was based
    upon the assumption that all of those values were set to b-1, and e was
    set to e_max.
    A pair-of-doubles format could fit that assumption if a restriction were imposed that says that a pair (x1, x2) is allowed only if x2 == 0 || (
    1 ulp of x1 > x2 && x2 >= 0.5 ulp of x1). (That condition needs to be
    modified to give the right requirements for negative numbers). Such an implementation could, with perfect accuracy, be described using
    LDBL_MANT_DIG == 2*DBL_MANT_DIG and LDBL_MAX_EXP == DBL_MAX_EXP.

    However, the pair-of-doubles format you've described doesn't impose such requirements. The value of p must be high enough that, for any pair (x1,
    x2) where x1 is finite and x2 is non-zero which is meant to qualify as representing a floating point number, p covers both the most significant
    digit of x1, and the least significant digit of any non-zero x2, no
    matter how large the ratio x1/x2 is. Whenever that ratio is high enough,
    f_k for most values of k can only be 0. As a result, one of the
    assumptions behind the formulas in 5.2.4.2.2 isn't met, so those
    formulas aren't always valid for such a format - but the descriptions
    still apply.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Wed Nov 10 12:48:34 2021
    In article <smecfc$jai$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    Why does it matter to you that such implementations are possible?

    When writing a portable program, one wants it to behave correctly
    even on untested implementations (which can be known implementations
    but without a machine available to test the program, implementations
    unknown to the developer, and possible future implementations).

    This is also useful for formal proofs that don't stick to a particular implementation.

    No such implementation can qualify as conforming to IEEE 754 - so
    what? The C standard very deliberately does NOT require conformance
    to IEEE 754,

    This is not the point. IEEE 754 has great properties, as it more
    or less ensures sane behavior. If the implementation does not
    conform to IEEE 754, one should still expect a sane behavior (if
    well-defined), and the C standard should ensure that.

    For instance, one should expect that HUGE_VALF ≤ INFINITY and
    FLT_MAX ≤ INFINITY.
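
    A sketch of those expectations as runtime checks; nothing in the
    current standard guarantees them on a non-IEEE implementation, which
    is the complaint:

      #include <assert.h>
      #include <float.h>
      #include <math.h>

      int main (void)
      {
        assert (HUGE_VALF <= INFINITY);  /* sane, but not required */
        assert (FLT_MAX <= INFINITY);    /* likewise               */
        return 0;
      }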

    ... The value nextdown(DBL_MAX) does not make much
    sense when the implementation *knows* that the value is larger than
    DBL_MAX because it exceeds the range (there is a diagnostic to tell
    that to the user because of 6.4.4p2).

    You misunderstand the purpose of the specification in 6.4.4.2p4. It was
    not intended that a floating point implementation would generate the
    nearest representable value, and that the implementation of C would then arbitrarily choose one of the other two adjacent representable
    values. The reason was to accommodate floating point implementations
    that couldn't meet the accuracy requirements of IEC 60559.

    You didn't understand. I repeat. The implementation *knows* that the
    value is larger than DBL_MAX. This knowledge is *required* by the C
    standard so that the required diagnostic can be emitted (due to the
    constraint in 6.4.4p2). So there is no reason that the implementation
    would assume that the value can be less than DBL_MAX.

    This is not an accuracy issue, or if there is one, it occurs at the
    level of the 6.4.4p2 constraint.

    ...
    Actually it is when the mathematical result exceeds the range. 6.5p5
    says: "If an /exceptional condition/ occurs during the evaluation of
    an expression (that is, if the result is not mathematically defined or
    not in the range of representable values for its type), the behavior
    is undefined." So this appears to be an issue when infinity is not supported.

    Conversion of a floating point constant into a floating point value is
    not "evaluation of an expression", and therefore is not covered by
    6.5p5. Such conversions are required to occur "as-if at translation
    time", and exceptional conditions are explicitly prohibited.

    But what about constant expressions?

    For instance, assuming no IEEE 754 support, what is the behavior of
    the following code?

    static double x = DBL_MAX + DBL_MAX;

    (We are in the case of a result that is mathematically defined, but
    not in the range of representable values for its type.)

    If one ignores 6.5p5 because this is a translation-time computation,
    I find the standard rather ambiguous on what is required.

    Note that there is a constraint 6.6p4 "Each constant expression shall
    evaluate to a constant that is in the range of representable values
    for its type." but this is of the same kind as 6.4.4p2 for constants.

    And what about the following?

    static int i = 2 || 1 / 0;

    Here, 1 / 0 is a constant expression that doesn't meet constraint
    6.6p4. So a diagnostic would be required (even though the behavior
    is well-defined)?

    Note: There was a DR resolution to justify that there should not
    be a diagnostic. This DR said that since 1 / 0 was not meeting the
    constraint, it was not regarded as a constant expression. But if
    one applies this interpretation on constants, this means that if
    the value is not in the range of representable values, then it is
    not regarded as a constant, thus making the behavior undefined in
    this case.

    Note that there are inconsistencies in the standard about what
    it means by "floating-point numbers". It is sometimes used to
    mean the value of a floating type. For instance, the standard
    says for fabs: "The fabs functions compute the absolute value
    of a floating-point number x." But I really don't think that
    this function is undefined on infinities.

    If __STDC_IEC_60559_BFP__ is pre#defined by the implementation, F10.4.3
    not only allows fabs (±∞), it explicitly mandates that it return +∞.

    The issue is when __STDC_IEC_60559_BFP__ is not defined but infinities
    are supported (as allowed by the standard).

    However, it also correctly refers to them as values. The relevant
    clauses refer to the range of representable values, not the range of
    representable floating point numbers. On such an implementation,
    infinities are representable and they are values.

    My point is that it says *real* numbers. And infinities are not
    real numbers.

    In n2731.pdf, 5.2.4.2.2p5 says "An implementation may give zero and
    values that are not floating-point numbers (such as infinities
    and NaNs) a sign or may leave them unsigned. Wherever such values are unsigned, any requirement in this document to retrieve the sign shall
    produce an unspecified sign, and any requirement to set the sign shall
    be ignored."
    Nowhere in that clause does it use the term "real".
    Are you perhaps referring to 5.2.4.2.2p7?

    Sorry for the confusion, I should have said that this came from C17.
    Indeed, this was renumbered to 5.2.4.2.2p7 in N2731.

    (I'm considering both C17 and C2x N2731, but perhaps one should
    consider only C2x N2731 since it fixes FP issues from the past
    standards.)

    ...
    However, I'm confused about how this connects to the standard's
    definition of normalized floating-point numbers: "f_1 > 0"
    (5.2.4.2.2p4). It seems to me that, even for the pair-of-doubles format,
    LDBL_MAX is represented by a value with f_1 = 1, and therefore is a normalized floating point number that is larger than LDBL_NORM_MAX, which strikes me as a contradiction.

    Note that there is a requirement on the exponent: e ≤ e_max.

    Yes, and DBL_MAX has e==e_max.

    No, not necessarily. DBL_NORM_MAX has e == e_max. But DBL_MAX may
    have a larger exponent. The C2x draft says:

    maximum representable finite floating-point number; if that number
    is normalized, its value is (1 − b^(−p)) b^(e_max).

    So, what is the value of e for LDBL_MAX in the pair-of-doubles format?

    It should be DBL_MAX_EXP. What happens with double-double is that
    for the maximum exponent of double, not all precision-p numbers
    are representable (here, p = 106 = 2 * 53 historically, though
    107 could actually be used thanks to the constraint below and the limitation on the exponent discussed here).

    The reason is that there is a constraint on the format in order
    to make the double-double algorithms fast enough: if (x1,x2) is
    a valid double-double number, then x1 must be equal to x1 + x2
    rounded to nearest. So LDBL_MAX has the form:

    .111...1110111...111 * 2^(DBL_MAX_EXP)

    where both sequences 111...111 have 53 bits. Values above this
    number would increase the exponent of x1 to DBL_MAX_EXP + 1,
    which is above the maximum exponent for double; thus such values
    are not representable.

    The consequence is that e_max < DBL_MAX_EXP.

    What is the value of e_max?

    DBL_MAX_EXP - 1

    If LDBL_MAX does not have e==e_max,

    (LDBL_MAX has exponent e = e_max + 1.)

    That doesn't work. 5.2.4.2.2p2 and p3 both specify that floating point numbers must have e_min <= e && e <= e_max.

    Yes, *floating-point numbers*.

    LDBL_MAX is defined as the "maximum finite floating point number".

    I'd see this as a defect in N2731. As I was saying earlier, the
    standard does not use "floating-point number" in a consistent way.
    This was discussed, but it seems that not everything was fixed.
    As an attempt to clarify this point, "normalized" was added, but
    this may not have been the right thing.

    The purpose of LDBL_MAX is to allow for a finite value larger
    than LDBL_NORM_MAX, which is the maximum floating-point number
    following the 5.2.4.2.2p3 definition. LDBL_NORM_MAX was introduced
    precisely because LDBL_MAX does not necessarily follow the model
    of 5.2.4.2.2p3 (i.e. LDBL_MAX isn't necessarily a floating-point
    number).

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to Tim Rentsch on Wed Nov 10 13:16:40 2021
    In article <861r3pbbwh.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    I'm wondering if you have resolved your original uncertainty
    about the behavior of INFINITY in an implementation that does
    not support infinities?

    I suspect that by saying "overflow", the standard actually meant that
    the result is not in the range of representable values. This is the
    only way the footnote "In this case, using INFINITY will violate the
    constraint in 6.4.4 and thus require a diagnostic." can make sense
    (the constraint in 6.4.4 is about the range, not overflow). But IMHO,
    the failing constraint makes the behavior undefined; it actually makes
    the program erroneous.

    Similarly, on

    static int i = 1 / 0;
    int main (void)
    {
    return 0;
    }

    GCC fails to translate the program due to the failing constraint:

    tst.c:1:16: error: initializer element is not constant
    1 | static int i = 1 / 0;
      |                ^

    (this is not just a diagnostic, GCC does not generate an executable).

    Ditto with Clang:

    tst.c:1:18: error: initializer element is not a compile-time constant
    static int i = 1 / 0;
                   ~~^~~

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vincent Lefevre on Wed Nov 10 08:02:24 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <861r3pbbwh.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    I'm wondering if you have resolved your original uncertainty
    about the behavior of INFINITY in an implementation that does
    not support infinities?

    I suspect that by saying "overflow", the standard actually meant that
    the result is not in the range of representable values. This is the
    only way the footnote "In this case, using INFINITY will violate the constraint in 6.4.4 and thus require a diagnostic." can make sense
    (the constraint in 6.4.4 is about the range, not overflow). But IMHO,
    the failing constraint makes the behavior undefined, actually makes
    the program erroneous.

    Suppose we have an implementation that does not support
    infinities, a range of double and long double up to about ten to
    the 99999, and ask it to translate the following .c file

    double way_too_big = 1.e1000000;

    This constant value violates the constraint in 6.4.4. Do you
    think this .c file (and any program it is part of) has undefined
    behavior? If so, do you think any constraint violation implies
    undefined behavior, or just some of them?

    Similarly, on

    static int i = 1 / 0;
    int main (void)
    {
    return 0;
    }

    GCC fails to translate the program due to the failing constraint:

    tst.c:1:16: error: initializer element is not constant
    1 | static int i = 1 / 0;
      |                ^

    (this is not just a diagnostic, GCC does not generate an
    executable).

    Note that the C standard does not distinguish between "errors"
    and "warnings" in diagnostic messages. Either is allowed
    regardless of whether undefined behavior is present.

    Ditto with Clang:

    tst.c:1:18: error: initializer element is not a compile-time constant
    static int i = 1 / 0;
                   ~~^~~

    The messages indicate that the failure is not about exceeding the
    range of a type, but rather about satisfying the constraints for
    constant expressions, in particular 6.6 p4, which says in part

    Each constant expression shall evaluate to a constant [...]

    The problem here is that 1/0 doesn't evaluate to anything,
    because division by 0 is not defined. Any question of range of
    representable values doesn't enter into it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Wed Nov 10 12:03:03 2021
    On 11/10/21 7:48 AM, Vincent Lefevre wrote:
    In article <smecfc$jai$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    Why does it matter to you that such implementations are possible?

    When writing a portable program, one wants it to behave correctly
    even on untested implementations (which can be known implementations
    but without a machine available to test the program, implementations
    unknown to the developer, and possible future implementations).

    Yes, but it's also necessary to understand as "correct" any behavior
    that conforms to the relevant requirements, which may include
    conformance with relevant standards. Such implementations produce
    behavior that is correct according to the C standard. Such
    implementations should make no claim to conform to IEEE 754. If so, the
    fact that they don't conform to it doesn't render their results incorrect.

    ...
    ... The value nextdown(DBL_MAX) does not make much
    sense when the implementation *knows* that the value is larger than
    DBL_MAX because it exceeds the range (there is a diagnostic to tell
    that to the user because of 6.4.4p2).

    You misunderstand the purpose of the specification in 6.4.4.2p4. It was
    not intended that a floating point implementation would generate the
    nearest representable value, and that the implementation of C would then
    arbitrarily choose to pick one of the other two adjacent representable
    values. The reason was to accommodate floating point implementations
    that couldn't meet the accuracy requirements of IEC 60559.

    You didn't understand. I repeat. The implementation *knows* that the
    value is larger than DBL_MAX. This knowledge is *required* by the C
    standard so that the required diagnostic can be emitted (due to the constraint in 6.4.4p2). So there is no reason that the implementation
    would assume that the value can be less than DBL_MAX.

    This is not an accuracy issue, or if there is one, it occurs at the
    level of the 6.4.4p2 constraint.

    I assume we've been talking about implementations that conform to the C standard, right? Otherwise there's nothing meaningful that can be said.

    6.4.4.2p4 describes accuracy requirements that allow the result you find objectionable. I've been talking about the fact that those requirements
    are a little bit more lenient than those imposed by IEEE 754, because
    those looser requirements allow a slightly simpler implementation, one
    which might use up less code space or execute somewhat faster, at the
    cost of lower accuracy.

    However, it's not a lot less accuracy. Please be specific in your
    answers to the following questions. Identify a specific IEEE 754 format
    and the actual numerical values for that format, printed with enough
    digits to see the differences between them:

    * How big is the largest value described by a floating point constant,
    that must be rounded down to DBL_MAX?
    * What is the value of DBL_MAX?
    * What is the value of nextdown(DBL_MAX)?
    * How big is the difference between the first two values?
    * How big is the difference between the second and third values?

    As you should see, the maximum error allowed by the C standard is not enormously larger than the maximum error allowed by IEEE 754.
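
    As a concrete sketch for IEEE 754 binary64 (using nextafter, the
    portable C99 spelling of what C23 calls nextdown):

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void)
    {
        double max = DBL_MAX;
        double pred = nextafter(max, 0.0);  /* nextdown(DBL_MAX) */
        printf("DBL_MAX            = %.17g\n", max);        /* 1.7976931348623157e+308 */
        printf("nextdown(DBL_MAX)  = %.17g\n", pred);       /* 1.7976931348623155e+308 */
        printf("one ulp at the top = %.17g\n", max - pred); /* about 2.0e+292 */
        return 0;
    }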

    You're worried about the possibility of an implementation conforming to
    the C standard by returning nextdown(DBL_MAX), despite the fact that, in
    order to conform, the implementation would also have to generate that diagnostic message? This means that there must be a block of code in the compiler somewhere, which issues that diagnostic, and which only gets
    executed when that constraint is violated, but for some reason the
    implementor chose not to add code to that block to set the value to
    DBL_MAX. If you're worried about that possibility, that implies that you
    can imagine a reason why someone might do that. What might that reason be?

    For the sake of argument, let's postulate that a given implementor does
    in fact have some reason to do that. If that's the case, there's
    something I can guarantee to you: that implementor considers such an
    error to be acceptably small, and believes that a sufficiently large
    fraction of the users of his implementation will agree. If the
    implementor is wrong about that second point, people will eventually
    stop using his implementation. If he's right about that point - if both
    he and the users of his implementation consider such inaccuracy
    acceptable - why should he change his implementation just because you
    consider it unacceptable? You wouldn't be a user of such an
    implementation anyway, right?

    ...
    Actually it is when the mathematical result exceeds the range. 6.5p5
    says: "If an /exceptional condition/ occurs during the evaluation of
    an expression (that is, if the result is not mathematically defined or
    not in the range of representable values for its type), the behavior
    is undefined." So this appears to be an issue when infinity is not
    supported.

    Conversion of a floating point constant into a floating point value is
    not "evaluation of an expression", and therefore is not covered by
    6.5p5. Such conversions are required to occur "as-if at translation
    time", and exceptional conditions are explicitly prohibited.

    But what about constant expressions?

    They are expressions, not constants, so I agree that they are covered by
    6.5p5. That doesn't quite lead you to exactly the result you want.
    See below.

    For instance, assuming no IEEE 754 support, what is the behavior of
    the following code?

    static double x = DBL_MAX + DBL_MAX;

    That involves addition, and is therefore covered by 5.2.4.2.2p8, which I
    quoted in my previous message.

    If one ignores 6.5p5 because this is a translation-time computation,
    I find the standard rather ambiguous on what is required.

    Floating point constants are required to be evaluated as-if at translation-time.
    Constant expressions are permitted to be evaluated at translation-time,
    but it is not required. If it is performed at translation time, the
    recommended practice when __STDC_IEC_60559_BFP__ is pre#defined is: "The implementation should produce a diagnostic message for each
    translation-time floating-point exception, other than "inexact";
    the implementation should then proceed with the translation of the
    program." (F.8.2p2). I would presume that this is also allowed, but not required, even if __STDC_IEC_60559_BFP__ is not pre#defined.

    Note that there is a constraint 6.6p4 "Each constant expression shall evaluate to a constant that is in the range of representable values
    for its type." but this is of the same kind as 6.4.4p2 for constants.

    Because of 5.2.4.2.2p8, it's implementation-defined whether or not the
    addition is carried out with sufficient inaccuracy to produce a result
    that is within the range of representable values. I would not recommend
    having any specific expectations, good or bad, about the behavior of
    such code.
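
    As a sketch of the distinction being drawn here (hypothetical
    implementation without infinities):

    #include <float.h>

    static double x = DBL_MAX + DBL_MAX; /* constant expression: the 6.6p4
                                            constraint applies, so a
                                            diagnostic is required if the
                                            result is out of range */

    double f(double a)
    {
        return a + a; /* runtime evaluation: 6.5p5 applies, so an
                         overflow is undefined behavior where
                         infinities are not supported */
    }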

    And what about the following?

    static int i = 2 || 1 / 0;

    Integer division is far more tightly constrained by the C standard than floating point division (it would be really difficult, bordering on
    impossible, for something to constrain floating point division more
    loosely than the C standard does).

    Note that there are inconsistencies in the standard about what
    it means by "floating-point numbers". It is sometimes used to
    mean the value of a floating type. For instance, the standard
    says for fabs: "The fabs functions compute the absolute value
    of a floating-point number x." But I really don't think that
    this function is undefined on infinities.

    If __STDC_IEC_60559_BFP__ is pre#defined by the implementation, F.10.4.3
    not only allows fabs(±∞), it explicitly mandates that it return +∞.

    The issue is when __STDC_IEC_60559_BFP__ is not defined but infinities
    are supported (as allowed by the standard).

    True, but the fact that F.10.4.3 is there implies that the behavior it
    specifies is not considered to violate the clause that you referred to,
    so it would not be prohibited for an implementation to provide such
    behavior even if __STDC_IEC_60559_BFP__ were not pre#defined.
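
    As a minimal sketch (this assumes an implementation that does support
    infinities; under Annex F the results are mandated by F.10.4.3, and
    per the above they are at least permitted without it):

    #include <math.h>

    void demo(void)
    {
        double r1 = fabs(-INFINITY); /* +infinity */
        double r2 = fabs(+INFINITY); /* +infinity */
        (void)r1; (void)r2;
    }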

    ...
    (LDBL_MAX has exponent e = e_max + 1.)

    That doesn't work. 5.2.4.2.2p2 and p3 both specify that floating point
    numbers must have e_min <= e && e <= e_max.

    Yes, *floating-point numbers*.

    LDBL_MAX is defined as the "maximum finite floating point number".

    I'd see this as a defect in N2731. As I was saying earlier, the
    standard does not use "floating-point number" in a consistent way.
    This was discussed, but it seems that not everything was fixed.
    As an attempt to clarify this point, "normalized" was added, but
    this may not have been the right thing.

    The purpose of LDBL_MAX is to allow for a finite value larger
    than LDBL_NORM_MAX,

    No, LDBL_MAX is allowed to be larger than LDBL_NORM_MAX, but the
    committee made it clear that they expected LDBL_MAX and LDBL_NORM_MAX to
    have the same value on virtually all real-world implementations.

    ... which is the maximum floating-point number
    following the 5.2.4.2.2p3 definition. LDBL_NORM_MAX was introduced precisely because LDBL_MAX does not necessarily follow the model
    of 5.2.4.2.2p3 (i.e. LDBL_MAX isn't necessarily a floating-point
    number).

    I don't believe that was the intent. I believe that the standard was
    saying precisely what it meant to say when describing LDBL_MAX as the
    largest finite floating point number, while describing LDBL_NORM_MAX as
    the largest finite normalized floating point number. What precisely are
    the definitions for those two macros that you think the committee
    intended to describe?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to Tim Rentsch on Wed Nov 10 15:01:46 2021
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <861r3pbbwh.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    Vincent Lefevre <vincent-news@vinc17.net> writes:
    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    I'm wondering if you have resolved your original uncertainty
    about the behavior of INFINITY in an implementation that does
    not support infinities?

    I suspect that by saying "overflow", the standard actually meant that
    the result is not in the range of representable values. This is the
    only way the footnote "In this case, using INFINITY will violate the
    constraint in 6.4.4 and thus require a diagnostic." can make sense
    (the constraint in 6.4.4 is about the range, not overflow). But IMHO,
    the failing constraint makes the behavior undefined; it actually makes
    the program erroneous.

    Suppose we have an implementation that does not support
    infinities, a range of double and long double up to about ten to
    the 99999, and ask it to translate the following .c file

    double way_too_big = 1.e1000000;

    This constant value violates the constraint in 6.4.4. Do you
    think this .c file (and any program it is part of) has undefined
    behavior? If so, do you think any constraint violation implies
    undefined behavior, or just some of them?

    (Jumping in though the question was addressed to someone else.)

    I think it's a tricky question. I think the language would be cleaner
    if the standard explicitly stated that violating a constraint always
    results in undefined behavior -- or if it explicitly stated that it
    doesn't. (The former is my personal preference.)

    Clearly a compiler is allowed (but not required) to reject a program
    that violates a constraint. If it does so, there is no behavior. So
    the question is whether the behavior is undefined if the implementation
    chooses not to reject it. (I personally don't see a whole lot of value
    in defining the behavior of code that could have been rejected outright.
    I'm also not a big fan of the fact that required diagnostics don't have
    to be fatal, but that's not likely to change.)

    The semantics of floating constants specify that the value is "either
    the nearest representable value, or the larger or smaller representable
    value immediately adjacent to the nearest representable value, chosen in
    an implementation-defined manner". Given that infinities are not
    supported, that would be DBL_MAX or its predecessor. Based on that, I'd
    say that:

    - A diagnostic is required.
    - A compiler may reject the program.
    - If the compiler doesn't reject the program, the value of way_too_big
    must be DBL_MAX or its predecessor. (Making it the predecessor of
    DBL_MAX would be weird but conforming.)

    *Except* that the definition of "constraint" is "restriction, either
    syntactic or semantic, by which the exposition of language elements is
    to be interpreted". I find that rather vague, but it could be
    interpreted to mean that if a constraint is violated, there is no valid interpretation of language elements.

    Rejecting 1.e1000000 with a fatal diagnostic is clearly conforming.

    Issuing a non-fatal warning for 1.e1000000 and setting way_too_big to
    DBL_MAX is conforming (even if the behavior is/were undefined, that's
    perfectly valid).

    An implementation that issues a non-fatal warning for 1.e1000000 and
    sets way_too_big to 42.0 is arguably non-conforming, but if its
    implementers argue that the behavior is undefined because of their interpretation of the standard's definition of "constraint", I'd have a
    hard time claiming they're wrong.

    [...]

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to Tim Rentsch on Fri Nov 12 23:55:37 2021
    In article <86wnlg9ey7.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Suppose we have an implementation that does not support
    infinities, a range of double and long double up to about ten to
    the 99999, and ask it to translate the following .c file

    double way_too_big = 1.e1000000;

    This constant value violates the constraint in 6.4.4. Do you
    think this .c file (and any program it is part of) has undefined
    behavior? If so, do you think any constraint violation implies
    undefined behavior, or just some of them?

    I think that constraints are there to define conditions under which specifications make sense. Thus, if a constraint is not satisfied,
    behavior is undefined (unless the standard *specifically* defines
    cases for which the constraint would not be satisfied; this is true
    for other kinds of undefined behavior, such as with Annex F, which
    defines the result of 1.0 / 0.0).

    That's also why there are diagnostics, which would otherwise be
    useless in the standard.

    In case this is not clear, the intent of the diagnostic in the above
    case is not about an inaccuracy, because the inaccuracy also exists
    when infinities are supported (and in this case, the constraint is
    satisfied, i.e. no required diagnostics).

    Similarly, on

    static int i = 1 / 0;
    int main (void)
    {
    return 0;
    }

    GCC fails to translate the program due to the failing constraint:

    tst.c:1:16: error: initializer element is not constant
    1 | static int i = 1 / 0;
    | ^

    (this is not just a diagnostic, GCC does not generate an
    executable).

    Note that the C standard does not distinguish between "errors"
    and "warnings" in diagnostic messages. Either is allowed
    regardless of whether undefined behavior is present.

    Agreed (but I think that GCC will generally generate an error
    when this is undefined behavior, though one needs -pedantic-errors
    to make sure that no extensions are used to define things that
    are normally undefined).

    Ditto with Clang:

    tst.c:1:18: error: initializer element is not a compile-time constant
    static int i = 1 / 0;
    ~~^~~

    The messages indicate that the failure is not about exceeding the
    range of a type, but rather about satisfying the constraints for
    constant expressions, in particular 6.6 p4, which says in part

    Each constant expression shall evaluate to a constant [...]

    Yes, this was my point.

    The problem here is that 1/0 doesn't evaluate to anything,
    because division by 0 is not defined. Any question of range of
    representable values doesn't enter into it.

    My point is that 1 / 0 is not regarded as a constant expression
    (here because 1 / 0 isn't mathematically defined, but the cause
    could also be that the result is out of range, as seen below).

    Ditto with

    static int i = 2147483647 + 2147483647;

    but -pedantic-errors is needed as I've said above. On

    int main (void)
    {
    static int i = 2147483647 + 2147483647;
    return 0;
    }

    "gcc -pedantic-errors" gives

    tst.c:3:3: error: overflow in constant expression [-Woverflow]
    3 | static int i = 2147483647 + 2147483647;
    | ^~~~~~

    but no errors (= no diagnostics due to a failing constraint) for

    int main (void)
    {
    int i = 2147483647 + 2147483647;
    return 0;
    }

    because 2147483647 + 2147483647 is not regarded as a constant
    expression (as explained in DR 261).

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Fri Nov 12 23:17:39 2021
    In article <smgu08$3r1$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/10/21 7:48 AM, Vincent Lefevre wrote:
    In article <smecfc$jai$1@dont-email.me>,
    ...
    ... The value nextdown(DBL_MAX) does not make much
    sense when the implementation *knows* that the value is larger than
    DBL_MAX because it exceeds the range (there is a diagnostic to tell
    that to the user because of 6.4.4p2).

    You misunderstand the purpose of the specification in 6.4.4.2p4. It was
    not intended that a floating point implementation would generate the
    nearest representable value, and that the implementation of C would then arbitrarily choose to pick one of the other two adjacent representable
    values. The reason was to accommodate floating point implementations
    that couldn't meet the accuracy requirements of IEC 60559.

    You didn't understand. I repeat. The implementation *knows* that the
    value is larger than DBL_MAX. This knowledge is *required* by the C standard so that the required diagnostic can be emitted (due to the constraint in 6.4.4p2). So there is no reason that the implementation
    would assume that the value can be less than DBL_MAX.

    This is not an accuracy issue, or if there is one, it occurs at the
    level of the 6.4.4p2 constraint.

    I assume we've been talking about implementations that conform to the C standard, right? Otherwise there's nothing meaningful that can be said.

    The issue is more related to (strictly) conforming programs.

    6.4.4.2p4 describes accuracy requirements that allow the result you find objectionable. I've been talking about the fact that those requirements
    are a little bit more lenient than those imposed by IEEE 754, because
    those looser requirements allow a slightly simpler implementation, one
    which might use up less code space or execute somewhat faster, at the
    cost of lower accuracy.

    Except that with what you assume, it does not make the implementation
    simpler: If an implementation has determined that a value is
    larger than DBL_MAX (from 6.4.4p2), I don't see why allowing the
    implementation to return a value less than DBL_MAX would make it
    simpler.

    [...]
    As you should see, the maximum error allowed by the C standard is not enormously larger than the maximum error allowed by IEEE 754.

    I know, but the accuracy is not the main issue here. The main issue
    is *consistency*. If an implementation says at the same time that a
    value is considered being strictly larger than DBL_MAX and strictly
    smaller than DBL_MAX, then something is wrong! Note: by "considered",
    I mean that the implementation may be inaccurate when evaluating the
    value (but there's only one evaluation attempt).

    You're worried about the possibility of an implementation conforming to
    the C standard by returning nextdown(DBL_MAX), despite the fact that, in order to conform, the implementation would also have to generate that diagnostic message?

    Yes.

    This means that there must be a block of code in the compiler
    somewhere, which issues that diagnostic, and which only gets
    executed when that constraint is violated, but for some reason the implementor chose not to add code to that block to set the value to
    DBL_MAX. If you're worried about that possibility, that implies that
    you can imagine a reason why someone might do that. What might that
    reason be?

    I don't see why the C standard would allow an implementation to make
    results inconsistent... unless the diagnostic in 6.4.4p2 is regarded
    as undefined behavior.

    For the sake of argument, let's postulate that a given implementor does
    in fact have some reason to do that. If that's the case, there's
    something I can guarantee to you: that implementor considers such an
    error to be acceptably small, and believes that a sufficiently large
    fraction of the users of his implementation will agree. If the
    implementor is wrong about that second point, people will eventually
    stop using his implementation. If he's right about that point - if both
    he and the users of his implementation consider such inaccuracy
    acceptable - why should he change his implementation just because you consider it unacceptable? You wouldn't be a user of such an
    implementation anyway, right?

    Wrong reasoning.

    1. The implementor doesn't necessarily know all the possible issues
    with his choices. That's why standards should give restrictions and
    do that when needed, in many cases.

    2. A bit related to (1), the implementor doesn't know all programs.

    3. When there are issues in implementations, people don't stop using
    such implementations. See the number of GCC bugs... for most of them,
    much worse than the above issue.

    A bit similar to the above issue, if x and y have the same value,
    sin(x) and sin(y) may give different results due to different
    contexts (which is initially just an accuracy issue, and ditto
    with other math functions), and because of that, with GCC, one can
    get an integer variable that appears to have two different values
    at the same time:

    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102930

    What I mean here is that in practice, inaccuracy can yield more
    serious issues than just inaccurate results.

    For instance, assuming no IEEE 754 support, what is the behavior of
    the following code?

    static double x = DBL_MAX + DBL_MAX;

    That involves addition, and is therefore covered by 5.2.4.2.2p8, which I quoted in my previous message.

    My point was not about the accuracy, but the fact that the result
    is out of range. Assume that the accuracy is large enough so that
    the result is actually out of range (this is the case everywhere,
    I suppose).

    If one ignores 6.5p5 because this is a translation-time computation,
    I find the standard rather ambiguous on what is required.

    Floating point constants are required to be evaluated as-if at translation-time.
    Constant expressions are permitted to be evaluated at translation-time,
    but it is not required.

    Even with "static"??? If evaluation is not done at translation-time,
    how can the implementation know whether to generate a diagnostic
    due to 6.6p4 ("Each constant expression shall evaluate to a constant
    that is in the range of representable values for its type.")?

    And what about the following?

    static int i = 2 || 1 / 0;

    Integer division is far more tightly constrained by the C standard than floating point division (it would be really difficult, bordering on impossible, for something to constrain floating point division more
    loosely than the C standard does).

    Without Annex F, it isn't. But this wasn't the point. The point is
    that "1 / 0" is not regarded as a constant expression, just because
    the constraint 6.6p4 is not satisfied. And with the same argument,
    one may consider that 1.0e99999 (out of range in practice) is not
    regarded as a constant, so that the rules associated with constants
    will not apply, thus implying undefined behavior.

    ...
    (LDBL_MAX has exponent e = e_max + 1.)

    That doesn't work. 5.2.4.2.2p2 and p3 both specify that floating point
    numbers must have e_min <= e && e <= e_max.

    Yes, *floating-point numbers*.

    LDBL_MAX is defined as the "maximum finite floating point number".

    I'd see this as a defect in N2731. As I was saying earlier, the
    standard does not use "floating-point number" in a consistent way.
    This was discussed, but it seems that not everything was fixed.
    As an attempt to clarify this point, "normalized" was added, but
    this may not have been the right thing.

    The purpose of LDBL_MAX is to allow for a finite value larger
    than LDBL_NORM_MAX,

    No, LDBL_MAX is allowed to be larger than LDBL_NORM_MAX,

    This is not possible, because LDBL_NORM_MAX is the maximum value of
    the set of all (normalized) floating-point numbers (i.e. the numbers
    that satisfy the model 5.2.4.2.2p3), LDBL_MAX is the maximum value
    of the set of all finite numbers, and the former set is a subset of
    the latter set.

    but the committee made it clear that they expected LDBL_MAX and
    LDBL_NORM_MAX to have the same value on virtually all real-world implementations.

    This is not true for double-double, which exists in practice.

    ... which is the maximum floating-point number
    following the 5.2.4.2.2p3 definition. LDBL_NORM_MAX was introduced precisely because LDBL_MAX does not necessarily follow the model
    of 5.2.4.2.2p3 (i.e. LDBL_MAX isn't necessarily a floating-point
    number).

    I don't believe that was the intent.

    It was. See N2092[*], in particular:

    [*_NORM_MAX macros]

    Existing practice

    For most implementations, these three macros will be the same as the
    corresponding *_MAX macros. The only known case where that is not
    true is those where long double is implemented as a pair of doubles
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    (and then only LDBL_MAX will differ from LDBL_NORM_MAX).

    [*] http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2092.htm

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Fri Nov 12 21:03:18 2021
    On 11/12/21 6:17 PM, Vincent Lefevre wrote:
    In article <smgu08$3r1$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    ...
    I assume we've been talking about implementations that conform to the C
    standard, right? Otherwise there's nothing meaningful that can be said.

    The issue is more related to (strictly) conforming programs.

    It can't be. Strictly conforming programs are prohibited from having
    output that depends upon behavior that the standard leaves unspecified. 6.4.4.2p4 identifies what is usually three different possible values for
    each floating point constant (four if the constant describes a value
    exactly half-way between two consecutive representable values, but only
    two if it describes a value larger than DBL_MAX or smaller than -DBL_MAX
    on a platform that doesn't support infinities), and leaves it
    unspecified which one of those values is chosen. Since that is precisely
    the freedom of choice that you're complaining about, we can't be
    discussing strictly conforming programs - if your program's output
    didn't depend upon which choice was made, you'd have no reason to worry
    about which choice was made.

    And at this point, I've officially grown weary of this discussion, and
    am bowing out.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Mon Nov 15 09:18:31 2021
    In article <smn6d8$c92$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/12/21 6:17 PM, Vincent Lefevre wrote:
    In article <smgu08$3r1$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    ...
    I assume we've been talking about implementations that conform to the C
    standard, right? Otherwise there's nothing meaningful that can be said.

    The issue is more related to (strictly) conforming programs.

    It can't be. Strictly conforming programs are prohibited from having
    output that depends upon behavior that the standard leaves unspecified.

    This is not how I interpret the standard. Otherwise there would be
    an obvious contradiction with note 3, which uses

    #ifdef __STDC_IEC_559__

    while the value of __STDC_IEC_559__ is not specified in the standard.

    What matters is that the program needs to take every possibility into
    account and make sure that the (visible) behavior is the same in each
    case. So...

    6.4.4.2p4 identifies what is usually three different possible values for
    each floating point constant (four if the constant describes a value
    exactly half-way between two consecutive representable values, but only
    two if it describes a value larger than DBL_MAX or smaller than -DBL_MAX
    on a platform that doesn't support infinities), and leaves it
    unspecified which one of those values is chosen.

    The program can deal with that in order to get the same behavior in
    each case, so that it could be strictly conforming. However, if the
    behavior is undefined (assumed as a consequence of the failed
    constraint), there is *nothing* that one can do.
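
    As a minimal sketch of that idea: branch on the macro but keep the
    visible behavior identical in every branch, so that the output cannot
    depend on the unspecified choice:

    #include <stdio.h>

    int main(void)
    {
    #ifdef __STDC_IEC_559__
        puts("hello");  /* IEC 60559 path */
    #else
        puts("hello");  /* fallback path: same observable output */
    #endif
        return 0;
    }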

    That said, since the floating-point accuracy is not specified, can be
    extremely low and is not even checkable by the program (so that there
    is no possible fallback in case of low accuracy), there is not much
    one can do with floating point.

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vincent Lefevre on Mon Nov 15 07:59:00 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <86wnlg9ey7.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Suppose we have an implementation that does not support
    infinities, a range of double and long double up to about ten to
    the 99999, and ask it to translate the following .c file

    double way_too_big = 1.e1000000;

    This constant value violates the constraint in 6.4.4. Do you
    think this .c file (and any program it is part of) has undefined
    behavior? If so, do you think any constraint violation implies
    undefined behavior, or just some of them?

    I think that constraints are there to define conditions under which specifications make sense. Thus, if a constraint is not satisfied,
    behavior is undefined [...]

    Suppose again we have an implementation that does not support
    infinities and has a range of double and long double up to about
    ten to the 99999. Question one: as far as the C standard is
    concerned, is the treatment of this .c file

    double way_too_big = 1.e1000000;

    and of this .c file

    #include <math.h>

    double way_too_big = INFINITY;

    the same in the two cases? (The question is meant to disregard
    differences that are purely implementation choices, as for
    example possibly labelling one case a "warning" and the other
    case an "error".)

    Question two: does the C standard require that at least one
    diagnostic be issued for each of the above .c files?

    Note that both of these are yes/no questions.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Mon Nov 15 14:25:37 2021
    On 11/15/21 4:18 AM, Vincent Lefevre wrote:
    In article <smn6d8$c92$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/12/21 6:17 PM, Vincent Lefevre wrote:
    ...
    The issue is more related to (strictly) conforming programs.

    It can't be. Strictly conforming programs are prohibited from having
    output that depends upon behavior that the standard leaves unspecified.

    This is not how I interpret the standard.

    I don't see how there's room for interpretation: "A strictly conforming
    program ... shall not produce output dependent on any unspecified ...
    behavior, ..." (4p6).

    Otherwise there would be
    an obvious contradiction with note 3, which uses

    #ifdef __STDC_IEC_559__

    while the value of __STDC_IEC_559__ is not specified in the standard.

    What matters is that the program needs to take every possibility into
    account and make sure that the (visible) behavior is the same in each
    case. So...

    The example in that footnote is based upon the fact that it's
    unspecified whether the macro FE_UPWARD is #defined in <fenv.h>. The
    call to fesetround(FE_UPWARD) would refer to an undeclared identifier if
    it wasn't. The technique shown in Footnote 3 ensures that fesetround()
    doesn't even get called unless __STDC_IEC_60559_BFP__ is already
    #defined, thereby ensuring that FE_UPWARD is #defined, and as a result
    the output doesn't change just because that call is made. Note: it would
    have been better to write

    #ifdef FE_UPWARD
    fesetround(FE_UPWARD);
    #endif

    Implementations that don't fully support IEC 60559 might still support FE_UPWARD.
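
    A self-contained sketch of that guard (the helper name is
    hypothetical; the FENV_ACCESS pragma is needed before touching the
    dynamic rounding mode):

    #include <fenv.h>

    #pragma STDC FENV_ACCESS ON

    /* Request upward rounding where the implementation provides it;
       silently keep the default rounding mode elsewhere. */
    void round_upward_if_available(void)
    {
    #ifdef FE_UPWARD
        fesetround(FE_UPWARD);
    #endif
    }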

    The example code in that footnote is, however, rather badly chosen,
    because it's pretty nearly impossible to make any meaningful use of
    floating point operations without producing output that depends upon
    things that are unspecified. While the technique shown in Footnote 3
    does prevent the call to fesetround() from being problematic in itself,
    any situation where the developer cares about the rounding direction
    implies that the output from the program will depend upon how rounding
    is performed. If that weren't the case, why bother calling it?

    That's even more true of __STDC_IEC_60559_BFP__. Any program that does
    anything with floating point values other than comparing floating point constants for relative order might have greatly different output
    depending upon whether an implementation conforms to IEC 60559, or takes maximal advantage of the freedom the standard gives them when __STDC_IEC_60559_BFP__ is not pre#defined. You can write code that
    doesn't care whether LDBL_MAX - LDBL_MIN > LDBL_MIN - LDBL_MAX is true
    or false, but only by, for all practical purposes, making no meaningful
    use of floating point operations.

    6.4.4.2p4 identifies what is usually three different possible values for
    each floating point constant (four if the constant describes a value
    exactly half-way between two consecutive representable values, but only
    two if it describes a value larger than DBL_MAX or smaller than -DBL_MAX
    on a platform that doesn't support infinities), and leaves it
    unspecified which one of those values is chosen.

    The program can deal with that in order to get the same behavior in
    each case, so that it could be strictly conforming.

    Agreed - and if your program were so written, you'd have no cause to
    complain about which of the three was chosen. But you are complaining
    about the possibility that a different one might be chosen than the one
    you think should be.

    ... However, if the
    behavior is undefined (assumed as a consequence of the failed
    constraint), there is *nothing* that one can do.

    Yes, but nowhere does the standard specify that violating a constraint
    does, in itself, render the behavior undefined. Most constraint
    violations do render the behavior undefined "by omission of any explicit definition of the behavior", but not this one. You might not like the definition that 6.4.4.2p4 provides, but it does provide one.

    That said, since the floating-point accuracy is not specified, can be extremely low and is not even checkable by the program (so that there
    is no possible fallback in case of low accuracy), there is not much
    one can do with floating point.

    ??? You can check for __STDC_IEC_60559_BFP__; if it's defined, then
    pretty much the highest possible accuracy is required.
    Are you worried about __STDC_IEC_60559_BFP__ being falsely pre#defined? Accuracy lower than required by IEC 60559 is pretty easily detected,
    unless an implementation takes truly heroic efforts to cover it up. To
    render the inaccuracy uncheckable would require almost as much hard work
    and ingenuity as producing the right result.
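
    A crude sketch of such a detection (assuming FLT_EVAL_METHOD == 0 and
    round-to-nearest; with excess precision or another rounding mode the
    second test is not reliable):

    #include <assert.h>
    #include <float.h>

    void check_addition_accuracy(void)
    {
        volatile double one = 1.0;  /* volatile: defeat constant folding */
        /* Correctly rounded addition must see a full ulp... */
        assert(one + DBL_EPSILON != 1.0);
        /* ...but must absorb a quarter of an ulp back into 1.0. */
        assert(one + DBL_EPSILON / 4 == 1.0);
    }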

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to Tim Rentsch on Mon Nov 15 23:39:59 2021
    In article <86v90t8l6j.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Suppose again we have an implementation that does not support
    infinities and has a range of double and long double up to about
    ten to the 99999. Question one: as far as the C standard is
    concerned, is the treatment of this .c file

    double way_too_big = 1.e1000000;

    and of this .c file

    #include <math.h>

    double way_too_big = INFINITY;

    the same in the two cases?

    IMHO, this is undefined behavior in both cases, due to the
    unsatisfied constraint. So, yes.

    Question two: does the C standard require that at least one
    diagnostic be issued for each of the above .c files?

    Yes: The constraint is unsatisfied in both cases, so at least one
    diagnostic is required in both cases.

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Mon Nov 15 20:00:53 2021
    On 11/15/21 6:39 PM, Vincent Lefevre wrote:
    In article <86v90t8l6j.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Suppose again we have an implementation that does not support
    infinities and has a range of double and long double up to about
    ten to the 99999. Question one: as far as the C standard is
    concerned, is the treatment of this .c file

    double way_too_big = 1.e1000000;

    and of this .c file

    #include <math.h>

    double way_too_big = INFINITY;

    the same in the two cases?

    IMHO, this is undefined behavior in both cases, due to the
    unsatisfied constraint. So, yes.

    So, of the three ways used by the standard to indicate that the behavior
    is undefined, which one was used in this case?

    "If a "shall" or "shall not" requirement that appears outside of a
    constraint or runtime-constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this document by the words "undefined behavior" or by the omission of any explicit definition of
    behavior. There is no difference in emphasis among these three; they all describe "behavior that is undefined"." (4p2).

    If it's a "shall" or an explicit "undefined behavior", please identify
    the clause containing those words.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Tue Nov 16 01:17:23 2021
    In article <smuc7i$6hq$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/15/21 4:18 AM, Vincent Lefevre wrote:
    In article <smn6d8$c92$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/12/21 6:17 PM, Vincent Lefevre wrote:
    ...
    The issue is more related to (strictly) conforming programs.

    It can't be. Strictly conforming programs are prohibited from having
    output that depends upon behavior that the standard leaves unspecified.

    This is not how I interpret the standard.

    I don't see how there's room for interpretation: "A strictly conforming program ... shall not produce output dependent on any unspecified ... behavior, ..." (4p6).

    I'm not sure what you intended to mean, but IMHO, the "It can't be."
    is wrong based on the unsatisfied constraint and definition 3.8 of
    "constraint" (but this should really be clarified).

    [...]
    ... However, if the
    behavior is undefined (assumed as a consequence of the failed
    constraint), there is *nothing* that one can do.

    Yes, but nowhere does the standard specify that violating a constraint
    does, in itself, render the behavior undefined. Most constraint
    violations do render the behavior undefined "by omission of any explicit definition of the behavior", but not this one. You might not like the definition that 6.4.4.2p4 provides, but it does provide one.

    But the fact that a restriction is not fulfilled (definition 3.8)
    is what matters.

    Another example:

    6.5.2.2 Function calls

    Constraints
    [...]
    2 If the expression that denotes the called function has a type that
    includes a prototype, the number of arguments shall agree with the
    number of parameters. [...]

    IMHO, if one provides an additional argument, this is undefined
    behavior, even though the semantics describe the behavior in this
    case.

    Another one:

    6.5.3.3 Unary arithmetic operators

    Constraints
    [...]
    1 The operand of the unary + or - operator shall have arithmetic type
    [...]

    Even though the semantics for +X still makes sense for any object type,
    IMHO, this is undefined behavior if X does not have an arithmetic type.

    It happens that the compilers reject such code. But what if they
    chose not to reject it? Would they be forced to use the defined
    semantics or be allowed to have some other behavior as an extension?
    I would say the latter.
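
    Minimal sketches of the two constraint violations discussed above
    (hypothetical code; a conforming implementation must diagnose both,
    whether or not it then rejects the translation unit):

    int f(int x);  /* prototype with exactly one parameter */

    int too_many_args(void)
    {
        return f(1, 2);  /* violates 6.5.2.2p2: argument count must
                            agree with the prototype */
    }

    struct s { int m; };

    int unary_plus_nonarith(struct s obj)
    {
        return +obj;  /* violates 6.5.3.3p1: operand of unary + must
                         have arithmetic type */
    }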

    The example 6.5.16.1p6 regards a constraint violation as invalid code.

    That said, since the floating-point accuracy is not specified, can be extremely low and is not even checkable by the program (so that there
    is no possible fallback in case of low accuracy), there is not much
    one can do with floating point.

    ??? You can check for __STDC_IEC_60559_BFP__; if it's defined, then
    pretty much the highest possible accuracy is required.

    Indeed, well, almost I think. One should also check that
    FLT_EVAL_METHOD is either 0 or 1. Otherwise the accuracy
    becomes unknown.
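
    As a sketch, the combined compile-time guard suggested here might
    look as follows:

    #include <float.h>

    #if defined(__STDC_IEC_60559_BFP__) && \
        (FLT_EVAL_METHOD == 0 || FLT_EVAL_METHOD == 1)
      /* IEC 60559 operations in a known evaluation format:
         the accuracy can be relied upon */
    #else
      /* weaker or unknown accuracy guarantees: fall back or refuse */
    #endif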

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Tue Nov 16 01:28:26 2021
    In article <smuvs5$6qh$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/15/21 6:39 PM, Vincent Lefevre wrote:
    In article <86v90t8l6j.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Suppose again we have an implementation that does not support
    infinities and has a range of double and long double up to about
    ten to the 99999. Question one: as far as the C standard is
    concerned, is the treatment of this .c file

    double way_too_big = 1.e1000000;

    and of this .c file

    #include <math.h>

    double way_too_big = INFINITY;

    the same in the two cases?

    IMHO, this is undefined behavior in both cases, due to the
    unsatisfied constraint. So, yes.

    So, of the three ways used by the standard to indicate that the behavior
    is undefined, which one was used in this case?

    "If a "shall" or "shall not" requirement that appears outside of a
    constraint or runtime-constraint is violated, the behavior is undefined. Undefined behavior is otherwise indicated in this document by the words "undefined behavior" or by the omission of any explicit definition of behavior. There is no difference in emphasis among these three; they all describe "behavior that is undefined"." (4p2).

    Omission of any explicit definition of behavior. There is a constraint (restriction) that is not satisfied. Thus the code becomes invalid and
    nothing gets defined as a consequence. This is like, in math, applying
    a theorem where the hypotheses are not satisfied.

    I would expect the implementation to reject the code, or accept it
    in a way unspecified by the standard (but the implementation could
    document what happens, as an extension).

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to Vincent Lefevre on Tue Nov 16 01:57:03 2021
    In article <20211116011941$1337@zira.vinc17.org>,
    Vincent Lefevre <vincent-news@vinc17.net> wrote:

    I would expect the implementation to reject the code, or accept it
    in a way unspecified by the standard (but the implementation could
    document what happens, as an extension).

    As a useful example, I would say that an implementation that doesn't
    support infinities but has NaNs would be allowed to track out-of-range
    values to try to emulate infinities (e.g. for safety reasons).

    For instance,

    INFINITY - INFINITY

    could yield NaN instead of 0.

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Derek Jones@21:1/5 to All on Tue Nov 16 11:32:54 2021
    All,

    I don't see how there's room for interpretation: "A strictly conforming program ... shall not produce output dependent on any unspecified ... behavior, ..." (4p6).

    Indeed.

    Now the order of evaluation of binary operators is
    unspecified. But this does not mean that all programs containing
    at least one binary operator are not strictly conforming.

    For instance, the order of evaluation of the two
    operands in the following expression-statement is unspecified.
    But unless they are volatile qualified the output does
    not depend on the unspecified behavior:

    x+y;

    But in:

    a[printf("Hello")]+a[printf(" World")];

    the output does depend on the order of evaluation,
    and a program containing this code is not strictly conforming.
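
    One way to remove the dependence (a sketch): sequence the side
    effects explicitly before combining the results.

    #include <stdio.h>

    static int a[32];

    void demo(void)
    {
        int i = printf("Hello");  /* sequenced first:  i == 5 */
        int j = printf(" World"); /* sequenced second: j == 6 */
        int k = a[i] + a[j];      /* output no longer depends on the
                                     order of evaluation of the + operands */
        (void)k;
    }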

    #ifdef __STDC_IEC_559__

    while the value of __STDC_IEC_559__ is not specified in the standard.

    The output of a strictly conforming program does not depend on the implementation used.

    Since the value of __STDC_IEC_559__ depends on the implementation,
    its use can produce a program that is not strictly conforming.

    ps. This whole discussion has been very interesting.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Tue Nov 16 09:52:43 2021
    On 11/15/21 8:28 PM, Vincent Lefevre wrote:
    In article <smuvs5$6qh$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/15/21 6:39 PM, Vincent Lefevre wrote:
    In article <86v90t8l6j.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Suppose again we have an implementation that does not support
    infinities and has a range of double and long double up to about
    ten to the 99999. Question one: as far as the C standard is
    concerned, is the treatment of this .c file

    double way_too_big = 1.e1000000;

    and of this .c file

    #include <math.h>

    double way_too_big = INFINITY;

    the same in the two cases?

    IMHO, this is undefined behavior in both cases, due to the
    unsatisfied constraint. So, yes.

    So, of the three ways used by the standard to indicate that the behavior
    is undefined, which one was used in this case?

    "If a "shall" or "shall not" requirement that appears outside of a
    constraint or runtime-constraint is violated, the behavior is undefined.
    Undefined behavior is otherwise indicated in this document by the words
    "undefined behavior" or by the omission of any explicit definition of
    behavior. There is no difference in emphasis among these three; they all
    describe "behavior that is undefined"." (4p2).

    Omission of any explicit definition of behavior.

    The fact that a constraint is violated does not erase the definition
    provided by 6.4.4.2p4, or render it any less applicable.

    ... There is a constraint
    (restriction) that is not satisfied.

    Agreed.

    ... Thus the code becomes invalid and
    nothing gets defined as a consequence.

    This is, I presume, what makes you think that 6.4.4.2p4 is effectively
    erased?

    The standard says nothing to that effect. The only meaningful thing it
    says is that a diagnostic is required (5.1.1.3p1). I do not consider the standard's definition of "constraint" to be meaningful: "restriction,
    either syntactic or semantic, by which the exposition of language
    elements is to be interpreted" (3.8). What that sentence means, if
    anything, is not at all clear, but one thing is clear - it says nothing
    about what should happen if the restriction is violated. 5.1.1.3p1 is
    the only clause that says anything about that issue.
    Note: the requirement specified in 5.1.1.3p1 would also be erased, if a constraint violation is considered to effectively erase unspecified
    parts of the rest of the standard. Surely you don't claim that 3.8
    specifies which parts get erased?

    I would expect the implementation to reject the code, or accept it
    in a way unspecified by the standard (but the implementation could
    document what happens, as an extension).

    While an implementation is not required to accept and translate such
    code (that's only required for the "one program"), if it does translate
    such code, then (in the absence of any other problems) the resulting
    executable must produce the same observable behavior as if such a
    constant was given a value of either DBL_MAX or nextdown(DBL_MAX) - any
    other result fails to meet the requirements of 6.4.4.2p4.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Derek Jones on Tue Nov 16 10:35:32 2021
    On 11/16/21 6:32 AM, Derek Jones wrote:
    All,

    I don't see how there's room for interpretation: "A strictly conforming
    program ... shall not produce output dependent on any unspecified ...
    behavior, ..." (4p6).

    Indeed.

    Now the order of evaluation of binary operators is
    unspecified. But this does not mean that all programs containing
    at least one binary operator are not strictly conforming.

    For instance, the order of evaluation of the two
    operands in the following expression-statement is unspecified.
    But unless they are volatile qualified the output does
    not depend on the unspecified behavior:

    x+y;

    But in:

    a[printf("Hello")]+a[printf(" World")];

    the output does depend on the order of evaluation,
    and a program containing this code is not strictly conforming.

    #ifdef __STDC_IEC_559__

    while the value of __STDC_IEC_559__ is not specified in the standard.

    The output of a strictly conforming program does not depend on the implementation used.

    Since the value of __STDC_IEC_559__ depends on the implementation,
    its use can produce a program that is not strictly conforming.

    Agreed. But it also can produce a program that is strictly conforming,
    just like your example of x+y above.

    However, given how horrible the accuracy requirements are when __STDC_IEC_60559_BFP__ is not pre#defined, the only way that a program
    could make any meaningful use of floating point and still be strictly conforming is if it limits such use to comparing floating point
    constants for relative order - and even then, that's only true if the
    constants are sufficiently far apart in value to guarantee the result of
    that comparison.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vincent Lefevre on Tue Nov 16 10:29:19 2021
    On 11/15/21 8:17 PM, Vincent Lefevre wrote:
    In article <smuc7i$6hq$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/15/21 4:18 AM, Vincent Lefevre wrote:
    In article <smn6d8$c92$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 11/12/21 6:17 PM, Vincent Lefevre wrote:
    ...
    The issue is more related to (strictly) conforming programs.

    It can't be. Strictly conforming programs are prohibited from having
    output that depends upon behavior that the standard leaves unspecified.
    This is not how I interpret the standard.

    I don't see how there's room for interpretation: "A strictly conforming
    program ... shall not produce output dependent on any unspecified ...
    behavior, ..." (4p6).

    I'm not sure what you intended to mean, but IMHO, the "It can't be."
    is wrong based on the unsatisfied constraint and definition 3.8 of "constraint" (but this should really be clarified).

    I said that this issue cannot be "related to (strictly) conforming
    programs". This issue can't come up in strictly conforming programs, nor
    does the way in which this issue might be resolved have any effect on
    whether a program qualifies as strictly conforming. Not only is there inherently a constraint violation, but the value of such a constant
    would be unspecified even if there were no constraint, and the only
    reason to care about which value is selected by the implementation would
    be if the value affects the observable behavior of your program, which
    would mean that it's not strictly conforming.

    ...
    That said, since the floating-point accuracy is not specified, can be
    extremely low and is not even checkable by the program (so that there
    is no possible fallback in case of low accuracy), there is not much
    one can do with floating point.

    ??? You can check for __STDC_IEC_60559_BFP__; if it's defined, then
    pretty much the highest possible accuracy is required.

    Indeed, well, almost I think. One should also check that
    FLT_EVAL_METHOD is either 0 or 1. Otherwise the accuracy
    becomes unknown.

    A value of 2 tells you that the implementation will evaluate "all
    operations and constants to the range and precision of the long double
    type", which is pretty specific about what the accuracy is. It has
    precisely the same accuracy that it would have had on an otherwise
    identical implementation where FLT_EVAL_METHOD == 0, if you explicitly converted all double operands to long double, and then converted the
    final result back to double. Would you consider the accuracy of such
    code to be unknown?
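
    The equivalence just described, in code form (a sketch; f1 and f2 are
    hypothetical names, and a, b, c stand for arbitrary double operands):

    /* Under FLT_EVAL_METHOD == 2, f1 computes exactly what f2 computes
       on an otherwise identical implementation where FLT_EVAL_METHOD
       is 0. */
    double f1(double a, double b, double c)
    {
        return a * b + c;
    }

    double f2(double a, double b, double c)
    {
        return (double)((long double)a * (long double)b + (long double)c);
    }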

    A value of -1 leaves some uncertainty about the accuracy. However, the
    evaluation format is allowed to have range or precision that is greater
    than that of the expression's type. The accuracy of such a format might be
    greater than that of the expression's type, but it's not allowed to be
    worse. That's far less uncertainty than what is allowed if
    __STDC_IEC_60559_BFP__ is NOT predefined by the implementation.
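
    So the guard being discussed might be spelled like this (a sketch,
    using the C2x macro name from above; on C17 and earlier the test
    would be __STDC_IEC_559__ instead):

    #include <float.h>

    #if defined(__STDC_IEC_60559_BFP__) \
        && (FLT_EVAL_METHOD == 0 || FLT_EVAL_METHOD == 1)
      /* IEC 60559 accuracy requirements apply, and expressions are
         evaluated to their own type (0) or at most to double (1) */
    #else
      /* the accuracy of +, -, *, / is implementation-defined and may
         be stated to be unknown (5.2.4.2.2) */
    #endif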

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to James Kuyper on Tue Nov 16 19:00:02 2021
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    On 11/15/21 8:28 PM, Vincent Lefevre wrote:
    In article <smuvs5$6qh$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    [...]
    "If a "shall" or "shall not" requirement that appears outside of a
    constraint or runtime-constraint is violated, the behavior is undefined. >>> Undefined behavior is otherwise indicated in this document by the words
    "undefined behavior" or by the omission of any explicit definition of
    behavior. There is no difference in emphasis among these three; they all >>> describe "behavior that is undefined"." (4p2).

    Omission of any explicit definition of behavior.

    The fact that a constraint is violated does not erase the definition
    provided by 6.4.4.2p4, or render it any less applicable.

    I suggest that that may be an open question.

    ... There is a constraint
    (restriction) that is not satisfied.

    Agreed.

    ... Thus the code becomes invalid and
    nothing gets defined as a consequence.

    This is, I presume, what makes you think that 6.4.4.2p4 is effectively erased?

    The standard says nothing to that effect. The only meaningful thing it
    says is that a diagnostic is required (5.1.1.3p1). I do not consider the standard's definition of "constraint" to be meaningful: "restriction,
    either syntactic or semantic, by which the exposition of language
    elements is to be interpreted" (3.8). What that sentence means,if
    anything, is not at all clear, but one thing is clear - it says nothing about what should happen if the restriction is violated. 5.1.1.3p1 is
    the only clause that says anything about that issue.
    Note: the requirement specified in 5.1.1.3p1 would also be erased, if a constraint violation is considered to effectively erase unspecified
    parts of the rest of the standard. Surely you don't claim that 3.8
    specifies which parts get erased?

    The standard's definition of "constraint" is uncomfortably vague -- but
    that doesn't mean I'm comfortable ignoring it.

    Given the definition of a "constraint" as a "restriction, either
    syntactic or semantic, by which the exposition of language elements is
    to be interpreted", it seems to me to be at least plausible that when a constraint is violated, the "exposition of language elements" cannot be interpreted.

    The implication would be that any program that violates a constraint has undefined behavior (assuming it survives translation). And yes, I'm
    proposing that violating any single constraint makes most of the rest of
    the standard moot.

    I'm not saying that this is the only way to interpret that wording.
    It's vague enough to permit a number of reasonable readings. But I
    don't think we can just ignore it.

    I'd like to see a future standard settle this one way or the other.

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Keith Thompson on Thu Dec 2 22:14:43 2021
    Keith Thompson <Keith.S.Thompson+u@gmail.com> schrieb:

    I think it's a tricky question. I think the language would be cleaner
    if the standard explicitly stated that violating a constraint always
    results in undefined behavior -- or if it explicitly stated that it
    doesn't. (The former is my personal preference.)

    Other language standards use different concepts, and maybe the C
    standard could be improved by adopting them.

    The Fortran standard, for example, has numbered constraints and
    general prohibitions or requirements, denoted by "shall not"
    and "shall", respectively.

    If a numbered constraint is violated, the compiler has to detect
    and report this. If it fails to do so, it's a compiler bug.
    This is usually done for things that can easily be checked
    at compile time.

    If a "shall" or "shall not" directive is violated, then this is
    a bug in the program, and quite specifically the programmer's fault.
    A compiler may or may not report an error, this is then mostly
    a quality of implementation issue, and often a tradeoff with
    execution speed.

    It's cleaner than what C has, IMHO.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to James Kuyper on Wed Dec 8 10:09:03 2021
    Sorry for the late reply (not much time ATM).

    In article <sn0iog$gna$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    [...]
    ??? You can check for __STDC_IEC_60559_BFP__; if it's defined, then
    pretty much the highest possible accuracy is required.

    Indeed, well, almost I think. One should also check that
    FLT_EVAL_METHOD is either 0 or 1. Otherwise the accuracy
    becomes unknown.

    A value of 2 tells you that the implementation will evaluate "all
    operations and constants to the range and precision of the long double
    type", which is pretty specific about what the accuracy is. It has
    precisely the same accuracy that it would have had on an otherwise
    identical implementation where FLT_EVAL_METHOD == 0, if you explicitly converted all double operands to long double, and then converted the
    final result back to double. Would you consider the accuracy of such
    code to be unknown?

    Simply because the accuracy of long double is unknown and may be lower
    than that of float. Annex F says for long double:

    F.2 Types
    [...]
    The long double type matches an IEC 60559 extended format,363) else
    a non-IEC 60559 extended format, else the IEC 60559 double format.

    Any non-IEC 60559 extended format used for the long double type
    shall have more precision than IEC 60559 double and at least the
    range of IEC 60559 double.364) The value of FLT_ROUNDS applies to
    all IEC 60559 types supported by the implementation, but need not
    apply to non-IEC 60559 types.

    Just consider a non-IEC 60559 extended format. Note that the standard
    says that it shall have more *precision* than IEC 60559 double, but
    does not say anything about accuracy.

    A value of -1 leaves some uncertainty about the accuracy. However, the evaluation format is allowed to have range or precision that is greater
    than that of the expression's type. The accuracy of such a type might be greater than that of the expression's type, but it's not allowed to be
    worse.

    I don't see where the standard says that it's not allowed to be worse.
    One just has:

    5.2.4.2.2 Characteristics of floating types <float.h>
    [...]
    6 The accuracy of the floating-point operations (+, -, *, /) and
    of the library functions in <math.h> and <complex.h> that return
    floating-point results is implementation-defined, as is the
    accuracy of the conversion between floating-point internal
    representations and string representations performed by the
    library functions in <stdio.h>, <stdlib.h>, and <wchar.h>. The
    implementation may state that the accuracy is unknown.

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vincent Lefevre@21:1/5 to Keith Thompson on Wed Dec 8 10:56:07 2021
    In article <87pmqzv64t.fsf@nosuchdomain.example.com>,
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    The standard's definition of "constraint" is uncomfortably vague -- but
    that doesn't mean I'm comfortable ignoring it.

    Given the definition of a "constraint" as a "restriction, either
    syntactic or semantic, by which the exposition of language elements is
    to be interpreted", it seems to me to be at least plausible that when a constraint is violated, the "exposition of language elements" cannot be interpreted.

    DR 261[*] points in this direction, IMHO. Here, the constraint on
    constant expressions is used to determine whether an expression may be
    regarded as a constant expression or not. Thus, if the constraint
    is not satisfied, then the expression is not a constant expression,
    and what falls under this constraint does not apply.

    Note that the Committee Response says "valid interpretation of the
    code", i.e. if the requirements of a constraint are not met, then
    there is no "valid interpretation of the code".

    [*] http://www.open-std.org/JTC1/SC22/WG14/www/docs/dr_261.htm

    --
    Vincent Lefèvre <vincent@vinc17.net> - Web: <https://www.vinc17.net/>
    100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
    Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vincent Lefevre on Fri Dec 17 21:02:40 2021
    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <86sfxbpm9d.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    To me it seems better for INFINITY to be defined as it is rather
    than being conditionally defined. If what is needed is really an
    infinite value, just write INFINITY and the code either works or
    compiling it gives a diagnostic.

    diagnostic and undefined behavior. [...]

    Not everyone agrees with this conclusion.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Fri Dec 17 21:00:20 2021
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Vincent Lefevre <vincent-news@vinc17.net> writes:

    [...]

    Shouldn't the standard by changed to make INFINITY conditionally
    defined (if not required to expand to a true infinity)? [...]

    To me it seems better for INFINITY to be defined as it is rather
    than being conditionally defined. If what is needed is really an
    infinite value, just write INFINITY and the code either works or
    compiling it gives a diagnostic. If what is needed is just a very
    large value, write HUGE_VAL (or HUGE_VALF or HUGE_VALL, depending)
    and the code works whether infinite floating-point values are
    supported or not. If it's important that infinite values be
    supported but we don't want to risk a compilation failure, use
    HUGE_VAL combined with an assertion

    assert( HUGE_VAL == HUGE_VAL/2 );

    Alternatively, use INFINITY only in one small .c file, and give
    other sources a make dependency for a successful compilation
    (with of course a -pedantic-errors option) of that .c file. I
    don't see that having INFINITY be conditionally defined buys
    anything, except to more or less force use of #if/#else/#endif
    blocks in the preprocessor. I don't mind using the preprocessor
    when there is a good reason to do so, but here I don't see one.

    I don't see how that's better than conditionally defining INFINITY.

    It's better only in the sense that it works with the existing
    C standards, and may give acceptable results in practice.
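
    Concretely, the HUGE_VAL-plus-assertion route might be packaged as
    follows (a sketch; the assertion fires at run time on implementations
    without infinities):

    #include <assert.h>
    #include <math.h>

    int main(void)
    {
        /* holds exactly when HUGE_VAL is a true infinity: if HUGE_VAL
           were finite (e.g. DBL_MAX), HUGE_VAL/2 would compare smaller */
        assert(HUGE_VAL == HUGE_VAL/2);
        return 0;
    }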

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Mon Jan 3 12:03:53 2022
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    [ is a constraint violation always undefined behavior? ]

    [...] one possible interpretation of the phrase "a restriction
    ... by which the exposition of language elements is to be
    interpreted" could be that if the constraint is violated, there
    is no meaningful interpretation. Or to put it another way,
    that the semantic description applies only if all constraints
    are satisfied.

    I've searched for the word "constraint" in the C89 and C99
    Rationale documents. They were not helpful.

    I am admittedly trying to read into the standard what I think
    it *should* say. A rule that constraint violations cause
    undefined behavior would, if nothing else, make the standard a
    bit simpler.

    Note that constraint violations are not undefined behavior in a
    strict literal reading of the definition. Undefined behavior
    means there are no restrictions as to what an implementation may
    do, but constraint violations require the implementation to
    issue at least one diagnostic, which is not the same as "no
    restrictions".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Mon Jan 3 11:55:53 2022
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    James Kuyper <jameskuyper@alumni.caltech.edu> writes:

    On 10/9/21 4:17 PM, Vincent Lefevre wrote:

    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint. A
    diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    "For decimal floating constants, and also for hexadecimal floating
    constants when FLT_RADIX is not a power of 2, the result is either
    the nearest representable value, or the larger or smaller
    representable value immediately adjacent to the nearest
    representable value, chosen in an implementation-defined manner.
    For hexadecimal floating constants when FLT_RADIX is a power of 2,
    the result is correctly rounded." (6.4.4.2p3)

    In the case of overflow, for a type that cannot represent infinity,
    there is only one "nearest representable value", which is DBL_MAX.

    But does that apply when a constraint is violated?

    6.4.4p2, a constraint, says:

    Each constant shall have a type and the value of a constant shall be
    in the range of representable values for its type.

    A "constraint", aside from triggering a required diagnostic, is a "restriction, either syntactic or semantic, by which the exposition
    of language elements is to be interpreted",

    Note that the C standard uses the word "constraint" in at least
    two different senses. One is the sense of the definition given
    above. Another is the sense of any stated restriction in a
    'Constraints' section (and nothing else). AFAICT (I have not
    tried to do a thorough search) the second sense never includes
    a syntactic restriction.

    which is IMHO a bit vague.

    Certainly it is at least ambiguous and subjective.

    My mental model is that if a program violates a constraint and the implementation still accepts it (i.e., the required diagnostic is a
    non-fatal warning) the program's behavior is undefined -- but the
    standard doesn't say that. Of course if the implementation rejects
    the program, it has no behavior.

    This paragraph uses the word "behavior" in two different senses.
    The C standard uses "behavior" to mean behavior in the abstract
    machine, or sometimes to mean a description of behavior in the
    abstract machine. In this sense the program has behavior whether
    it is rejected or not: if it has defined behavior, then that is
    the behavior, and if it has undefined behavior then the behavior is
    "undefined behavior". The sentence "if the implementation rejects
    the program, it has no behavior" uses the word behavior in the
    sense of "run-time behavior", which is a different sense than how
    "behavior" is used in the C standard. A C program has behavior,
    in the sense that the C standard uses the term, whether it is
    accepted or not, or even whether it is compiled or not.

    For what it's worth, given this:

    double too_big = 1e1000;

    gcc, clang, and tcc all print a warning and set too_big to infinity.
    That's obviously valid if the behavior is undefined. I think it's
    also valid if the behavior is defined; the nearest representable
    value is DBL_MAX, and the larger representable value immediately
    adjacent to DBL_MAX is infinity.

    Since the implementations listed all have infinities, the value of
    the constant is in the range of representable values for its type,
    so no constraint is violated, and the behavior is always defined.

    It doesn't seem to me to be particularly useful to say that a
    program can be rejected, but its behavior is defined if the
    implementation chooses not to reject it.

    Such cases occur often. Consider an implementation where SIZE_MAX
    is 4294967295. If translating a program that has a declaration

    static char blah[3221225472];

    then the implementation is free to reject it, but the behavior is
    defined whether the implementation accepts the program or rejects
    it. Behavior is a property of the program, not the implementation.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Mon Jan 3 12:56:19 2022
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    James Kuyper <jameskuyper@alumni.caltech.edu> writes:

    On 11/15/21 8:28 PM, Vincent Lefevre wrote:

    In article <smuvs5$6qh$1@dont-email.me>,
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    [...]

    "If a "shall" or "shall not" requirement that appears outside of
    a constraint or runtime-constraint is violated, the behavior is
    undefined. Undefined behavior is otherwise indicated in this
    document by the words "undefined behavior" or by the omission of
    any explicit definition of behavior. There is no difference in
    emphasis among these three; they all describe "behavior that is
    undefined"." (4p2).

    Omission of any explicit definition of behavior.

    The fact that a constraint is violated does not erase the
    definition provided by 6.4.4.2p4, or render it any less applicable.

    I suggest that that may be an open question.

    ... There is a constraint
    (restriction) that is not satisfied.

    Agreed.

    ... Thus the code becomes invalid and
    nothing gets defined as a consequence.

    This is, I presume, what makes you think that 6.4.4.2p4 is
    effectively erased?

    The standard says nothing to that effect. The only meaningful
    thing it says is that a diagnostic is required (5.1.1.3p1). I do
    not consider the standard's definition of "constraint" to be
    meaningful: "restriction, either syntactic or semantic, by which
    the exposition of language elements is to be interpreted" (3.8).

    (Incidental remark: the problem is not that the definition of
    constraint is meaningless; the problem is that the meaning
    of the definition is ambiguous and subjective (but that's not
    the same as meaningless (or "not meaningful")).)

    What that sentence means, if anything, is not at all clear, but one
    thing is clear - it says nothing about what should happen if the
    restriction is violated. 5.1.1.3p1 is the only clause that says
    anything about that issue. Note: the requirement specified in
    5.1.1.3p1 would also be erased, if a constraint violation is
    considered to effectively erase unspecified parts of the rest of
    the standard. Surely you don't claim that 3.8 specifies which
    parts get erased?

    The standard's definition of "constraint" is uncomfortably vague
    -- but that doesn't mean I'm comfortable ignoring it.

    Given the definition of a "constraint" as a "restriction, either
    syntactic or semantic, by which the exposition of language
    elements is to be interpreted", it seems to me to be at least
    plausible that when a constraint is violated, the "exposition of
    language elements" cannot be interpreted.

    The implication would be that any program that violates a constraint
    has undefined behavior (assuming it survives translation). And yes,
    I'm proposing that violating any single constraint makes most of the
    rest of the standard moot.

    I'm not saying that this is the only way to interpret that wording.
    It's vague enough to permit a number of reasonable readings. [...]

    So you're saying that the meaning of the definition of constraint
    is to some extent subjective, ie, reader dependent?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Mon Jan 3 12:48:56 2022
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <861r3pbbwh.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Vincent Lefevre <vincent-news@vinc17.net> writes:

    In article <86wnmoov7c.fsf@linuxsc.com>,
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    What occurs is defined behavior and (for implementations that do
    not have the needed value for infinity) violates a constraint.
    A diagnostic must be produced.

    If this is defined behavior, where is the result of an overflow
    defined by the standard? (I can see only 7.12.1p5, but this is
    for math functions; here, this is a constant that overflows.)

    I'm wondering if you have resolved your original uncertainty
    about the behavior of INFINITY in an implementation that does
    not support infinities?

    I suspect that by saying "overflow", the standard actually meant that
    the result is not in the range of representable values. This is the
    only way the footnote "In this case, using INFINITY will violate the
    constraint in 6.4.4 and thus require a diagnostic." can make sense
    (the constraint in 6.4.4 is about the range, not overflow). But IMHO,
    the failing constraint makes the behavior undefined, actually makes
    the program erroneous.

    Suppose we have an implementation that does not support
    infinities, a range of double and long double up to about ten to
    the 99999, and ask it to translate the following .c file

    double way_too_big = 1.e1000000;

    This constant value violates the constraint in 6.4.4. Do you
    think this .c file (and any program it is part of) has undefined
    behavior? If so, do you think any constraint violation implies
    undefined behavior, or just some of them?

    (Jumping in though the question was addressed to someone else.)

    I think it's a tricky question. I think the language would be
    cleaner if the standard explicitly stated that violating a
    constraint always results in undefined behavior -- or if it
    explicitly stated that it doesn't. (The former is my personal
    preference.)

    Here are some statements that I believe are true:

    1. The C standard has no statement that says directly that
    constraint violations result in undefined behavior.

    2. The definition of "constraint" in the C standard is ambiguous
    and does not have a single objective meaning.

    3. There are no indications (at least none that I am aware of) in
    the C standard, or any other official writing of the WG14 group,
    of what meaning is intended by the ISO C group for the question
    in question.

    Clearly a compiler is allowed (but not required) to reject a program
    that violates a constraint. If it does so, there is no behavior.

    Same comment as I gave in my other recent posting - programs have
    behavior, in the sense used in the C standard, regardless of
    whether any implementation accepts or rejects them. (The behavior
    may be "undefined behavior".)

    So the question is whether the behavior is undefined if the
    implementation chooses not to reject it. (I personally don't see a
    whole lot of value in defining the behavior of code that could have
    been rejected outright.

    Again, there are lots of constructs that clearly have defined
    behavior, and yet implementations can choose to reject them.

    I'm also not a big fan of the fact that
    required diagnostics don't have to be fatal, but that's not likely
    to change.)

    IMO this view is short-sighted. Implementations are allowed to
    define extensions as long as they are documented and don't change
    the meaning of any strictly conforming program. If any required
    diagnostic has to be fatal, that would disallow all kinds of
    useful extensions.

    Speaking just for myself, I would like implementations to provide
    an option under which any required diagnostic would result in the
    program being rejected. But only an option, and in any case that
    is in the area of QOI issues, which the C standard has explicitly
    chosen not to address.

    [analysis of possible interpretations of the above code fragment]

    My question was only about whether there is undefined behavior.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to Tim Rentsch on Mon Jan 3 16:45:32 2022
    On 1/3/22 3:03 PM, Tim Rentsch wrote:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    [ is a constraint violation always undefined behavior? ]

    [...] one possible interpretation of the phrase "a restriction
    ... by which the exposition of language elements is to be
    interpreted" could be that if the constraint is violated, there
    is no meaningful interpretation. Or to put it another way,
    that the semantic description applies only if all constraints
    are satisfied.

    I've searched for the word "constraint" in the C89 and C99
    Rationale documents. They were not helpful.

    I am admittedly trying to read into the standard what I think
    it *should* say. A rule that constraint violations cause
    undefined behavior would, if nothing else, make the standard a
    bit simpler.

    Note that constraint violations are not undefined behavior in a
    strict literal reading of the definition. Undefined behavior
    means there are no restrictions as to what an implemenation may
    do, but constraint violations require the implementation to
    issue at least one diagnostic, which is not the same as "no
    restrictions".

    Although, after issuing that one diagnostic, if the implementation
    continues and generates an output program, and that program is run, then
    its behavior is explicitly defined to be undefined behavior.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Keith Thompson@21:1/5 to Richard Damon on Mon Jan 3 14:36:08 2022
    Richard Damon <Richard@Damon-Family.org> writes:
    On 1/3/22 3:03 PM, Tim Rentsch wrote:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    [ is a constraint violation always undefined behavior? ]

    [...] one possible interpretation of the phrase "a restriction
    ... by which the exposition of language elements is to be
    interpreted" could be that if the constraint is violated, there
    is no meaningful interpretation. Or to put it another way,
    that the semantic description applies only if all constraints
    are satisfied.

    I've searched for the word "constraint" in the C89 and C99
    Rationale documents. They were not helpful.

    I am admittedly trying to read into the standard what I think
    it *should* say. A rule that constraint violations cause
    undefined behavior would, if nothing else, make the standard a
    bit simpler.
    Note that constraint violations are not undefined behavior in a
    strict literal reading of the definition. Undefined behavior
    means there are no restrictions as to what an implemenation may
    do, but constraint violations require the implementation to
    issue at least one diagnostic, which is not the same as "no
    restrictions".

    Although, after issuing that one diagnostic, if the implementation
    continues and generates an output program, and that program is run,
    then its behavior is explicitly defined to be undefined behavior.

    Explicitly? Where does the standard say that?

    --
    Keith Thompson (The_Other_Keith) Keith.S.Thompson+u@gmail.com
    Working, but not speaking, for Philips
    void Void(void) { Void(); } /* The recursive call of the void */

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Tim Rentsch on Mon Jan 3 22:45:27 2022
    On Monday, January 3, 2022 at 3:56:22 PM UTC-5, Tim Rentsch wrote:
    Keith Thompson <Keith.S.T...@gmail.com> writes:

    James Kuyper <james...@alumni.caltech.edu> writes:
    ...
    The standard says nothing to that effect. The only meaningful
    thing it says is that a diagnostic is required (5.1.1.3p1). I do
    not consider the standard's definition of "constraint" to be
    meaningful: "restriction, either syntactic or semantic, by which
    the exposition of language elements is to be interpreted" (3.8).
    ...
    The standard's definition of "constraint" is uncomfortably vague
    -- but that doesn't mean I'm comfortable ignoring it.

    Given the definition of a "constraint" as a "restriction, either
    syntactic or semantic, by which the exposition of language
    elements is to be interpreted", it seems to me to be at least
    plausible that when a constraint is violated, the "exposition of
    language elements" cannot be interpreted.
    ...
    I'm not saying that this is the only way to interpret that wording.
    It's vague enough to permit a number of reasonable readings. [...]

    So you're saying that the meaning of the definition of constraint
    is to some extent subjective, ie, reader dependent?

    As I said above, it seems to me to be so poorly worded that it's not
    clear to me that it has any meaning, much less one that is subjective.
    Since other people disagree with me on that point, there would
    appear to be some subjectivity at play in that judgment, but after
    reading their arguments, I remain at a loss as to how they can
    interpret that phrase as being meaningful.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Richard Damon on Tue Jan 4 02:10:02 2022
    On 1/3/22 4:45 PM, Richard Damon wrote:
    On 1/3/22 3:03 PM, Tim Rentsch wrote:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    [ is a constraint violation always undefined behavior? ]

    [...] one possible interpretation of the phrase "a restriction
    ... by which the exposition of language elements is to be
    interpreted" could be that if the constraint is violated, there
    is no meaningful interpretation. Or to put it another way,
    that the semantic description applies only if all constraints
    are satisfied.

    I've searched for the word "constraint" in the C89 and C99
    Rationale documents. They were not helpful.

    I am admittedly trying to read into the standard what I think
    it *should* say. A rule that constraint violations cause
    undefined behavior would, if nothing else, make the standard a
    bit simpler.

    Note that constraint violations are not undefined behavior in a
    strict literal reading of the definition. Undefined behavior
    means there are no restrictions as to what an implemenation may
    do, but constraint violations require the implementation to
    issue at least one diagnostic, which is not the same as "no
    restrictions".

    Although, after issuing that one diagnostic, if the implementation
    continues and generates an output program, and that program is run, then
    its behavior is explicitly defined to be undefined behavior.

    Where? The standard specifies that there are only two explicit ways
    whereby undefined behavior is indicated: a "shall" or "shall not" that
    appears outside of a constraint or runtime-constraint, or the use of the
    words "undefined behavior". In which clause is there a relevant use
    of "shall", "shall not" or "undefined behavior"?

    There's one implicit method, and that's "by the omission of any explicit definition of behavior", but I've argued that there is an explicit
    definition of the behavior that applies in this case. In the absence of
    a general statement that "a constraint shall be satisfied" or "a
    constraint shall not be violated" or "violation of a constraint has
    undefined behavior", or some other corresponding wording that qualifies
    as explicitly making the behavior undefined, the existence of that
    explicit definition of behavior is not erased by the constraint violation.
    If a constraint violation does not render an explicit definition of the behavior inapplicable, the only consequences of that violation are a
    mandatory diagnostic and permission to reject the program. Should an implementation choose not to reject the program, and should the
    resulting executable be run, the behavior of that program is still
    constrained by all of the requirements of the standard.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to James Kuyper on Mon Jan 17 05:35:26 2022
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:

    On Monday, January 3, 2022 at 3:56:22 PM UTC-5, Tim Rentsch wrote:

    Keith Thompson <Keith.S.T...@gmail.com> writes:

    James Kuyper <james...@alumni.caltech.edu> writes:

    ...

    The standard says nothing to that effect. The only meaningful
    thing it says is that a diagnostic is required (5.1.1.3p1). I do
    not consider the standard's definition of "constraint" to be
    meaningful: "restriction, either syntactic or semantic, by which
    the exposition of language elements is to be interpreted" (3.8).

    ...

    The standard's definition of "constraint" is uncomfortably vague
    -- but that doesn't mean I'm comfortable ignoring it.

    Given the definition of a "constraint" as a "restriction, either
    syntactic or semantic, by which the exposition of language
    elements is to be interpreted", it seems to me to be at least
    plausible that when a constraint is violated, the "exposition of
    language elements" cannot be interpreted.

    ...

    I'm not saying that this is the only way to interpret that wording.
    It's vague enough to permit a number of reasonable readings. [...]

    So you're saying that the meaning of the definition of constraint
    is to some extent subjective, ie, reader dependent?

    As I said above, it seems to me to be so poorly worded that it's not
    clear to me that it has any meaning, much less one that is subjective.
    Since other people disagree with me on that point, there would
    appear to be some subjectivity at play in that judgment, but after
    reading their arguments, I remain at a loss as to how they can
    interpret that phrase as being meaningful.

    So is what you're saying that whether the meaning is subjective is
    itself a subjective question?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Richard Damon on Mon Jan 17 10:09:08 2022
    Richard Damon <Richard@Damon-Family.org> writes:

    On 1/3/22 3:03 PM, Tim Rentsch wrote:

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    [ is a constraint violation always undefined behavior? ]

    [...] one possible interpretation of the phrase "a restriction
    ... by which the exposition of language elements is to be
    interpreted" could be that if the constraint is violated, there
    is no meaningful interpretation. Or to put it another way,
    that the semantic description applies only if all constraints
    are satisfied.

    I've searched for the word "constraint" in the C89 and C99
    Rationale documents. They were not helpful.

    I am admittedly trying to read into the standard what I think
    it *should* say. A rule that constraint violations cause
    undefined behavior would, if nothing else, make the standard a
    bit simpler.

    Note that constraint violations are not undefined behavior in a
    strict literal reading of the definition. Undefined behavior
    means there are no restrictions as to what an implemenation may
    do, but constraint violations require the implementation to
    issue at least one diagnostic, which is not the same as "no
    restrictions".

    Although, after issuing that one diagnostic, if the implementation
    continues and generates an output program, and that program is run,
    then its behavior is explicitly defined to be undefined behavior.

    I don't think so. It may be the case that the C standard is
    meant to imply that a constraint violation will necessarily
    also result in undefined behavior, but AFAICT there is no
    explicit statement to that effect.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)