From robin.vowels@gmail.com@21:1/5 to All on Wed Aug 12 03:21:43 2020
    from comp.lang.fortran

    On Wednesday, August 12, 2020 at 11:56:40 AM UTC+10, gah4 wrote:
    On Tuesday, August 11, 2020 at 1:45:16 PM UTC-7, Steve Lionel wrote:

    We decided not to do BITS for 202X. Adding a completely new datatype to
    the language causes great upheaval, not only for the standard but also
    for compiler writers. The general feeling was that enhancing BOZ plus
    the extensive existing bit intrinsics suffices for most uses.

    For comparison purposes, not that I necessarily think it would be a
    good way to do it in Fortran, PL/I has bit strings. The same operations
    (such as substring) work on them as on character strings; this dates
    back almost to the beginning of PL/I.

    Bit strings and character strings have been in PL/I since the beginning, including substring and various other string operations.

    The built-in function UNSPEC converts a variable of any type to a bit string.

    It doesn't "convert" anything.
    It yields the internal bit representation of the variable.

    The pseudo-variable UNSPEC converts a bit string to another type.

    It doesn't convert anything. It places the value of the RHS
    of an assignment as bits in the given variable.

    DCL I FIXED BIN(31,0), X FLOAT BIN(21);

    should get you something like:


    (I am not sure now which end it pads when doing the conversions to/from BIT strings.)

    UNSPEC gives the internal bit pattern.
    It does not pad.

    DCL Y FLOAT BIN(21);
    should set Y to 100.

    Otherwise, PL/I constants have the base, scale, mode, and precision
    in which they are written. Fortunately, conversions most often work well.

    123.0 is FIXED DEC(4,1), that is, fixed-point decimal with four
    digits, one of them after the decimal point.
    1100100B is FIXED BIN(7,0) with the decimal value 100.

    Floating point constants have an exponent.
    123.0e0 is FLOAT DEC(4).

    Note that like Fortran's SELECTED_xxx_KIND, which specifies precision
    in decimal digits, PL/I's DECIMAL base is not necessarily implemented
    in decimal arithmetic, especially for floating point.

    The PL/I implementation depends on the available hardware.
    If decimal fixed-point hardware is available, PL/I will
    use it.[1]
    If decimal float hardware is available, PL/I will use it.
    In the initial implementation of the IBM 360, decimal
    floating-point was not available, but binary float was --
    so binary floating-point hardware was used for both
    binary float and decimal float.
    Current IBM System z hardware has decimal float and binary
    float, so PL/I uses both according to the declaration of
    the variable.
    [1] An exception was PL/C, a dialect, which used floating-point
    hardware for decimal fixed-point data.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)