• IBM 360/370 Floating Point Format

    From robin.vowels@gmail.com@21:1/5 to Konstantin Trachos on Tue May 12 05:09:44 2020
    On Tuesday, May 3, 1994 at 12:10:16 AM UTC+10, Konstantin Trachos wrote:
    Hi,

    I'm not quite sure, if this is the proper newsgroup, so if
    this is the wrong place for my query, please be patient and don't
    blame me. Thank you.


    Recently I ran into the IBM 360/370 floating point format,
    which is quite unusual to me. It uses a sign bit S, a 7-bit
    characteristic Ch and a 24-bit binary fraction part F.
    The exponent is exp = Ch-64. Nothing unusual so far.

    Apart from that, the value of the number represented is 16**exp *F.

    I'm puzzled by the choice to use hex as the base for this
    floating point number system instead of the conventional base 2.
    Why 16**exp *F rather than 2**exp *F?

    Can you give any reasons like range or system design that
    might be behind this decision?
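
    For concreteness, the single-precision layout described above can be
    decoded with a few lines of Python. This is an illustrative sketch,
    not anything IBM shipped; decode_hfp32 is a name I made up.

```python
def decode_hfp32(word: int) -> float:
    """Decode a 32-bit IBM System/360 single-precision (hex) float.

    Layout: 1 sign bit S, 7-bit characteristic Ch (excess-64),
    24-bit fraction F interpreted as a value in [0, 1).
    Value = (-1)**S * 16**(Ch - 64) * F
    """
    sign = -1.0 if (word >> 31) & 1 else 1.0
    characteristic = (word >> 24) & 0x7F
    fraction = (word & 0xFFFFFF) / float(1 << 24)  # F in [0, 1)
    return sign * 16.0 ** (characteristic - 64) * fraction

# 1.0 is stored as Ch = 65 (exp = +1) and F = 1/16, since 16**1 * 1/16 = 1.0:
print(decode_hfp32(0x41100000))   # -> 1.0
```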

    The reason was speed.
    When two values are added or subtracted, one or more
    high-order hex digits of the mantissa may become zero.
    Normalising the mantissa requires up to 6 shifts for
    single precision, and up to 14 shifts for double precision.

    In the case of a binary machine having a 24-bit mantissa,
    up to 23 shifts would be required, and up to 55 shifts
    for double precision.
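
    The disparity in shift counts is easy to verify with a toy count over
    the bare 24-bit fraction (a sketch of my own; the quoted worst cases
    are slightly larger because the hardware also carries guard digits):

```python
def normalize_shifts(fraction: int, width_bits: int, step_bits: int) -> int:
    """Shift steps needed to normalize a nonzero fraction, where each
    step shifts left by step_bits (1 on a binary machine, 4 on hex)."""
    top_mask = ((1 << step_bits) - 1) << (width_bits - step_bits)
    shifts = 0
    while not (fraction & top_mask):
        fraction <<= step_bits
        shifts += 1
    return shifts

# Worst case: a 24-bit fraction whose only 1 bit is the lowest one.
f = 0x000001
print(normalize_shifts(f, 24, 4))   # hex machine: 5 nibble shifts
print(normalize_shifts(f, 24, 1))   # binary machine: 23 bit shifts
```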

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From robin.vowels@gmail.com@21:1/5 to Robert Hyatt on Tue May 12 05:20:42 2020
    On Thursday, May 5, 1994 at 11:27:21 PM UTC+10, Robert Hyatt wrote:
    In article <2q9971$5ag@legato.Legato.COM> karl@legato.com (Karl Rowley) writes:
    In article <2q5opf$co4@nameserv.sys.hokudai.ac.jp> hiroshi@cgate.hipecs.hokudai.ac.jp (Hiroshi) writes:
    In article <2q31k8$rsa@news.uni-paderborn.de>,
    Konstantin Trachos <trachos@uni-paderborn.de> wrote:
    Hi,
    Recently I ran into the IBM 360/370 floating point format,
    which is quite unusual to me. It uses a sign bit S, a 7-bit
    characteristic Ch and a 24-bit binary fraction part F.
    The exponent is exp = Ch-64. Nothing unusual so far.

    Apart from that, the value of the number represented is 16**exp *F.

    I'm puzzled by the choice to use hex as the base for this
    floating point number system instead of the conventional base 2.
    Why 16**exp *F rather than 2**exp *F?

    Can you give any reasons like range or system design that
    might be behind this decision?

    Base 16 can represent a wider range of magnitudes than base 2,
    so fewer bits are needed for the exponent, leaving more bits
    for the mantissa, which they thought might give better accuracy.

    Yes, the radix-16 exponent does free more bits for the mantissa.
    In the best case, the mantissa has four more bits than it might otherwise.

    How about 3 bits more?


    However, the use of a radix-16 exponent means that many numbers must
    be represented with mantissas that begin with zeroes. In the worst
    case, the mantissa must begin with four zeroes.

    Not with normalized FP hardware. The /360 and /370 normalized results
    (assuming you did a normalized operation) so that the leftmost "nibble"
    (4 bits) was always non-zero. Worst case is when the leftmost nibble
    was "1" (0001), which lost 3 bits of mantissa precision. If it ever
    became 0 (0000) the FP normalize would shift left 4 bits (one nibble)
    and then decrement the exponent by 1.
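
    That normalize step can be sketched in a few lines of Python
    (illustrative only; the names are mine):

```python
def normalize_hfp(characteristic: int, fraction: int):
    """Post-normalize a 24-bit hex fraction: while the leading nibble
    is zero, shift left one nibble (4 bits) and decrement the
    excess-64 characteristic by 1."""
    if fraction == 0:
        return 0, 0          # true zero on the /360: all bits clear
    while (fraction & 0xF00000) == 0:
        fraction = (fraction << 4) & 0xFFFFFF
        characteristic -= 1
    return characteristic, fraction

# 16**1 * 0x011000/16**6 renormalizes to 16**0 * 0x110000/16**6:
print(normalize_hfp(0x41, 0x011000))   # -> (64, 1114112), i.e. (0x40, 0x110000)
```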


    So, in the best case four more bits are gained for the mantissa,
    but in the worst case, there is no gain.

    Also, this floating-point format has the odd property that the number of bits
    in the mantissa varies depending on the value of the exponent!!!

    How so?


    The real reason for this format must lie in a simplified hardware design.

    ==============================================================================
    Karl Rowley  karl@legato.com
    Legato Systems Inc.  3145 Porter Drive, Palo Alto, CA 94304

    I think that the basic reason is that a 7 bit exponent (with a base-16 fraction) provides the same range of numbers as a 9 bit exponent with a base-2 fraction. Ergo, remove two bits from the exponent, and add 'em
    to the fraction. HOWEVER, since the exponent now shifts the fraction
    by 4 bits at a time, you have between 0 and 3 leading '0' bits in the fraction. Obviously, you therefore lose 1.5 bits in the fraction,

    You don't "lose" bits in the fraction, because you never really had them
    in the first place.[1]
    All that's guaranteed are 21 bits of precision, not 24.

    and
    gain two bits in the fraction with the reduced exponent size. Total
    gain is a whopping .5 bits of precision. This is somewhat increased
    when you consider operations like multiply and divide which will
    expand this .5 bits while the operation is being done. Of course,
    simply adding an additional 4 bits or whatever to the internal registers (unknown to the programmer) takes care of this (IBM called 'em guard
    digits.)
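
    The 21-versus-24-bit point is easy to check numerically (a quick
    Python illustration of my own):

```python
def significant_bits(fraction: int) -> int:
    """Bits from the leading 1 down to the bottom of a normalized
    24-bit hex fraction (leading nibble must be non-zero)."""
    assert fraction & 0xF00000, "fraction not normalized"
    return fraction.bit_length()

print(significant_bits(0xFFFFFF))   # leading nibble 0xF: all 24 bits
print(significant_bits(0x100000))   # leading nibble 0x1: only 21 bits
```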

    [1] The Halve instruction was an exception. It did not post-normalise,
    and you might end up with only 20 significant bits (when the
    most significant hex digit was 1).
