• SCALE intrinsic subprogram (a Fortran newsgroup thread)

    From Steven G. Kargl@21:1/5 to All on Sat Nov 4 21:21:53 2023
    The SCALE intrinsic allows one to change the
    floating point exponent for a REAL entity.
    For example,

    program foo
    real x
    x = 1
    print *, scale(x,1) ! print 2
    end program

    This scaling does not incur a floating point
    rounding error.

    Question. Anyone know why the Fortran standard (aka J3)
    restricted X to be a REAL entity? It would seem that X
    could be COMPLEX with obvious equivalence of

    SCALE(X,N) = CMPLX(SCALE(X%RE,N), SCALE(X%IM,N), KIND(X%IM))

    Should the Fortran standard be amended?
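    [For comparison, C's scalbn() provides the same exponent-adjustment
    operation as SCALE, and the proposed componentwise complex version is
    a one-liner. A minimal sketch; the cscale helper is hypothetical, not
    part of any library:]

    ```c
    #include <assert.h>
    #include <complex.h>
    #include <math.h>
    #include <stdio.h>

    /* Hypothetical helper: what a COMPLEX version of SCALE could do.
     * scalbn() is C's analogue of Fortran's SCALE: it adjusts the binary
     * exponent directly, so no rounding occurs (barring over/underflow). */
    static double complex cscale(double complex z, int n)
    {
        return scalbn(creal(z), n) + scalbn(cimag(z), n) * I;
    }

    int main(void)
    {
        /* SCALE(1.0, 1) == 2.0, exactly. */
        assert(scalbn(1.0, 1) == 2.0);

        /* Even 0.1, which is not exactly representable, scales and
         * unscales with no rounding error at all. */
        double x = 0.1;
        assert(scalbn(scalbn(x, 40), -40) == x);

        /* Componentwise scaling of a complex value is just as exact. */
        double complex z = 3.0 + 0.1 * I;
        double complex w = cscale(z, 4);
        assert(creal(w) == 48.0 && cimag(w) == scalbn(0.1, 4));
        printf("ok\n");
        return 0;
    }
    ```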

    --
    steve

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to Steven G. Kargl on Wed Nov 15 17:17:50 2023
    On 11/4/23 2:21 PM, Steven G. Kargl wrote:
    The SCALE intrinsic allows one to change the
    floating point exponent for a REAL entity.
    For example,

    program foo
    real x
    x = 1
    print *, scale(x,1) ! print 2
    end program

    This scaling does not incur a floating point
    rounding error.

    Question. Anyone know why the Fortran standard (aka J3)
    restricted X to be a REAL entity? It would seem that X
    could be COMPLEX with obvious equivalence of

    SCALE(X,N) = CMPLX(SCALE(X%RE,N), SCALE(X%IM,N), KIND(X%IM))

    Should the Fortran standard be amended?


    Wow, no answer yet.

    It does seem that sometimes Fortran is slow to add features, especially
    when need for them isn't shown.

    It does make sense to have the complex version, though as you note, it
    isn't all that hard to get away without it.

    If I had a vote, it would be yes.

  • From pehache@21:1/5 to All on Thu Nov 16 11:17:31 2023
    On 16/11/2023 at 02:28, gah4 wrote:
    On 11/4/23 2:21 PM, Steven G. Kargl wrote:
    The SCALE intrinsic allows one to change the
    floating point exponent for a REAL entity.
    For example,

    program foo
    real x
    x = 1
    print *, scale(x,1) ! print 2
    end program

    This scaling does not incur a floating point
    rounding error.

    Question. Anyone know why the Fortran standard (aka J3)
    restricted X to be a REAL entity? It would seem that X
    could be COMPLEX with obvious equivalence of

    SCALE(X,N) = CMPLX(SCALE(X%RE,N), SCALE(X%IM,N), KIND(X%IM))

    Should the Fortran standard be amended?


    Wow, no answer yet.

    It does seem that sometimes Fortran is slow to add features, especially
    when need for them isn't shown.

    The reason may be that the standard doesn't specify how a complex
    number is internally represented. In practice it is always represented by
    a pair (real, imag), but nothing would prevent a compiler from representing
    it as (modulus, argument), for instance. Given that, the standard cannot
    guarantee the absence of rounding errors.
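    [This point can be illustrated outside Fortran. The sketch below, in C
    for concreteness, uses a hypothetical struct polar and helpers to store
    a complex value as (modulus, argument); the part designators then become
    computed quantities that go through rounded trig, so exactness of SCALE
    on the parts could not be promised:]

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* Sketch of a (modulus, argument) storage scheme, as the standard
     * would permit.  The part designators become computed values. */
    struct polar { double mod, arg; };

    static struct polar to_polar(double re, double im)
    {
        struct polar p = { hypot(re, im), atan2(im, re) };
        return p;
    }

    static double part_re(struct polar p) { return p.mod * cos(p.arg); }
    static double part_im(struct polar p) { return p.mod * sin(p.arg); }

    int main(void)
    {
        /* (3,4) has an exactly representable modulus (5), but the
         * argument atan2(4,3) is irrational, so recovering the parts
         * goes through rounded trig: the round trip is close, but the
         * standard could not promise it is exact. */
        struct polar p = to_polar(3.0, 4.0);
        printf("re = %.17g (error %g)\n", part_re(p), part_re(p) - 3.0);
        printf("im = %.17g (error %g)\n", part_im(p), part_im(p) - 4.0);
        assert(fabs(part_re(p) - 3.0) < 1e-12);
        assert(fabs(part_im(p) - 4.0) < 1e-12);
        return 0;
    }
    ```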

  • From Steven G. Kargl@21:1/5 to pehache on Thu Nov 16 20:01:02 2023
    On Thu, 16 Nov 2023 11:17:31 +0000, pehache wrote:

    On 16/11/2023 at 02:28, gah4 wrote:
    On 11/4/23 2:21 PM, Steven G. Kargl wrote:
    The SCALE intrinsic allows one to change the floating point exponent
    for a REAL entity.
    For example,

    program foo
    real x
    x = 1
    print *, scale(x,1) ! print 2
    end program

    This scaling does not incur a floating point rounding error.

    Question. Anyone know why the Fortran standard (aka J3) restricted X
    to be a REAL entity? It would seem that X could be COMPLEX with
    obvious equivalence of

    SCALE(X,N) = CMPLX(SCALE(X%RE,N), SCALE(X%IM,N), KIND(X%IM))

    Should the Fortran standard be amended?


    Wow, no answer yet.

    It does seem that sometimes Fortran is slow to add features, especially
    when need for them isn't shown.

    The reason may be that the standard doesn't specify how a complex
    number is internally represented. In practice it is always represented
    by a pair (real, imag), but nothing would prevent a compiler from
    representing it as (modulus, argument), for instance. Given that, the
    standard cannot guarantee the absence of rounding errors.

    You are correct that the Fortran standard does not specify
    internal details, and this could be extended to COMPLEX.
    It would, however, be quite strange for a Fortran vendor to
    use magnitude and phase, given that the Fortran standard
    quite often refers to the real and imaginary parts of a COMPLEX
    entity. Not to mention, the Fortran standard has introduced:

    3.60.1
    complex part designator

    9.4.4 Complex parts

    R915 complex-part-designator is designator % RE
    or designator % IM

    PS: If a Fortran vendor used magnitude and phase, then the vendor
    would need to specify a sign convention for the phasor. I'm not
    aware of any vendor that does.

    --
    steve

  • From pehache@21:1/5 to All on Thu Nov 16 23:51:52 2023
    On 16/11/2023 at 21:01, Steven G. Kargl wrote:

    The reason may be that the standard doesn't specify how a complex
    number is internally represented. In practice it is always represented
    by a pair (real, imag), but nothing would prevent a compiler from
    representing it as (modulus, argument), for instance. Given that, the
    standard cannot guarantee the absence of rounding errors.

    You are correct that the Fortran standard does not specify
    internal details, and this could be extended to COMPLEX.
    It would however be quite strange for a Fortran vendor to
    use magnitude and phase

    I fully agree that it would be strange, and I can't see any advantage to
    such an implementation. Yet, it is not prohibited by the standard.

    given that the Fortran standard does
    quite often refer to the real and imaginary parts of a COMPLEX
    entity.

    Yes, but it's at the conceptual level

    Not to mention, the Fortran standard has introduced:

    3.60.1
    complex part designator

    9.4.4 Complex parts

    R915 complex-part-designator is designator % RE
    or designator % IM

    Yes again, but under the hood c%re and c%im could be the computed
    values m*cos(p) and m*sin(p). And on assignment c%re = <expr> or
    c%im = <expr>, the (m,p) pair could be fully recomputed.


    PS: If a Fortran vendor used magnitude and phase, then the vendor
    would need to specify a sign convention for the phasor. I'm not
    aware of any vendor that does.

    I don't think so, as the phase component would not be directly
    accessible by the user. The vendor could choose any convention as long
    as the whole internal machinery is consistent; it could also choose to
    store a scaled version of the phase in order to get better accuracy...


    --
    "...be open to other people's ideas, provided they go in the same
    direction as yours.", ST on fr.bio.medecine
    ST breaks the stupidity barrier: <j3nn2hFmqj7U1@mid.individual.net>

  • From Thomas Koenig@21:1/5 to pehache on Fri Nov 17 06:48:54 2023
    pehache <pehache.7@gmail.com> wrote:
    On 16/11/2023 at 02:28, gah4 wrote:
    On 11/4/23 2:21 PM, Steven G. Kargl wrote:
    The SCALE intrinsic allows one to change the
    floating point exponent for a REAL entity.
    For example,

    program foo
    real x
    x = 1
    print *, scale(x,1) ! print 2
    end program

    This scaling does not incur a floating point
    rounding error.

    Question. Anyone know why the Fortran standard (aka J3)
    restricted X to be a REAL entity? It would seem that X
    could be COMPLEX with obvious equivalence of

    SCALE(X,N) = CMPLX(SCALE(X%RE,N), SCALE(X%IM,N), KIND(X%IM))

    Should the Fortran standard be amended?


    Wow, no answer yet.

    It does seem that sometimes Fortran is slow to add features, especially
    when need for them isn't shown.

    The reason may be that the standard doesn't specify how a complex
    number is internally represented.

    I disagree almost entirely.

    Subclause 19.6.5 of F2018, "Events that cause variables to become
    defined" has

    (13) When a default complex entity becomes defined, all partially
    associated default real entities become defined.

    (14) When both parts of a default complex entity become defined as
    a result of partially associated default real or default complex
    entities becoming defined, the default complex entity becomes
    defined.

    Which means that something like

    real :: a(2)
    complex :: c
    equivalence (a,c)

    allows you to set values for a(1) and a(2) and you can expect the
    components of c to get the corresponding values.

    This is important for FFT.

    Now, you might argue that the compiler can invoke "as if", but there
    is no practical way to use any other complex representation.
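    [For what it's worth, C makes explicit the layout that Fortran's storage
    association rules imply: since C99, a complex type has the same
    representation as an array of two corresponding reals, real part first.
    A small demonstration:]

    ```c
    #include <assert.h>
    #include <complex.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* The C analogue of the EQUIVALENCE above: copy two floats into
         * a float complex and read the parts back.  C guarantees this
         * layout; the Fortran rules quoted above force the same
         * conclusion for default COMPLEX in practice. */
        float a[2] = { 1.5f, -2.5f };
        float complex c;
        memcpy(&c, a, sizeof c);
        assert(crealf(c) == 1.5f);
        assert(cimagf(c) == -2.5f);
        printf("c = (%g, %g)\n", crealf(c), cimagf(c));
        return 0;
    }
    ```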

  • From David Jones@21:1/5 to pehache on Fri Nov 17 08:18:57 2023
    pehache wrote:

    On 16/11/2023 at 21:01, Steven G. Kargl wrote:

    The reason may be that the standard doesn't specify how a
    complex number is internally represented. In practice it is
    always represented by a pair (real, imag), but nothing would
    prevent a compiler from representing it as (modulus, argument),
    for instance. Given that, the standard cannot guarantee the
    absence of rounding errors.

    You are correct that the Fortran standard does not specify
    internal details, and this could be extended to COMPLEX.
    It would however be quite strange for a Fortran vendor to
    use magnitude and phase

    I fully agree that it would be strange, and I can't see any advantage
    to such an implementation. Yet, it is not prohibited by the standard.

    given that the Fortran standard does
    quite often refer to the real and imaginary parts of a COMPLEX
    entity.

    Yes, but it's at the conceptual level

    Not to mention, the Fortran standard has introduced:

    3.60.1
    complex part designator

    9.4.4 Complex parts

    R915 complex-part-designator is designator % RE
    or designator % IM

    Yes again, but under the hood c%re and c%im could be the computed
    values m*cos(p) and m*sin(p). And on assignment c%re = <expr> or
    c%im = <expr>, the (m,p) pair could be fully recomputed.


    PS: If a Fortran vendor used magnitude and phase, then the vendor
    would need to specify a sign convention for the phasor. I'm not
    aware of any vendor that does.

    I don't think so, as the phase component would not be directly
    accessible by the user. The vendor could choose any convention as
    long as the whole internal machinery is consistent; it could also
    choose to store a scaled version of the phase in order to get
    better accuracy...

    There seems no reason why the standard might not be extended to allow
    the two different representations of complex variables to exist in the
    same program, as separate data types, and to interact when required.
    Two major questions are:

    (i) whether there are any applications that would be more readily and
    usefully programmed using the modulus-phase representation?

    (ii) the relative speed of both addition and multiplication in the two
    representations?

  • From pehache@21:1/5 to All on Fri Nov 17 09:22:53 2023
    On 17/11/2023 at 07:48, Thomas Koenig wrote:
    pehache <pehache.7@gmail.com> wrote:
    On 16/11/2023 at 02:28, gah4 wrote:
    On 11/4/23 2:21 PM, Steven G. Kargl wrote:
    The SCALE intrinsic allows one to change the
    floating point exponent for a REAL entity.
    For example,

    program foo
    real x
    x = 1
    print *, scale(x,1) ! print 2
    end program

    This scaling does not incur a floating point
    rounding error.

    Question. Anyone know why the Fortran standard (aka J3)
    restricted X to be a REAL entity? It would seem that X
    could be COMPLEX with obvious equivalence of

    SCALE(X,N) = CMPLX(SCALE(X%RE,N), SCALE(X%IM,N), KIND(X%IM))

    Should the Fortran standard be amended?


    Wow, no answer yet.

    It does seem that sometimes Fortran is slow to add features, especially
    when need for them isn't shown.

    The reason may be that the standard doesn't specify how a complex
    number is internally represented.

    I disagree almost entirely.

    Subclause 19.6.5 of F2018, "Events that cause variables to become
    defined" has

    (13) When a default complex entity becomes defined, all partially
    associated default real entities become defined.

    (14) When both parts of a default complex entity become defined as
    a result of partially associated default real or default complex
    entities becoming defined, the default complex entity becomes
    defined.

    Which means that something like

    real :: a(2)
    complex :: c
    equivalence (a,c)

    allows you to set values for a(1) and a(2) and you can expect the
    components of c to get the corresponding values.

    I almost entirely disagree with your almost entire disagreement :)

    The standard requires the complex type to occupy 2 storage units, which
    allows the above equivalence, and the above clause says that a complex
    is made of two adjacent reals. However, it does not say precisely what
    a(1) and a(2) are: they could be the modulus and the phase (or the
    imaginary part and the real part, in that order).


    This is important for FFT.

    We are all relying on the fact that in your equivalence above, a(1) is
    the real part and a(2) is the imaginary part. All compilers follow this
    convention, and nobody would "buy" a compiler that followed another one.
    Nonetheless, this is just a convention; it is not enforced by the
    standard.

    --
    "...be open to other people's ideas, provided they go in the same
    direction as yours.", ST on fr.bio.medecine
    ST breaks the stupidity barrier: <j3nn2hFmqj7U1@mid.individual.net>

  • From Thomas Koenig@21:1/5 to David Jones on Sun Nov 19 13:28:18 2023
    David Jones <dajhawkxx@nowherel.com> wrote:

    There seems no reason why the standard might not be extended to allow
    the two different representations of complex variables to exist in the
    same program, as separate data types, and to interact when required.
    Two major questions are:

    (i) whether there are any applications that would be more readily and
    usefully programmed using the modulus-phase representation?

    (ii) the relative speed of both addition and multiplication in the two
    representations?

    Multiplication and especially division would likely be faster - you
    would have to multiply (or divide) the two moduli and add (or subtract)
    the phases, normalizing the phase to lie between 0 and 2*pi.

    However, the normalization step can have unintended execution
    speed consequences if the processor implements it via branches,
    and branches can be quite expensive if mispredicted.

    _Addition_ is very expensive indeed in polar notation. You have
    to compute the sin() and cos() of each number, add the resulting
    rectangular parts, and then call atan2() (with a normalization) and
    recompute the modulus to get back to the polar representation.

    If you're doing a lot of multiplication, and not a lot of addition,
    that could actually pay off.
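    [The cost asymmetry can be sketched in C; struct polar and the pmul and
    padd helpers below are hypothetical illustrations, not any vendor's
    representation. Multiplication stays in polar form with no trig calls,
    while addition must round-trip through rectangular form:]

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    static const double TWO_PI = 6.283185307179586;

    struct polar { double mod, arg; };

    /* Multiplication in polar form: one multiply, one add, and the
     * angle normalization -- no trig calls at all. */
    static struct polar pmul(struct polar a, struct polar b)
    {
        struct polar r = { a.mod * b.mod, a.arg + b.arg };
        if (r.arg >= TWO_PI)            /* the branch mentioned above */
            r.arg -= TWO_PI;
        return r;
    }

    /* Addition in polar form: two sin/cos pairs, a hypot, and an
     * atan2 -- this is the expensive path. */
    static struct polar padd(struct polar a, struct polar b)
    {
        double re = a.mod * cos(a.arg) + b.mod * cos(b.arg);
        double im = a.mod * sin(a.arg) + b.mod * sin(b.arg);
        struct polar r = { hypot(re, im), atan2(im, re) };
        return r;
    }

    int main(void)
    {
        struct polar a = { 3.0, 0.5 }, b = { 2.0, 0.25 };
        struct polar p = pmul(a, b), s = padd(a, b);
        /* (3 at 0.5) * (2 at 0.25) = (6 at 0.75), exactly, since the
         * moduli and phases here are exactly representable. */
        assert(p.mod == 6.0 && p.arg == 0.75);
        printf("product: %g at angle %g\n", p.mod, p.arg);
        printf("sum:     %g at angle %g\n", s.mod, s.arg);
        return 0;
    }
    ```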

  • From Steven G. Kargl@21:1/5 to Thomas Koenig on Sun Nov 19 16:03:22 2023
    On Sun, 19 Nov 2023 13:28:18 +0000, Thomas Koenig wrote:

    David Jones <dajhawkxx@nowherel.com> wrote:

    There seems no reason why the standard might not be extended to allow
    the two different representations of complex variables to exist in the
    same program, as separate data types, and to interact when required.
    Two major questions are:

    (i) whether there are any applications that would be more readily and
    usefully programmed using the modulus-phase representation?

    (ii) the relative speed of both addition and multiplication in the two
    representations?

    Multiplication and especially division would likely be faster - you
    would have to multiply (or divide) the two moduli and add (or subtract)
    the phases, normalizing the phase to lie between 0 and 2*pi.

    However, the normalization step can have unintended execution speed
    consequences if the processor implements it via branches, and branches
    can be quite expensive if mispredicted.

    _Addition_ is very expensive indeed in polar notation. You have to
    compute the sin() and cos() of each number, add the resulting
    rectangular parts, and then call atan2() (with a normalization) and
    recompute the modulus to get back to the polar representation.

    If you're doing a lot of multiplication, and not a lot of addition,
    that could actually pay off.

    If a vendor used magnitude and phase as the internal representation,
    then that vendor would not be around very long. Consider cmplx(0,1).
    The magnitude is easy. It is 1. Mathematically, the phase is
    pi/2, which is of course not exactly representable.

    % tlibm acos -f -a 0.
    x = 0.00000000e+00f, /* 0x00000000 */
    libm = 1.57079637e+00f, /* 0x3fc90fdb */
    mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
    ULP = 0.36668
    % tlibm cos -f -a 1.57079637
    x = 1.57079625e+00f, /* 0x3fc90fda */
    libm = 7.54979013e-08f, /* 0x33a22169 */
    mpfr = 7.54979013e-08f, /* 0x33a22169 */
    ULP = 0.24138

    7.549... is significantly different from 0.
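    [The same observation in portable C, without tlibm: the float nearest
    to pi/2 is not pi/2, so taking cos() of the stored phase of cmplx(0,1)
    cannot give back exactly 0:]

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Round pi/2 to single precision, as a (magnitude, phase)
         * representation of cmplx(0,1) would have to. */
        float half_pi = (float)(3.14159265358979323846 / 2.0);

        /* cos() of the rounded phase is tiny but nonzero, so the real
         * part of cmplx(0,1) could never be recovered as exactly 0. */
        double c = cos((double)half_pi);
        printf("cos(float pi/2) = %g\n", c);
        assert(c != 0.0);
        assert(fabs(c) < 1e-6);
        return 0;
    }
    ```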

    --
    steve

  • From David Jones@21:1/5 to Steven G. Kargl on Sun Nov 19 18:45:24 2023
    Steven G. Kargl wrote:

    On Sun, 19 Nov 2023 13:28:18 +0000, Thomas Koenig wrote:

    David Jones <dajhawkxx@nowherel.com> schrieb:

    There seems no reason why the standard might not be extended to
    allow the two different representations of complex variables to
    exist in the same program, as separate data types, and to interact
    when required. Two major questions are:

    (i) whether there are any applications that would be more readily
    and usefully programmed using the modulus-phase representation?

    (ii) the relative speed of both addition and multiplication in the
    two representations?

    Multiplication and especially division would likely be faster - you
    would have to multiply (or divide) the two moduli and add (or
    subtract) the phases, normalizing the phase to lie between 0 and 2*pi.

    However, the normalization step can have unintended execution speed
    consequences if the processor implements it via branches, and
    branches can be quite expensive if mispredicted.

    Addition is very expensive indeed in polar notation. You have to
    compute the sin() and cos() of each number, add the resulting
    rectangular parts, and then call atan2() (with a normalization) and
    recompute the modulus to get back to the polar representation.

    If you're doing a lot of multiplication, and not a lot of addition,
    that could actually pay off.

    If a vendor used magnitude and phase as the internal representation,
    then that vendor would not be around very long. Consider cmplx(0,1).
    The magnitude is easy. It is 1. Mathematically, the phase is
    pi/2, which is of course not exactly representable.

    % tlibm acos -f -a 0.
    x = 0.00000000e+00f, /* 0x00000000 */
    libm = 1.57079637e+00f, /* 0x3fc90fdb */
    mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
    ULP = 0.36668
    % tlibm cos -f -a 1.57079637
    x = 1.57079625e+00f, /* 0x3fc90fda */
    libm = 7.54979013e-08f, /* 0x33a22169 */
    mpfr = 7.54979013e-08f, /* 0x33a22169 */
    ULP = 0.24138

    7.549... is significantly different from 0.

    If it were worth doing, the obvious thing to do would be to use a
    formulation where you store a multiple of pi or 2*pi as the effective
    argument, with computations done to respect a standard range.

  • From Thomas Koenig@21:1/5 to David Jones on Sun Nov 19 22:00:29 2023
    David Jones <dajhawkxx@nowherel.com> wrote:
    Steven G. Kargl wrote:

    On Sun, 19 Nov 2023 13:28:18 +0000, Thomas Koenig wrote:

    David Jones <dajhawkxx@nowherel.com> wrote:

    There seems no reason why the standard might not be extended to
    allow the two different representations of complex variables to
    exist in the same program, as separate data types, and to interact
    when required. Two major questions are:

    (i) whether there are any applications that would be more readily
    and usefully programmed using the modulus-phase representation?

    (ii) the relative speed of both addition and multiplication in the
    two representations?

    Multiplication and especially division would likely be faster - you
    would have to multiply (or divide) the two moduli and add (or
    subtract) the phases, normalizing the phase to lie between 0 and 2*pi.

    However, the normalization step can have unintended execution speed
    consequences if the processor implements it via branches, and
    branches can be quite expensive if mispredicted.

    Addition is very expensive indeed in polar notation. You have to
    compute the sin() and cos() of each number, add the resulting
    rectangular parts, and then call atan2() (with a normalization) and
    recompute the modulus to get back to the polar representation.

    If you're doing a lot of multiplication, and not a lot of addition,
    that could actually pay off.

    If a vendor used magnitude and phase as the internal representation,
    then that vendor would not be around very long. Consider cmplx(0,1).
    The magnitude is easy. It is 1. Mathematically, the phase is
    pi/2, which is of course not exactly representable.

    % tlibm acos -f -a 0.
    x = 0.00000000e+00f, /* 0x00000000 */
    libm = 1.57079637e+00f, /* 0x3fc90fdb */
    mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
    ULP = 0.36668
    % tlibm cos -f -a 1.57079637
    x = 1.57079625e+00f, /* 0x3fc90fda */
    libm = 7.54979013e-08f, /* 0x33a22169 */
    mpfr = 7.54979013e-08f, /* 0x33a22169 */
    ULP = 0.24138

    7.549... is significantly different from 0.

    If it were worth doing, the obvious thing to do would be to use a
    formulation where you store a multiple of pi or 2*pi as the effective
    argument, with computations done to respect a standard range.

    It could also make sense to use a fixed-point representation for
    the phase; having extra accuracy around zero, as floating-point
    numbers do, may not be a large advantage.

    The normalization step could then be a simple "and", masking
    away the top bits.

    This is, however, more along the lines of what a user-defined
    complex type could look like, not what Fortran compilers could
    reasonably provide :-)
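    [A sketch of that idea in C, with a hypothetical phase32 type: with one
    full turn equal to 2^32, unsigned wraparound performs the range
    reduction for free, with no branch and no explicit masking "and" when
    the word size matches:]

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Fixed-point phase: let one full turn (2*pi radians) be 2^32
     * units, so the phase lives in a uint32_t.  Unsigned arithmetic
     * wraps modulo 2^32, so normalization costs nothing at all. */
    typedef uint32_t phase32;

    static phase32 phase_add(phase32 a, phase32 b)
    {
        return a + b;              /* wraps mod 2^32 automatically */
    }

    int main(void)
    {
        phase32 quarter = UINT32_C(0x40000000);   /* pi/2   */
        phase32 half    = UINT32_C(0x80000000);   /* pi     */
        phase32 three_q = UINT32_C(0xC0000000);   /* 3*pi/2 */

        /* pi/2 + 3*pi/2 == 2*pi == 0: wraparound is the reduction. */
        assert(phase_add(quarter, three_q) == 0);

        /* 3*pi/2 + pi == pi/2 (mod 2*pi). */
        assert(phase_add(three_q, half) == quarter);
        printf("ok\n");
        return 0;
    }
    ```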

  • From David Jones@21:1/5 to Thomas Koenig on Mon Nov 20 10:41:17 2023
    Thomas Koenig wrote:

    David Jones <dajhawkxx@nowherel.com> wrote:
    Steven G. Kargl wrote:

    On Sun, 19 Nov 2023 13:28:18 +0000, Thomas Koenig wrote:

    David Jones <dajhawkxx@nowherel.com> wrote:

    There seems no reason why the standard might not be extended to
    allow the two different representations of complex variables to
    exist in the same program, as separate data types, and to interact
    when required. Two major questions are:

    (i) whether there are any applications that would be more readily
    and usefully programmed using the modulus-phase representation?

    (ii) the relative speed of both addition and multiplication in the
    two representations?

    Multiplication and especially division would likely be faster - you
    would have to multiply (or divide) the two moduli and add (or
    subtract) the phases, normalizing the phase to lie between 0 and 2*pi.

    However, the normalization step can have unintended execution speed
    consequences if the processor implements it via branches, and
    branches can be quite expensive if mispredicted.

    Addition is very expensive indeed in polar notation. You have to
    compute the sin() and cos() of each number, add the resulting
    rectangular parts, and then call atan2() (with a normalization) and
    recompute the modulus to get back to the polar representation.

    If you're doing a lot of multiplication, and not a lot of addition,
    that could actually pay off.

    If a vendor used magnitude and phase as the internal representation,
    then that vendor would not be around very long. Consider cmplx(0,1).
    The magnitude is easy. It is 1. Mathematically, the phase is pi/2,
    which is of course not exactly representable.
    % tlibm acos -f -a 0.
    x = 0.00000000e+00f, /* 0x00000000 */
    libm = 1.57079637e+00f, /* 0x3fc90fdb */
    mpfr = 1.57079637e+00f, /* 0x3fc90fdb */
    ULP = 0.36668
    % tlibm cos -f -a 1.57079637
    x = 1.57079625e+00f, /* 0x3fc90fda */
    libm = 7.54979013e-08f, /* 0x33a22169 */
    mpfr = 7.54979013e-08f, /* 0x33a22169 */
    ULP = 0.24138

    7.549... is significantly different from 0.

    If it were worth doing, the obvious thing to do would be to use a
    formulation where you store a multiple of pi or 2*pi as the effective
    argument, with computations done to respect a standard range.

    It could also make sense to use a fixed-point representation for
    the phase; having extra accuracy around zero, as floating-point
    numbers do, may not be a large advantage.

    The normalization step could then be a simple "and", masking
    away the top bits.

    This is, however, more along the lines of what a user-defined
    complex type could look like, not what Fortran compilers could
    reasonably provide :-)

    Any extension to the existing standard is at least possible. But the
    real question is whether there are enough (or any at all) applications
    that require only (or mainly) complex multiplications as opposed to
    additions. I can't think of any.
