• Re: Fortran and Climate Models

    From cjcoats@21:1/5 to Clive Page on Tue Feb 15 09:31:20 2022
    On Monday, March 15, 2021 at 12:41:10 PM UTC-4, Clive Page wrote:
    > [snip...]
    > Many of these models have been around for a long time, I gather, so
    > the parallelism must have been added fairly recently. So why MPI
    > rather than co-arrays?

    For meteorology models, the parallelism was added in the late 1980s and early 1990s.

    Moreover, the thermodynamics/cloud/convection/chemistry computations (which are most of the work, FWIW) are quite intricate calculations at each individual grid cell or vertical grid-column (frequently with internal time steps), and they do not translate cleanly into co-array form.
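
    To illustrate, here is a toy sketch (invented names, made-up "physics")
    of the shape of such a computation: the columns are independent of each
    other, so the natural decomposition is "give each MPI rank a block of
    columns", while the sub-stepped loop inside each column is inherently
    serial, and nothing in it is a whole-array operation that co-array
    syntax would capture cleanly.

      module column_physics_mod
        implicit none
      contains
        subroutine column_physics(ncols, nlev, t, q, dt)
          integer, intent(in)    :: ncols, nlev
          real,    intent(inout) :: t(nlev, ncols), q(nlev, ncols)
          real,    intent(in)    :: dt
          integer :: ic, istep, k, nsub
          real    :: dtsub, dq
          do ic = 1, ncols              ! independent columns: each MPI
                                        ! rank owns a block of them
            nsub  = 1 + int(10.0 * maxval(abs(q(:, ic))))  ! toy stability test
            dtsub = dt / real(nsub)
            do istep = 1, nsub          ! internal time steps: serial
              do k = 1, nlev
                dq = 0.1 * q(k, ic) * dtsub     ! toy condensation rate
                q(k, ic) = q(k, ic) - dq
                t(k, ic) = t(k, ic) + 2.5 * dq  ! toy latent heating
              end do
            end do
          end do
        end subroutine column_physics
      end module column_physics_mod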

    FWIW

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Beliavsky@21:1/5 to cjcoats on Tue Feb 15 11:22:56 2022
    On Tuesday, February 15, 2022 at 12:31:22 PM UTC-5, cjcoats wrote:
    > On Monday, March 15, 2021 at 12:41:10 PM UTC-4, Clive Page wrote:
    > > [snip...]
    > > Many of these models have been around for a long time, I gather, so
    > > the parallelism must have been added fairly recently. So why MPI
    > > rather than co-arrays?
    >
    > For meteorology models, the parallelism was added in the late 1980s
    > and early 1990s.
    >
    > [snip...]

    A recent Wall Street Journal article that mentions the Fortran code CESM2 is excerpted at Fortran Discourse https://fortran-lang.discourse.group/t/climate-scientists-encounter-limits-of-computer-models-bedeviling-policy/2756 .

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn McGuire@21:1/5 to cjcoats on Tue Feb 15 15:22:42 2022
    On 2/15/2022 11:31 AM, cjcoats wrote:
    > On Monday, March 15, 2021 at 12:41:10 PM UTC-4, Clive Page wrote:
    > > [snip...]
    > > Many of these models have been around for a long time, I gather, so
    > > the parallelism must have been added fairly recently. So why MPI
    > > rather than co-arrays?
    >
    > For meteorology models, the parallelism was added in the late 1980s
    > and early 1990s.
    >
    > [snip...]

    We have the same problem in our chemical process simulator. The
    saturation calculations rule everything in the four phases (vapor,
    liquid hydrocarbon, aqueous liquid, solid) and are not very parallelizable.

    Lynn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Lynn McGuire on Wed Feb 16 09:15:29 2022
    Lynn McGuire <lynnmcguire5@gmail.com> schrieb:
    > [snip...]
    >
    > We have the same problem in our chemical process simulator. The
    > saturation calculations rule everything in the four phases (vapor,
    > liquid hydrocarbon, aqueous liquid, solid) and are not very
    > parallelizable.

    I can well imagine, especially if you are doing iterative calculations
    of your equilibria.

    (Side remark: I do hope your program converges better than Aspen Plus
    does. If you use a converged solution as a starting value there, that
    program will likely diverge. I have no polite words for that kind of
    numerics.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn McGuire@21:1/5 to Thomas Koenig on Wed Feb 16 14:05:26 2022
    On 2/16/2022 3:15 AM, Thomas Koenig wrote:
    > [snip...]
    >
    > I can well imagine, especially if you are doing iterative calculations
    > of your equilibria.
    >
    > (Side remark: I do hope your program converges better than Aspen Plus
    > does. If you use a converged solution as a starting value there, that
    > program will likely diverge. I have no polite words for that kind of
    > numerics.)

    The problems happen when a solution ends up in multiple phases and one
    of the components is a strongly polar compound like water and/or an
    alcohol. We fight with that all the time.

    Lynn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Lynn McGuire on Wed Feb 16 20:47:28 2022
    Lynn McGuire <lynnmcguire5@gmail.com> schrieb:
    > [snip...]
    >
    > The problems happen when a solution ends up in multiple phases and one
    > of the components is a strongly polar compound like water and/or an
    > alcohol. We fight with that all the time.

    With Aspen, it is more like every time you have a recycle stream :-(
    I am glad I no longer work with that program (and I only used it
    a few times).

    However, it is still not clear what goes wrong. Surely, if your
    flash calculation has converged so that energy and mass balances
    and equilibria are satisfied, you should be able to test for that?

    Or is it a problem with derivatives that you try to calculate where
    one point is inside the flash regime and the other one outside?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn McGuire@21:1/5 to Thomas Koenig on Wed Feb 16 16:06:24 2022
    On 2/16/2022 2:47 PM, Thomas Koenig wrote:
    > [snip...]
    >
    > However, it is still not clear what goes wrong. Surely, if your
    > flash calculation has converged so that energy and mass balances
    > and equilibria are satisfied, you should be able to test for that?
    >
    > Or is it a problem with derivatives that you try to calculate where
    > one point is inside the flash regime and the other one outside?

    Yes, the worst is when one of the phases is on the knife edge of a phase
    change, usually water: more water than normal is in the vapor phase and
    the rest of the water is in the aqueous liquid phase. Getting that to
    converge is a trick, and it can cause recycles to spin out of their
    convergence zone as the solution flip-flops between the phases. We have
    tried to modify our software so that a bad initialization is OK, but
    there are always edge cases where the solution is backed into a corner.

    I equate it to a convergence surface. The surface is not flat; there
    are hills and valleys. If a solver heads into a valley, there may not
    be a good solution in that valley. The good solution might be in the
    next valley, or five valleys over. The solver must be able to force
    the calculations over the next hill, or the next five hills. That is
    not trivial when you are working with nested solvers. And some hills
    and valleys look like west Texas, and some look like the Himalayas.

    Lynn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Lynn McGuire on Thu Feb 17 07:33:02 2022
    Lynn McGuire <lynnmcguire5@gmail.com> schrieb:
    > [snip...]
    >
    > Yes, the worst is when one of the phases is on the knife edge of a phase
    > change, usually water: more water than normal is in the vapor phase and
    > the rest of the water is in the aqueous liquid phase. Getting that to
    > converge is a trick, and it can cause recycles to spin out of their
    > convergence zone as the solution flip-flops between the phases. We have
    > tried to modify our software so that a bad initialization is OK, but
    > there are always edge cases where the solution is backed into a corner.

    [Aspen is _far_ worse than what you describe]

    I can imagine that this could be the case, especially if you have
    termination errors of an iterative solver for the phase distribution.

    One suggestion (from afar, and you may have tried this already):
    if the calculation is right on the boundary of a phase change, you
    could calculate the liquid phase only; the equations will work fine
    if you simply assume there is no gas phase. Once that has converged,
    you could check whether the total vapor pressure is larger than the
    pressure in the process unit you are looking at, and if it is,
    calculate the vapor phase under the assumption that there is one.
    For a suitably small amount of vapor, you can also assume that the
    concentrations in your liquid phase do not change, so the vapor
    composition is fixed.

    If you want to take this a little further, you might also be able
    to do a calculation for a small amount of vapor, assuming a linear
    change in concentration and temperature in your liquid phase with
    vapor fraction or pressure or... This could be somewhat easier if
    your thermodynamic quantities have explicit derivatives, which
    I obviously don't know.
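
    A deliberately idealized sketch of that first check (invented names;
    Raoult's law and Antoine-type vapor pressures stand in for whatever
    the real simulator's EOS would provide):

      module bubble_check_mod
        implicit none
      contains
        ! Antoine-type vapor pressure of one component (coefficients given).
        pure function psat(a, b, c, t) result(p)
          real, intent(in) :: a, b, c, t
          real :: p
          p = exp(a - b / (t + c))
        end function psat

        ! True if a converged liquid of composition x would begin to flash
        ! at pressure p: compare the ideal bubble pressure with p.
        pure function vapor_forms(x, a, b, c, t, p) result(flashes)
          real, intent(in) :: x(:), a(:), b(:), c(:), t, p
          logical :: flashes
          real    :: pbub
          integer :: i
          pbub = 0.0
          do i = 1, size(x)
            pbub = pbub + x(i) * psat(a(i), b(i), c(i), t)
          end do
          flashes = (pbub > p)
          ! For a small amount of vapor, y(i) = x(i)*psat(i)/pbub with x
          ! held fixed -- the "vapor composition is fixed" simplification.
        end function vapor_forms
      end module bubble_check_mod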

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn McGuire@21:1/5 to Thomas Koenig on Thu Feb 17 16:45:23 2022
    On 2/17/2022 1:33 AM, Thomas Koenig wrote:
    > [snip...]
    >
    > If you want to take this a little further, you might also be able
    > to do a calculation for a small amount of vapor, assuming a linear
    > change in concentration and temperature in your liquid phase with
    > vapor fraction or pressure or... This could be somewhat easier if
    > your thermodynamic quantities have explicit derivatives, which
    > I obviously don't know.

    Nope, we use an iterative interpolation method for the adiabatic and
    isentropic flashes. For the isothermal flash we try to solve the knife
    edge before falling back to the interpolation method. For the constant
    volume flashes we use the interpolation method after the modified
    Wegstein fails (I call it the brute force method).
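
    For readers who haven't met it: Wegstein's method accelerates a
    direct-substitution loop for a fixed point x = g(x) using a secant
    estimate of g's slope. The "modified" variant above isn't spelled out
    here, so this is only a sketch of the textbook bounded form, for a
    scalar tear variable (real recycle loops are vector-valued):

      module wegstein_mod
        implicit none
      contains
        ! Accelerated fixed-point iteration for x = g(x), scalar case.
        function wegstein(g, x0, tol, maxit) result(x)
          interface
            function g(x) result(y)
              real, intent(in) :: x
              real :: y
            end function g
          end interface
          real,    intent(in) :: x0, tol
          integer, intent(in) :: maxit
          real    :: x, xold, gx, gxold, s, q
          integer :: it

          xold  = x0
          gxold = g(xold)
          x     = gxold                          ! first step: substitution
          do it = 1, maxit
            gx = g(x)
            if (abs(gx - x) < tol) exit          ! converged
            s = 0.0
            if (abs(x - xold) > tiny(1.0)) s = (gx - gxold) / (x - xold)
            if (abs(s - 1.0) > tiny(1.0)) then
              q = max(-5.0, min(s / (s - 1.0), 0.0))  ! the usual bounds
            else
              q = 0.0                            ! fall back to substitution
            end if
            xold  = x
            gxold = gx
            x = q * x + (1.0 - q) * gx           ! Wegstein update
          end do
        end function wegstein
      end module wegstein_mod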

    We have hundreds of solvers in our software. We are Frankenstein's
    monster, built of many pieces. Our three main recycle solvers are all
    flow based. I am going to add a pressure based recycle solver to that
    group in the next year or two.

    BTW, we have 60 different equations of state in our software. The
    details of each EOS are hidden from the recycle and flash solvers;
    otherwise they would be trying to do too much.

    Lynn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Lynn McGuire on Fri Feb 18 20:35:34 2022
    Lynn McGuire <lynnmcguire5@gmail.com> schrieb:

    > Nope, we use an iterative interpolation method for the adiabatic and
    > isentropic flashes. For the isothermal flash we try to solve the knife
    > edge before falling back to the interpolation method. For the constant
    > volume flashes we use the interpolation method after the modified
    > Wegstein fails (I call it the brute force method).

    This is getting a bit away from Fortran, but...

    To me, a "flash" is something adiabatic: You reduce the pressure
    from above the vapor pressure to below the vapor pressure of your
    volatile components. Part of the volatile components evaporate,
    leading to the formation of a gas phase and a drop in temperature due
    to the enthalpy of evaporation (which some folks call "heat" due
    to sloppy terminology).

    Now, I can see what an isothermal flash would be: you add enough heat
    to keep the temperature constant. I wouldn't call it that, but OK.

    As for a "constant volume" flash - I'm not sure what process that I
    would still consider a flash could be isochoric; every flash I know
    expands in volume. But I guess that's all a matter of terminology.

    > We have hundreds of solvers in our software. We are Frankenstein's
    > monster, built of many pieces. Our three main recycle solvers are all
    > flow based. I am going to add a pressure based recycle solver to that
    > group in the next year or two.

    Sounds like a challenge.

    > BTW, we have 60 different equations of state in our software. The
    > details of each EOS are hidden from the recycle and flash solvers;
    > otherwise they would be trying to do too much.

    To bring this slightly back to Fortran: I think equations of
    state could profit from Fortran's object-oriented features.

    I would envisage an abstract type representing a substance, with
    type-bound procedures for calculating the state depending on
    whatever variables you want (let's say from p and T, or p and h,
    or h and s, or ...) and which would let the user inquire about
    the properties via other type-bound procedures.

    If I were to implement something like that, this is probably the
    approach I would look at first. (The likelihood of that happening is
    10^-x; it has been a few decades since I last played around with
    implementing an equation of state, back when I was still at
    university.)
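
    A minimal sketch of that idea (all names invented; a p/T state setter
    and two inquiry functions stand in for the full set): each concrete
    EOS extends the abstract type and supplies the deferred procedures,
    so a flash or recycle solver takes a class(substance) argument and
    never knows which EOS it is talking to.

      module substance_mod
        implicit none

        type, abstract :: substance
        contains
          procedure(set_pt_iface), deferred :: set_state_pt  ! state from p, T
          procedure(prop_iface),   deferred :: enthalpy      ! inquire h
          procedure(prop_iface),   deferred :: entropy       ! inquire s
        end type substance

        abstract interface
          subroutine set_pt_iface(self, p, t)
            import :: substance
            class(substance), intent(inout) :: self
            real, intent(in) :: p, t
          end subroutine set_pt_iface

          function prop_iface(self) result(v)
            import :: substance
            class(substance), intent(in) :: self
            real :: v
          end function prop_iface
        end interface
      end module substance_mod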

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn McGuire@21:1/5 to Thomas Koenig on Fri Feb 18 15:46:33 2022
    On 2/18/2022 2:35 PM, Thomas Koenig wrote:
    > [snip...]
    >
    > As for a "constant volume" flash - I'm not sure what process that I
    > would still consider a flash could be isochoric; every flash I know
    > expands in volume. But I guess that's all a matter of terminology.
    >
    > [snip...]
    >
    > To bring this slightly back to Fortran: I think equations of
    > state could profit from Fortran's object-oriented features.
    >
    > [snip...]

    We have two constant volume flashes:
    1. constant volume and constant enthalpy (T and P float)
    2. constant volume and constant temperature (H and P float)

    Constant volume flashes are used for pressure vessels with internal
    reactions or external heat energy (endothermic or exothermic).

    Most of the modern equations of state use temperature and density as
    their independent variables. They are trying to get away from using
    vapor fraction, as was done before, since the density does vary as one
    goes across phase and critical boundaries.

    Lynn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Lynn McGuire on Sat Feb 19 10:35:17 2022
    Lynn McGuire <lynnmcguire5@gmail.com> schrieb:

    > We have two constant volume flashes:
    > 1. constant volume and constant enthalpy (T and P float)
    > 2. constant volume and constant temperature (H and P float)
    >
    > Constant volume flashes are used for pressure vessels with internal
    > reactions or external heat energy (endothermic or exothermic).

    So, you use "flash" in a different sense than I do. Faiir enough,
    computer science is not the only field beset by terminology
    differences :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ron Shepard@21:1/5 to Thomas Koenig on Sat Feb 19 10:16:18 2022
    On 2/19/22 4:35 AM, Thomas Koenig wrote:
    > [snip...]
    >
    > So, you use "flash" in a different sense than I do. Fair enough,
    > computer science is not the only field beset by terminology
    > differences :-)

    In my field of computational chemistry, I hear this called the "sudden
    approximation," which is kind of the opposite extreme from the
    adiabatic approximation. It shows up in all kinds of areas, from
    light-matter interactions to thermodynamics to scattering theory.

    https://en.wikipedia.org/wiki/Adiabatic_theorem

    $.02 -Ron Shepard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lynn McGuire@21:1/5 to Thomas Koenig on Mon Feb 21 14:52:47 2022
    On 2/19/2022 4:35 AM, Thomas Koenig wrote:
    > [snip...]
    >
    > So, you use "flash" in a different sense than I do. Fair enough,
    > computer science is not the only field beset by terminology
    > differences :-)

    A flash to me is taking a known mixture of components and deciding
    which of the four phases they belong in, given any two of the pressure,
    temperature, enthalpy (H), density, or entropy (S) properties. The
    remaining properties and the transport properties can then be easily
    calculated.
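
    Expressed as code, that "any two of five" specification might look
    something like the following (a sketch with invented names; the solver
    bodies are only hinted at in comments):

      module flash_mod
        implicit none
        ! Which pair of properties is held fixed (names invented here):
        integer, parameter :: SPEC_PT = 1   ! isothermal flash
        integer, parameter :: SPEC_PH = 2   ! adiabatic flash
        integer, parameter :: SPEC_PS = 3   ! isentropic flash
        integer, parameter :: SPEC_VH = 4   ! constant V and H (T, P float)
        integer, parameter :: SPEC_VT = 5   ! constant V and T (H, P float)
      contains
        subroutine flash(spec, val1, val2, z, frac, ok)
          integer, intent(in)  :: spec        ! which pair is specified
          real,    intent(in)  :: val1, val2  ! the two fixed values
          real,    intent(in)  :: z(:)        ! overall (feed) composition
          real,    intent(out) :: frac(4)     ! vapor/HC liq/aqueous/solid
          logical, intent(out) :: ok
          select case (spec)
          case (SPEC_PT)
            ! solve the isothermal flash directly
          case default
            ! iterate an inner isothermal flash until the second
            ! specification (H, S, or V) is also satisfied
          end select
          frac = [1.0, 0.0, 0.0, 0.0]         ! placeholder phase split
          ok   = .true.
        end subroutine flash
      end module flash_mod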

    And yes, there are mixtures that exist in all four phases at a given T
    and P.

    Lynn

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Walter Spector@21:1/5 to John McCue on Sun Mar 27 10:35:14 2022
    On Sunday, March 14, 2021 at 12:10:02 PM UTC-7, John McCue wrote:
    > Hi,
    >
    > An interesting article I ran across about Fortran and Climate Models
    >
    > https://partee.io/2021/02/21/climate-model-response/

    For the most part, a good article.

    For about a dozen years prior to retiring, I worked on code used by the
    weather/climate community. We mostly used Fortran 95, an ever increasing
    amount of C++, and MPI. Why not F2003, OpenMP, or coarrays? Simple: it
    was a fairly hard requirement that our code run on a wide variety of
    compilers and hardware. It took an outrageously long time for the
    various compiler vendors to get F2003 supported - and to this day some
    still haven't. Same with co-arrays and F2008. As for OpenMP, it doesn't
    work well with large distributed-memory machines - and we had customers
    whose models needed to scale to tens of thousands of processors on such
    hardware.

    Fortran 95 works extremely well. Our code made extensive use of an
    object style of programming using derived types, modules, dynamic
    memory management, optional arguments on procedures, and so on. Pretty
    much the entire API is Fortran oriented - though many of the internals
    are in C++. Interestingly, we also had a C API. It went largely unused
    until we decided to do a Python API, which was implemented by calling
    the C API. The Python API was surprisingly popular among our user base -
    so it actually became the biggest customer of the C API.

    One of the bigger issues with Fortran was the difficulty of doing
    template-like generic programming. (Sorry, but Parameterized Derived
    Types didn't have what we needed. And PDTs are F2003, so even if the
    compiler vendors had been quicker to support F2003, we still wouldn't
    have used them as much as one might want...) Instead we often made
    'interesting' use of the C preprocessor's macro capabilities to do
    this, as sketched below. It seems like these days fypp could be used
    instead - though that is a very recent development. The old CoCo part
    of the F95 standard did not have a macro capability, so even if the
    compiler vendors had implemented CoCo, which again they didn't, it was
    still lacking in usability.
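
    A toy sketch of the idiom (not our actual code): a "template" lives in
    an include file, and each #define/#include pair stamps out one typed
    instance; the .F90 suffix makes gfortran and ifort run the C
    preprocessor. In the real file the # lines start in column one.

      ! ---- sum_template.inc : the template; NAME and TYPE are macros ----
      function NAME(a) result(s)
        TYPE, intent(in) :: a(:)
        TYPE :: s
        s = sum(a)
      end function NAME

      ! ---- generics.F90 : one instance per type, wired into a generic ----
      module generics_mod
        implicit none
        interface total
          module procedure total_real, total_int
        end interface total
      contains
      #define NAME total_real
      #define TYPE real
      #include "sum_template.inc"
      #undef NAME
      #undef TYPE
      #define NAME total_int
      #define TYPE integer
      #include "sum_template.inc"
      #undef NAME
      #undef TYPE
      end module generics_mod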

    MPI works well, but it hasn't been all roses. MPI was initially based
    on F77, so it took the MPI standards folks a long time to straighten
    out F90 interfaces and modules for ease of use and good compile-time
    checking of calls. The use of default integers for array sizes and the
    like was also a problem. MPI 3, and now it looks like MPI 4, have
    finally resolved much of this. A shame it has taken almost 30 years to
    get there.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)