• Automatic Differentiation

    From Daniel Feenberg@21:1/5 to All on Mon Apr 25 05:05:04 2022
    There are a number of hits to "fortran automatic differentiation" but the only one that claims to be available without cost is ADIFOR, from the Argonne Laboratory. On various web pages I have found 5 email addresses and 3 phone numbers. According to the
    web page, potential users should write to one of the addresses for access to the program itself.

    I have written to Paul Hovland (hovland@mcs.anl.gov) and Alan Carle (carle@rice.edu) but I have received no response from either. Another page at ANL (https://www.anl.gov/partnerships/adifor-automatic-differentiation-of-fortran-77 ) gives the 2 phone
    numbers, but neither is in service. A webpage at Rice University (http://www.crpc.rice.edu/newsletters/sum95/news.adifor.html ) adds another email address (bischof@mcs.anl.gov ), also no response there. These pages are 30 years old, so no real surprise.
    partners@anl.gov did respond, but I was only referred to the same web page that proved uninformative above.

    It seems like a very well documented and useful program. It seems a shame for it to die. I do have a large F77 program so I do expect it would be useful for me.

    Daniel Feenberg
    National Bureau of Economic Research
    Cambridge MA 02138

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Daniel Feenberg on Mon Apr 25 16:34:35 2022
    Daniel Feenberg <feenberg@gmail.com> schrieb:

    There are a number of hits to "fortran automatic differentiation"
    but the only one that claims to be available without cost is ADIFOR,
    from the Argonne Laboratory. On various web pages I have found 5
    email addresses and 3 phone numbers. According to the web page,
    potential users should write to one of the addresses for access
    to the program itself.

    What exactly do you need differentiated?

    If you want fully automated Fortran-to-Fortran conversion, then
    I don't have anything.

    If you can massage your formulas so Maxima can accept them,
    like

    (%i1) display2d:false;

    (%o1) false
    (%i2) diff(sin(x^2),x);

    (%o2) 2*x*cos(x^2)

    from which you can massage the outputs into valid Fortran.
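
    For illustration, that particular result massages into Fortran as
    something like this (the function name and interface are made up
    for the example, not produced by Maxima itself):

    ! derivative of sin(x**2), transcribed from the Maxima output above
    real function dsinx2(x)
      implicit none
      real, intent(in) :: x
      dsinx2 = 2.0*x*cos(x**2)
    end function dsinx2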

    Maple (which isn't free) can write fixed-form Fortran code.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ron Shepard@21:1/5 to Daniel Feenberg on Mon Apr 25 11:25:39 2022
    On 4/25/22 7:05 AM, Daniel Feenberg wrote:
    There are a number of hits to "fortran automatic differentiation" but the only one that claims to be available without cost is ADIFOR, from the Argonne Laboratory. On various web pages I have found 5 email addresses and 3 phone numbers. According to
    the web page, potential users should write to one of the addresses for access to the program itself.

    I have written to Paul Hovland (hovland@mcs.anl.gov) and Alan Carle (carle@rice.edu) but I have received no response from either. Another page at ANL (https://www.anl.gov/partnerships/adifor-automatic-differentiation-of-fortran-77 ) gives the 2 phone
    numbers but neither is in service. A webpage at Rice University (http://www.crpc.rice.edu/newsletters/sum95/news.adifor.html ) adds another email address (bischof@mcs.anl.gov ), also no response there. These pages are 30 years old, so no real surprise.
    partners@anl.gov did respond and was referred to the same web page that proved uninformative above.

    It seems like a very well documented and useful program. It seems a shame for it to die. I do have a large F77 program so I do expect it would be useful for me.

    Daniel Feenberg
    National Bureau of Economic Research
    Cambridge MA 02138

    The Paul Hovland contact should be active. This is still an active area
    of research for that group. The code works also with f90+ (modules,
    operators, defined types, etc.), but I do not know the current status
    regarding the most recent language versions.

    Chris Bischof moved to Aachen, Germany about 15 years ago, but I think
    he is also still active in that area of research.

    $.02 -Ron Shepard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to gah4@u.washington.edu on Mon Apr 25 20:13:34 2022
    gah4 <gah4@u.washington.edu> schrieb:
    On Monday, April 25, 2022 at 9:34:39 AM UTC-7, Thomas Koenig wrote:

    (snip)

    What exactly do you need differentiated?

    If you want fully automated Fortran-to-Fortran conversion, then
    I don't have anything.

    If you can massage your formulas so Maxima can accept them,
    like

    (snip)

    (%i2) diff(sin(x^2),x);

    (%o2) 2*x*cos(x^2)

    I read just a little of the description, which mentions functions
    and subroutines.

    Yes there are a few ways to differentiate an expression, in Fortran or not, and get the result. But now consider a whole Fortran function!

    You might have a Fortran function with loops and IFs and all, and
    desire a similar function returning the derivative of that one.
    A program (or you) could go through statement by statement,
    adding after each statement, a statement to evaluate the
    derivative of that one. Later statements will likely need both
    the value of variables, and also (from the product rule and
    chain rule) the derivative.

    With IFs, it is really hard - what should the derivative of

    if (x > 0.1) then
       foo = 1+x
    else
       foo = -1-x
    end if

    be?

    Also, you usually want many derivatives for filling a Jacobi
    matrix, which should be well-behaved, and your function should at
    least be continuous, or you will get into hot water with whatever
    you plan to do with that matrix.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to feen...@gmail.com on Mon Apr 25 13:03:42 2022
    On Monday, April 25, 2022 at 5:05:06 AM UTC-7, feen...@gmail.com wrote:
    There are a number of hits to "fortran automatic differentiation"
    but the only one that claims to be available without cost is ADIFOR,

    This reminds me of a system I knew years ago called Prose.
    It seems to be mentioned here, along with the follow-on: fortranCalculus:

    https://goal-driven.net/compiler-intro-&-history/history.html

    Prose was developed and available in the 1970s, and from the page
    above it seems to have been gone by 1980. It was 1978 when I knew about it.

    It is an interpreted language that includes the ability to keep track
    of derivatives along with each variable, and then use them in the
    included solvers, or in user-written code.

    About that time, it occurred to me that, since all Fortran compilers I
    knew of used library routines for complex multiply and divide, you
    could replace those with routines that return both the value and the
    derivative - that is, routines that apply the calculus product rule and
    quotient rule for each operation. In that case, any expression
    that you could write with +, -, *, and / would return the value and
    its derivative. I don't remember thinking about extending it to ** or
    to the built-in functions, but you could also do it for them.
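
    To make that concrete, here is a minimal sketch of the idea in
    modern Fortran: a value/derivative pair ("dual number") with
    overloaded +, -, * and / that apply the product and quotient rules.
    The module and the names in it are only illustrative, not taken
    from Prose, fortranCalculus or any other package.

    module dual_mod
      implicit none
      type :: dual
         real :: val   ! function value
         real :: der   ! derivative with respect to the chosen variable
      end type dual
      interface operator(+)
         module procedure add_d
      end interface
      interface operator(-)
         module procedure sub_d
      end interface
      interface operator(*)
         module procedure mul_d
      end interface
      interface operator(/)
         module procedure div_d
      end interface
    contains
      elemental function add_d(a, b) result(c)
        type(dual), intent(in) :: a, b
        type(dual) :: c
        c = dual(a%val + b%val, a%der + b%der)
      end function add_d
      elemental function sub_d(a, b) result(c)
        type(dual), intent(in) :: a, b
        type(dual) :: c
        c = dual(a%val - b%val, a%der - b%der)
      end function sub_d
      elemental function mul_d(a, b) result(c)
        type(dual), intent(in) :: a, b
        type(dual) :: c
        c = dual(a%val*b%val, a%der*b%val + a%val*b%der)              ! product rule
      end function mul_d
      elemental function div_d(a, b) result(c)
        type(dual), intent(in) :: a, b
        type(dual) :: c
        c = dual(a%val/b%val, (a%der*b%val - a%val*b%der)/b%val**2)   ! quotient rule
      end function div_d
    end module dual_mod

    program dual_demo
      use dual_mod
      implicit none
      type(dual) :: x, y
      x = dual(3.0, 1.0)                  ! seed the derivative: dx/dx = 1
      y = (x*x + dual(1.0, 0.0))/x        ! f(x) = (x**2 + 1)/x
      print *, y%val, y%der               ! f(3) and f'(3) = 1 - 1/9
    end program dual_demo

    Any expression built from these four operators then carries its
    derivative along automatically; ** and the built-in functions could
    be handled the same way by overloading them as well.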

    In any case, the idea for Prose, and I presume fortranCalculus
    (that seems the capitalization they use) is to keep derivatives
    along with each variable, and use them as needed, but built
    into the language.

    Otherwise, I have much fun with the TI-92 calculator, which
    has a derivative operator, and can generate symbolic derivatives
    of expressions entered.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to Thomas Koenig on Mon Apr 25 12:52:44 2022
    On Monday, April 25, 2022 at 9:34:39 AM UTC-7, Thomas Koenig wrote:

    (snip)

    What exactly do you need differentiated?

    If you want fully automated Fortran-to-Fortran conversion, then
    I don't have anything.

    If you can massage your formulas so Maxima can accept them,
    like

    (snip)

    (%i2) diff(sin(x^2),x);

    (%o2) 2*x*cos(x^2)

    I read just a little of the description, which mentions functions
    and subroutines.

    Yes there are a few ways to differentiate an expression, in Fortran or not,
    and get the result. But now consider a whole Fortran function!

    You might have a Fortran function with loops and IFs and all, and
    desire a similar function returning the derivative of that one.
    A program (or you) could go through statement by statement,
    adding after each statement, a statement to evaluate the
    derivative of that one. Later statements will likely need both
    the value of variables, and also (from the product rule and
    chain rule) the derivative.
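
    For example (an illustration with made-up variable names, not the
    output of any particular tool), three statements computing
    f = sin(x*x)/x might be augmented like this:

    t = x*x
    dtdx = 2.0*x                    ! derivative of the statement above
    u = sin(t)
    dudx = cos(t)*dtdx              ! chain rule, needs both t and dtdx
    f = u/x
    dfdx = (dudx*x - u)/x**2        ! quotient rule, needs u, dudx and x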

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to Thomas Koenig on Mon Apr 25 13:33:49 2022
    On Monday, April 25, 2022 at 1:13:37 PM UTC-7, Thomas Koenig wrote:
    gah4 <ga...@u.washington.edu> schrieb:

    (snip)

    You might have a Fortran function with loops and IFs and all, and
    desire a similar function returning the derivative of that one.
    A program (or you) could go through statement by statement,
    adding after each statement, a statement to evaluate the
    derivative of that one. Later statements will likely need both
    the value of variables, and also (from the product rule and
    chain rule) the derivative.

    With IFs, it is really hard - what should the derivative of

    if (x > 0.1) then
    foo = 1+x
    else
    foo = -1-x
    end if

    What it should do is easy, but what you do with the result is where
    it gets harder:

    if (x > 0.1) then
       foo = 1+x
       dfoodx = 1
    else
       foo = -1-x
       dfoodx = -1
    end if

    Also, you usually want many derivatives for filling a Jacobi
    matrix, which should be well-behaved, and your function should at
    least be continuous, or you will get into hot water with whatever
    you plan to do with that matrix.

    It does help to have a good understanding of the problem you are
    working in, and especially where things can go wrong.

    But usually the problem is more general, and so lies outside the derivative problem itself.

    I do remember doing much non-linear least-squares fitting, with the usual Newton's method solver. You might have a function that can only be evaluated for positive values, but the solver tries negative values anyway.

    Now, if it is using sqrt, the function value and its derivative will
    have problems when the argument goes negative.

    I do remember, though, hand-computing some derivatives to put
    into least-squares fitting programs. Automating that might have
    been nice to have.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to gah4@u.washington.edu on Mon Apr 25 21:04:58 2022
    gah4 <gah4@u.washington.edu> schrieb:

    I do remember, though, hand-computing some derivatives to put
    into least-squares fitting programs. Automating that might have
    been nice to have.

    A few decades ago, I started using REDUCE for automatically
    calculating derivatives, which I then transferred to F77 by hand,
    using an editor, to do some calculations with equations of state.

    It was nice, but the expressions became really big (naming the
    program EXPAND might have been more appropriate).

    A few years later, Maple sold site-wide licenses to all universities
    of the state of Baden-Württemberg, and I started using that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Jeff Ryman@21:1/5 to feen...@gmail.com on Mon Apr 25 16:55:12 2022
    On Monday, April 25, 2022 at 5:05:06 AM UTC-7, feen...@gmail.com wrote:
    There are a number of hits to "fortran automatic differentiation" but the only one that claims to be available without cost is ADIFOR, from the Argonne Laboratory. On various web pages I have found 5 email addresses and 3 phone numbers. According to
    the web page, potential users should write to one of the addresses for access to the program itself.

    I have written to Paul Hovland (hov...@mcs.anl.gov) and Alan Carle (ca...@rice.edu) but I have received no response from either. Another page at ANL (https://www.anl.gov/partnerships/adifor-automatic-differentiation-of-fortran-77 ) gives the 2 phone
    numbers but neither is in service. A webpage at Rice University (http://www.crpc.rice.edu/newsletters/sum95/news.adifor.html ) adds another email address (bis...@mcs.anl.gov ), also no response there. These pages are 30 years old, so no real surprise.
    part...@anl.gov did respond and was referred to the same web page that proved uninformative above.

    It seems like a very well documented and useful program. It seems a shame for it to die. I do have a large F77 program so I do expect it would be useful for me.

    Daniel Feenberg
    National Bureau of Economic Research
    Cambridge MA 02138

    Although it hasn't been worked on in some time, there is also GRESS 3.0 from RSICC. See https://rsicc.ornl.gov/codes/psr/psr2/psr-231.html . It was only for Fortran 77. Although software with CCC- and DLC- designations has a fee, the last time I checked
    PSR- packages were still available at no cost.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Duffy@21:1/5 to Daniel Feenberg on Mon Apr 25 23:26:03 2022
    Daniel Feenberg <feenberg@gmail.com> wrote:
    There are a number of hits to "fortran automatic differentiation"
    but the only one that claims to be available without cost is ADIFOR,
    from the Argonne Laboratory. On various web pages I have found 5 email addresses and 3 phone numbers. According to the web page, potential
    users should write to one of the addresses for access to the program
    itself.

    http://www.autodiff.org/?module=Tools&language=Fortran77

    lists quite a few that still seem to be free. I last used TAPENADE,
    which still offers a service where you upload your subroutine(s) that
    you want to equip.

    Some posters upthread were wondering why such a technology has not
    been replaced by symbolic differentiation. The autodiff.org FAQ mentions

    How big a function can one differentiate using AD? The largest
    application to date is a 1.6 million line FEM code written in Fortran 77.


    Cheers, David Duffy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arjen Markus@21:1/5 to rym on Tue Apr 26 02:01:06 2022
    On Tuesday, April 26, 2022 at 1:55:14 AM UTC+2, rym wrote:
    On Monday, April 25, 2022 at 5:05:06 AM UTC-7, feen wrote:
    There are a number of hits to "fortran automatic differentiation" but the only one that claims to be available without cost is ADIFOR, from the Argonne Laboratory. On various web pages I have found 5 email addresses and 3 phone numbers. According to
    the web page, potential users should write to one of the addresses for access to the program itself.

    I have written to Paul Hovland and Alan Carle but I have received no response from either. Another page at ANL (https://www.anl.gov/partnerships/adifor-automatic-differentiation-of-fortran-77 ) gives the 2 phone numbers but neither is in service. A
    webpage at Rice University (http://www.crpc.rice.edu/newsletters/sum95/news.adifor.html ) adds another email address, also no response there. These pages are 30 years old, so no real surprise. part at anl.gov did respond and was referred to the same web
    page that proved uninformative above.

    It seems like a very well documented and useful program. It seems a shame for it to die. I do have a large F77 program so I do expect it would be useful for me.

    Daniel Feenberg
    National Bureau of Economic Research
    Cambridge MA 02138
    Although it hasn't been worked on in some time, there is also GRESS 3.0 from RSICC. See https://rsicc.ornl.gov/codes/psr/psr2/psr-231.html . It was only for Fortran 77. Although software with CCC- and DLC- designations has a fee, the last time I
    checked PSR- packages were still available at no cost.

    If a commercial solution is somehow acceptable, then NAG may have an alternative. I have worked with it for a couple of years and it does what you want. I do not know more about ADIFOR than you do.

    Regards,

    Arjen

    PS I removed the email addresses to keep Google groups happy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Daniel Feenberg@21:1/5 to Thomas Koenig on Tue Apr 26 05:13:26 2022
    On Monday, April 25, 2022 at 4:13:37 PM UTC-4, Thomas Koenig wrote:
    gah4 <ga...@u.washington.edu> schrieb:
    On Monday, April 25, 2022 at 9:34:39 AM UTC-7, Thomas Koenig wrote:

    (snip)

    What exactly do you need differentiated?

    I have a program that calculates income tax liability for any year from 1960-2023, and for any state. It is used by economists and others to estimate after-tax income and prices in survey or administrative data. You can see it at http://taxsim.nber.org/taxsim35 .
    I have a hundred or so lines of Fortran in the program to get the analytic derivative of federal tax with respect to earnings, but it would be too much work (and confusion) to do the states too, or any other types of income or deduction.
    There are different tax treatments for different types of income and deduction. I do use finite differences, but they uncover many discontinuities, which are a problem for users.

    If you want fully automated Fortran-to-Fortran conversion, then
    I don't have anything.

    If you can massage your formulas so Maxima can accept them,
    like

    (snip)

    (%i2) diff(sin(x^2),x);

    (%o2) 2*x*cos(x^2)


    I am familiar with symbolic computation programs, but that isn't what I am looking for. The whole program (excluding the user interface) is more than 20,000 lines of code. (Note for non-US readers: The US income tax system is very complicated compared
    to European ones, and it changes every year in every state. Sometimes it is just a parameter value that changes, but new provisions are common and old ones expire with regularity). Converting it all to Macsyma would be a chore. The documentation for
    ADIFOR is very clear, so I am still hoping to get it running.


    I read just a little of the description, which mentions functions
    and subroutines.


    With IFs, it is really hard - what should the derivative of

    if (x > 0.1) then
    foo = 1+x
    else
    foo = -1-x
    end if

    be?

    The AD program is another program, and when it executes, the value of x is a known constant. In my application dfoo/dx would be plus or minus one, depending on the value of x. The computer doesn't know about real numbers, but that doesn't bother me.


    Also, you usually want many derivatives for filling a Jacobi
    matrix, which should be well-behaved, and your function should at
    least be continuous, or you will get into hot water with whatever
    you plan to do with that matrix.

    It does seem that almost all authors of automatic differentiation software assume the application is for a Jacobi matrix but that isn't my application.


    Daniel Feenberg

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Spiros Bousbouras@21:1/5 to gah4@u.washington.edu on Tue Apr 26 12:35:35 2022
    On Mon, 25 Apr 2022 13:33:49 -0700 (PDT)
    gah4 <gah4@u.washington.edu> wrote:
    On Monday, April 25, 2022 at 1:13:37 PM UTC-7, Thomas Koenig wrote:
    gah4 <ga...@u.washington.edu> schrieb:

    (snip)

    You might have a Fortran function with loops and IFs and all, and
    desire a similar function returning the derivative of that one.
    A program (or you) could go through statement by statement,
    adding after each statement, a statement to evaluate the
    derivative of that one. Later statements will likely need both
    the value of variables, and also (from the product rule and
    chain rule) the derivative.

    With IFs, it is really hard - what should the derivative of

    if (x > 0.1) then
    foo = 1+x
    else
    foo = -1-x
    end if

    What it should do is easy, but what you do with the result, is where
    it gets harder:

    if (x > 0.1) then
    foo = 1+x
    dfoodx = 1
    else
    foo = -1-x
    dfoodx = -1
    end if

    Actually, the derivative is not defined at x = 0.1; foo is not even continuous at x = 0.1.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arjen Markus@21:1/5 to Spiros Bousbouras on Tue Apr 26 06:18:17 2022
    On Tuesday, April 26, 2022 at 2:35:39 PM UTC+2, Spiros Bousbouras wrote:

    Actually, the derivative is not defined at x = 0.1; foo is not even continuous at x = 0.1.

    The function is continuous and differentiable on intervals that do not include x = 0.1, and that is exactly how it can be treated. Mind you, you do get results that are geared precisely to the surroundings of the input. So, with discontinuous or non-smooth
    functions, you may need to consider different régimes.

    Regards,

    Arjen

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arjen Markus@21:1/5 to feen on Tue Apr 26 06:20:36 2022
    On Tuesday, April 26, 2022 at 2:13:29 PM UTC+2, feen wrote:

    I am familiar with symbolic computation programs, but that isn't what I am looking for. The whole program (excluding the user interface) is more than 20,000 lines of code. (Note for non-US readers: The US income tax system is very complicated compared
    to European ones, and it changes every year in every state. Sometimes it is just a parameter value that changes, but new provisions are common and old ones expire with regularity). Converting it all to Macsyma would be a chore. The documentation for
    ADIFOR is very clear, so I am still hoping to get it running.

    Daniel Feenberg

    So you have been able to find it? I failed to do so.

    Regards,

    Arjen

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ron Shepard@21:1/5 to Arjen Markus on Tue Apr 26 09:45:07 2022
    On 4/26/22 8:18 AM, Arjen Markus wrote:
    On Tuesday, April 26, 2022 at 2:35:39 PM UTC+2, Spiros Bousbouras wrote:

    Actually, the derivative is not defined at x = 0.1; foo is not even
    continuous at x = 0.1.

    The function is continuous and differentiable on intervals that do not include x = 0.1, and that is exactly how it can be treated. Mind you, you do get results that are geared precisely to the surroundings of the input. So, with discontinuous or non-smooth
    functions, you may need to consider different régimes.

    The left derivative is defined at x=0.1, but the right derivative is
    not. And the function is discontinuous there. This is a common situation
    with physical simulations, for example at phase transitions where some
    physical properties are discontinuous as a function of, say, temperature
    or pressure. If you look at a complicated phase diagram of, say, water,
    with several solid phases, a liquid phase, a gas phase, and a
    supercritical region, then you will see many such boundaries along
    trajectories in (T,P) phase space.

    This situation also occurs artificially when, for example, different
    algorithms are used to evaluate a function in its domain. The
    mathematical function may not have any boundary regions, but the
    piecewise approximations might have them. In practice, even common
    functions, e.g. trig functions such as sin(x), cos(x), etc., are often evaluated in this piecewise way.
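
    As a contrived illustration of that last point (the function and the
    seam location below are invented for the example): glue two different
    approximations of a smooth function together, and a difference
    quotient straddling the seam sees the small jump in the
    implementation even though the mathematical function has none.

    program seam_demo
      implicit none
      real :: h
      h = 1.0e-3
      ! difference quotient across the seam at x = 0.5 versus the true slope exp(0.5)
      print *, (myexp(0.5 + h) - myexp(0.5 - h))/(2.0*h), exp(0.5)
    contains
      real function myexp(x)
        real, intent(in) :: x
        if (x < 0.5) then
           myexp = 1.0 + x + 0.5*x*x + x**3/6.0   ! truncated series below the seam
        else
           myexp = exp(x)                         ! intrinsic above the seam
        end if
      end function myexp
    end program seam_demo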

    $.02 -Ron Shepard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to Ron Shepard on Tue Apr 26 09:18:21 2022
    On Tuesday, April 26, 2022 at 7:45:12 AM UTC-7, Ron Shepard wrote:

    (snip on discontinuous derivatives)

    This situation also occurs artificially when, for example, different algorithms are used to evaluate a function in its domain. The
    mathematical function may not have any boundary regions, but the
    piecewise approximations might have them. In practice, even common
    functions, e.g. trig functions such as sin(x), cos(x), etc., are often evaluated in this piecewise way.

    They are. And presumably this would show up in doing numerical
    derivatives using those functions, though I don't know that I have
    ever seen the problem. Might not be hard to find, though.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to feen...@gmail.com on Tue Apr 26 10:20:55 2022
    On Tuesday, April 26, 2022 at 5:13:29 AM UTC-7, feen...@gmail.com wrote:

    (snip)

    The AD program is another program, and when it executes the value of x would be a known constant. In my application dfoo/dx would be plus or minus one, depending on the value of x.

    Reading this one:

    https://en.wikipedia.org/wiki/PROSE_modeling_language

    reminded me of the difference between symbolic derivatives and
    AD (automatic differentiation). For AD, with each mathematical operation,
    and for that matter, statement, you compute the value and its derivatives.
    You then propagate them with the chain rule, and other calculus rules,
    step by step. Each step uses the values from the previous step.

    I suspect that in many cases it will result in fewer redundant calculations
    than you would get from evaluating the whole symbolic derivative.

    If you want to (and I don't know why you would)

    compute y=sin(sin(sin(x)))
    and also dy/dx

    then with my nearby TI-92 you get:

    y=sin(sin(sin(x)))
    dydx = cos(x) * cos(sin(x)) * cos(sin(sin(x)))

    If instead you expand, in the way that AD might do it:

    v=sin(x)
    w=sin(v)
    y=sin(w)

    then:
    v=sin(x)
    dvdx = cos(x)

    w=sin(v)
    dwdx = cos(v) * dvdx

    y=sin(w)
    dydx = cos(w) * dwdx

    only six function evaluations instead of nine.

    Convenient to do internally in an interpreted language,
    (or internally in a compiler), a little harder in Fortran source.
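
    In Fortran source, that step-by-step version might look like the
    following (a small sketch just to spell out the propagation; the
    second PRINT shows the closed-form derivative for comparison):

    program chain_demo
      implicit none
      real :: x, v, w, y, dvdx, dwdx, dydx
      x = 0.5
      v = sin(x)
      dvdx = cos(x)
      w = sin(v)
      dwdx = cos(v)*dvdx              ! chain rule, reusing v and dvdx
      y = sin(w)
      dydx = cos(w)*dwdx
      print *, y, dydx
      print *, cos(x)*cos(sin(x))*cos(sin(sin(x)))
    end program chain_demo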

    The computer doesn't know about real numbers, but that doesn't bother me.

    (snip)

    It does seem that almost all authors of automatic differentiation software assume
    the application is for a Jacobi matrix but that isn't my application.

    At some point, the Jacobi matrix is a convenient way to write down
    the derivatives. And then after that, the Hessian matrix.

    But yes, the Jacobi matrix, and often the Hessian matrix, are used in a variety
    of optimization algorithms, and so are popular in AD systems.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Ron Shepard on Tue Apr 26 18:34:37 2022
    Ron Shepard <nospam@nowhere.org> schrieb:
    On 4/26/22 8:18 AM, Arjen Markus wrote:
    On Tuesday, April 26, 2022 at 2:35:39 PM UTC+2, Spiros Bousbouras wrote:

    Actually, the derivative is not defined at x = 0.1; foo is not even
    continuous at x = 0.1.

    The function is continuous and differentiable on intervals that do not include x = 0.1, and that is exactly how it can be treated. Mind you, you do get results that are geared precisely to the surroundings of the input. So, with discontinuous or non-smooth
    functions, you may need to consider different régimes.

    The left derivative is defined at x=0.1, but the right derivative is
    not. And the function is discontinuous there.

    And, of course, 0.1 is not even exactly representable in your
    typical binary floating-point type (which is why I chose it that
    way).

    The question is: What should automatic differentiation do
    with this sort of thing?

    This is a common situation
    with physical simulations, for example at phase transitions where some physical properties are discontinuous as a function of say, temperature
    or pressure. If you look at a complicated phase diagram of, say water,
    with several solid phases, a liquid phase, a gas phase, and a
    supercritical region, then you will see many such boundaries along trajectories in (T,P) phase space.

    Agreed. There is a reason for using, for example, enthalpy
    and entropy as variables.

    This situation also occurs artificially when, for example, different algorithms are used to evaluate a function in its domain. The
    mathematical function may not have any boundary regions, but the
    piecewise approximations might have them. In practice, even common
    functions, e.g. trig functions such as sin(x), cos(x), etc., are often evaluated in this piecewise way.

    Getting this right is _very_ hard indeed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ian Gay@21:1/5 to All on Tue Apr 26 13:24:16 2022
    gah4 wrote:

    On Monday, April 25, 2022 at 1:13:37 PM UTC-7, Thomas Koenig wrote:
    gah4 <ga...@u.washington.edu> schrieb:

    (snip)

    You might have a Fortran function with loops and IFs and all, and
    desire a similar function returning the derivative of that one.
    A program (or you) could go through statement by statement,
    adding after each statement, a statement to evaluate the
    derivative of that one. Later statements will likely need both
    the value of variables, and also (from the product rule and
    chain rule) the derivative.

    With IFs, it is really hard - what should the derivative of

    if (x > 0.1) then
    foo = 1+x
    else
    foo = -1-x
    end if

    What it should do is easy, but what you do with the result, is where
    it gets harder:

    if (x > 0.1) then
    foo = 1+x
    dfoodx = 1
    else
    foo = -1-x
    dfoodx = -1
    end if

    Also, you usually want many derivatives for filling a Jacobi
    matrix, which should be well-behaved, and your function should at
    least be continuous, or you will get into hot water with whatever
    you plan to do with that matrix.

    It does help to have a good understanding of the problem you are
    working in, and especially where things can go wrong.

    But usually the problem is more general, and so outside the derivative problem.

    I do remember doing much non-linear least-squares fitting, with the usual
    Newton's method solver. You might have a function that can only be
    evaluated for positive values, but it tries for negative values anyway.

    In this case, use ln(x) as your variable instead of x.


    Now, if it is using sqrt, the function value and its derivative will
    have problems when it goes negative.

    I do remember, though, hand-computing some derivatives to put
    into least-squares fitting programs. Automating that might have
    been nice to have.

    --
    *********** To reply by e-mail, make w single in address **************

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Louis Krupp@21:1/5 to Daniel Feenberg on Fri Apr 29 14:36:07 2022
    On 4/25/2022 6:05 AM, Daniel Feenberg wrote:
    There are a number of hits to "fortran automatic differentiation" but the only one that claims to be available without cost is ADIFOR, from the Argonne Laboratory. On various web pages I have found 5 email addresses and 3 phone numbers. According to
    the web page, potential users should write to one of the addresses for access to the program itself.

    I have written to Paul Hovland (hovland@mcs.anl.gov) and Alan Carle (carle@rice.edu) but I have received no response from either. Another page at ANL (https://www.anl.gov/partnerships/adifor-automatic-differentiation-of-fortran-77 ) gives the 2 phone
    numbers but neither is in service. A webpage at Rice University (http://www.crpc.rice.edu/newsletters/sum95/news.adifor.html ) adds another email address (bischof@mcs.anl.gov ), also no response there. These pages are 30 years old, so no real surprise.
    partners@anl.gov did respond and was referred to the same web page that proved uninformative above.

    It seems like a very well documented and useful program. It seems a shame for it to die. I do have a large F77 program so I do expect it would be useful for me.

    Daniel Feenberg
    National Bureau of Economic Research
    Cambridge MA 02138

    You've probably tried this, but the ANL partnerships page also mentions adifor@mcs.anl.gov in what looks like a screwed-up link.

    Louis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Thomas Koenig on Wed May 4 17:15:23 2022
    Thomas Koenig <tkoenig@netcologne.de> writes:
    With IFs, it is really hard - what should the derivative of
    if (x > 0.1) then
    foo = 1+x
    else
    foo = -1-x
    end if
    be?

    There is /numeric/ and /symbolic/ differentiation.

    Numerically, one can easily get an approximation for anything
    as a difference quotient for a small difference, but needs
    to take care, because some results might not be correct
    (when the function changes too fast or is not differentiable
    at that point).
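
    A minimal sketch of such a difference quotient (the step size here
    is arbitrary and has to balance truncation error against rounding
    error):

    program fd_demo
      implicit none
      real :: x, h
      x = 2.0
      h = 1.0e-3
      ! central difference quotient for sin at x, compared with the exact cos(x)
      print *, (sin(x + h) - sin(x - h))/(2.0*h), cos(x)
    end program fd_demo

    Near a jump like the one in the IF example above, the quotient
    returns a large, meaningless number rather than a derivative.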

    Symbolically, one can also differentiate anything by taking
    the steps a mathematician would take for manual symbolic
    differentiation. Since FORTRAN was designed for numerical
    mathematics, writing symbolic differentiation in FORTRAN
    might be a tad more difficult than in a language like LISP
    that was made for list processing. What is more difficult
    actually is /simplifying/ the raw result of the symbolic
    differentiation to some reasonable and human-readable
    expression (in the case of large and complex expressions).

    Now, to your question: My first idea would be

    if (x > 0.1) then
       result = 1
    else
       result = -1
    end if

    as a first coarse approximation. (Some thoughts about
    the behavior at exactly x=0.1 might have to be added.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Thomas Koenig on Wed May 4 17:56:09 2022
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Since there is no way to specify the Dirac
    delta function (ok, so it's a distribution) in floating point,
    that's not given.

    As far as I understand it, what is a distribution is the
    mapping g(y) from f to

    S_0^oo f(x) delta(y) dx,

    while "delta" in isolaton still just is a symbol.

    With more Unicode symbols: the mapping g(y) from f to

    ∫₀°° f(x) δ(y) dx,

    is a distribution, while "δ" alone still is just a symbol.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Thomas Koenig@21:1/5 to Stefan Ram on Wed May 4 17:28:28 2022
    Stefan Ram <ram@zedat.fu-berlin.de> schrieb:
    Thomas Koenig <tkoenig@netcologne.de> writes:
    With IFs, it is really hard - what should the derivative of
    if (x > 0.1) then
    foo = 1+x
    else
    foo = -1-x
    end if
    be?

    There is /numeric/ and /symbolic/ differentiation.

    Numerically, one can easily get an approximation for anything
    as a difference quotient for a small difference, but needs
    to take care, because some results might not be correct
    (when the function changes too fast or is not differentiable
    at that point).

    Symbolically, one can also differentiate anything by taking
    the steps a mathematician would take for manual symbolic
    differentiation. Since FORTRAN was designed for numerical
    mathematics, writing symbolic differentiation in FORTRAN
    might be a tad more difficult than in a language like LISP
    that was made for list processing. What is more difficult
    actually is /simplifying/ the raw result of the symbolic
    differentiation to some reasonable and human-readable
    expression (in the case of large and complex expressions).

    Now, to your question: My first idea would be

    if (x > 0.1) then
    result = 1
    else
    result = -1
    end if

    as a first coarse approximation. (Some thoughts about
    the behavior at exactly x=0.1 might have to be added.)

    There is no "exactly x=0.1" in binary floating point (and I'm not
    sure that even a single Fortran compiler supports decimal floating
    point, despite radix being present in selected_real_kind), which
    is why I chose that particular number, to add a bit of difficulty.

    However, when you differentiate, there are usually certain
    assumptions, which are violated in this case - people sort of expect
    that integrating what you differentiated gets the same result
    (plus a constant). Since there is no way to specify the Dirac
    delta function (ok, so it's a distribution) in floating point,
    that's not given.

    Or people may want to use it for something useful like evaluating
    sensitivities, or for root-finding, or... all of that is likely
    to fall down in the face of a discontinuity.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Thomas Koenig on Wed May 4 17:39:31 2022
    Thomas Koenig <tkoenig@netcologne.de> writes:
    Stefan Ram <ram@zedat.fu-berlin.de> schrieb:
    Now, to your question: My first idea would be
    if (x > 0.1) then
    result = 1
    else
    result = -1
    end if
    as a first coarse approximation. (Some thoughts about
    the behavior at exactly x=0.1 might have to be added.)
    However, when you differentiate, there are usually certain
    assumptions, which are violated in this case - people sort of expect
    that integrating what you differentiated gets the same result
    (plus a constant). Since there is no way to specify the Dirac
    delta function (ok, so it's a distribution) in floating point,
    that's not given.

    I have since cancelled my post, because I only learned about
    the meaning of "automatic differentiation" after writing it.

    Given

        f(x) =   1,        if x > 0.1
                -1,        otherwise

    what would be the integral? I think it would be

        F(x) =   x + C0,   if x > 0.1
                -x + C1,   otherwise

    If one has the additional information that the integral is
    continuous, then matching the two branches at x = 0.1 gives
    0.1 + C0 = -0.1 + C1, so one could conclude that C1 = 0.2 + C0.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Thomas Koenig on Wed May 4 19:05:41 2022
    Supersedes: <differentiation-20220504183830@ram.dialup.fu-berlin.de> [correcting the integral via "PS" below]

    Thomas Koenig <tkoenig@netcologne.de> writes:
    Stefan Ram <ram@zedat.fu-berlin.de> schrieb:
    Now, to your question: My first idea would be
    if (x > 0.1) then
    result = 1
    else
    result = -1
    end if
    as a first coarse approximation. (Some thoughts about
    the behavior at exactly x=0.1 might have to be added.)
    However, when you differentiate, there are usually certain
    assumptions, which are violated in this case - people sort of expect
    that integrating what you differentiated gets the same result
    (plus a constant). Since there is no way to specify the Dirac
    delta function (ok, so it's a distribution) in floating point,
    that's not given.

    I have since cancelled my post, because I only learned about
    the meaning of "automatic differentiation" after writing it.

    Given

        f(x) =   1,        if x > 0.1
                -1,        otherwise

    what would be the integral? I think it would be

        F(x) =   x + C0,   if x > 0.1
                -x + C1,   otherwise

    If one has the additional information that the integral is
    continuous, then matching the two branches at x = 0.1 gives
    0.1 + C0 = -0.1 + C1, so one could conclude that C1 = 0.2 + C0.

    PS: Using the usual definition of "integration" as the
    "area under the curve", one would indeed assume
    continuity and, therefore, C1 = 0.2 + C0.

    My integral above, with two different constants C1 and C0,
    would be more adequate for a definition of "integration" as
    the task of finding out which piecewise continuous functions
    could have a given function as their derivative.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Thomas Koenig on Wed May 4 18:38:50 2022
    Supersedes: <distribution-20220504185458@ram.dialup.fu-berlin.de>
    [to use a more common notation]

    Thomas Koenig <tkoenig@netcologne.de> writes:
    Since there is no way to specify the Dirac
    delta function (ok, so it's a distribution) in floating point,
    that's not given.

    As far as I understand it, what is a distribution is the
    mapping from f to f(0)=

    S_0^oo f(x) delta(x) dx,

    while "delta" in isolaton still just is a symbol.

    With more Unicode symbols: the mapping from f to f(0)

    ∫₀°° f(x) δ(x) dx,

    is a distribution, while "δ" alone still is just a symbol.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to Thomas Koenig on Wed May 4 21:33:58 2022
    On Wednesday, May 4, 2022 at 10:28:31 AM UTC-7, Thomas Koenig wrote:

    (snip)

    There is no "exactly x=0.1" in binary floating point (and I'm not
    sure that even a single Fortran compiler supports decimal floating
    point, despite radix being present in selected_real_kind), which
    is why I chose that particular number, to add a bit of difficulty.

    I now have an IBM Power7 machine, which (like the Power6) has decimal
    floating point. I haven't gotten around to installing an OS yet, with AIX and Linux being the two choices, and I don't know which compilers have support for it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ron Shepard@21:1/5 to All on Thu May 5 01:37:02 2022
    On 5/4/22 11:33 PM, gah4 wrote:
    On Wednesday, May 4, 2022 at 10:28:31 AM UTC-7, Thomas Koenig wrote:

    (snip)

    There is no "exactly x=0.1" in binary floating point (and I'm not
    sure that even a single Fortran compiler supports decimal floating
    point, despite radix being present in selected_real_kind), which
    is why I chose that particular number, to add a bit of difficulty.

    I now have an IBM Power7 machine, which (like the Power6) has decimal
    floating point. I haven't gotten around to installing an OS yet, with AIX and Linux being the two choices, and I don't know which compilers have support for it.

    This issue has absolutely nothing to do with decimal arithmetic and
    whether a particular processor supports it or not.

    In the original code, there was an if statement that compared to the
    literal value 0.1. That literal value is translated to floating point,
    and in floating point, it has a definite value. That numerical value
    might differ between binary and decimal floating point, but that doesn't matter. Just like
    it doesn't matter if the value is represented in 32-bit floating point
    or 46-bit floating point or 64-bit floating point, however it is
    translated, it has a value. What matters is that some value exists. Then expressions such as "x > 0.1" have a meaning given the value of x and
    the value of 0.1. Those values can compare equal, regardless of
    whether binary or decimal floating-point arithmetic is being done.

    Or, stated a different way, when the programmer writes "x > 0.1", he is
    telling the processor to compare the value of x to another value that it
    can represent; he is not telling the processor to compare x to some hypothetical value that it might not be able to represent.

    As for fortran support of decimal floating point, I am curious how this
    is done too. Presumably there are some KIND values for those REALs? What happens with mixed-kind arithmetic? Is one kind converted to the other,
    or is there hardware support for the mixed-kind operations?

    $.02 -Ron Shepard

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)