• comment on fundamental design flaws that Maple, Mathematica, Sage, Macsyma have

    From Nasser M. Abbasi@21:1/5 to All on Thu Oct 28 02:49:30 2021
    I am not a member of the Quora web site, but I saw this post
    on it, which has the most upvotes on the topic

    https://www.quora.com/What-is-the-difference-between-maple-and-mathematica

    "Note that Sage is unlikely to be that competitor: its
    fundamental design carries along the same flaws that Maple
    and Mathematica have; most of those flaws were actually already
    present in Macsyma years earlier, but the `conventional wisdom'
    had not moved on to recognize these fundamental design flaws"

    Unfortunately the post does not say what these fundamental design
    flaws are. It is an old post from six years ago.

    I wonder if anyone would venture to guess or comment on what
    these flaws might be? I am just curious to learn.

    --Nasser

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From nobody@nowhere.invalid@21:1/5 to Nasser M. Abbasi on Sun Oct 31 20:13:15 2021
    "Nasser M. Abbasi" schrieb:

    I am not a member of the Quora web site, but I saw this post
    on it, which has the most upvotes on the topic

    https://www.quora.com/What-is-the-difference-between-maple-and-mathematica

    "Note that Sage is unlikely to be that competitor: its
    fundamental design carries along the same flaws that Maple
    and Mathematica have; most of those flaws were actually already
    present in Macsyma years earlier, but the `conventional wisdom'
    had not moved on to recognize these fundamental design flaws"

    Unfortunately the post does not say what these fundamental design
    flaws are. It is an old post from six years ago.

    I wonder if anyone would venture to guess or comment on what
    these flaws might be? I am just curious to learn.


    I think this post should be considered self-contained, and the flaws
    are to be found among the points raised about Maple and Mathematica
    earlier on. Of these, the following might concern Sage and/or Macsyma
    as well:

    * As programming languages, they [...] make anyone who knows anything
    about programming languages shudder.

    * Unfortunately, a lot of [Maple's] packages are not actually well
    integrated into the core routines.

    * Mathematica has a lot of fancy code [...], but hides a lot of it
    underneath interfaces with restricted options [...].

    * Most of [...] Maple is written in Maple, and is user-visible; most of Mathematica is written in C and invisible.

    * The poster believes in coding in the language imposed on users (the
    "eating your own dog food" method of software development).

    All five are interrelated. I have ignored comments on User Interfaces, Technical Support, and commercial aspects.

    Martin.

  • From antispam@math.uni.wroc.pl@21:1/5 to Nasser M. Abbasi on Thu Nov 11 06:19:37 2021
    Nasser M. Abbasi <nma@12000.org> wrote:
    I am not a member of the Quora web site, but I saw this post
    on it, which has the most upvotes on the topic

    https://www.quora.com/What-is-the-difference-between-maple-and-mathematica

    "Note that Sage is unlikely to be that competitor: its
    fundamental design carries along the same flaws that Maple
    and Mathematica have; most of those flaws were actually already
    present in Macsyma years earlier, but the `conventional wisdom'
    had not moved on to recognize these fundamental design flaws"

    Unfortunately the post does not say what these fundamental design
    flaws are. It is an old post from six years ago.

    I wonder if anyone would venture to guess or comment on what
    these flaws might be? I am just curious to learn.

    Well, I read the phrase "fundamental design flaws" in a survey
    article by Richard Fateman many years ago. I do not know what
    he meant, nor what is meant in the Quora post. But I have some
    experience with developing a CAS and I can share my view of
    the problems.

    IMO, the fundamental problem of computer algebra is that what
    users would like to have is uncomputable: users would like
    "correct" answers to mathematical problems, which requires
    theorem proving (which is provably uncomputable in general).
    Now, the fact that a _class_ of problems is uncomputable does
    not mean that we cannot answer a specific problem. But
    uncomputability means that there is no uniform method to solve
    all "computer algebra" problems, and as a corollary, some
    problems similar to easy problems may need much more time.
    Another aspect is that a computer algebra program is likely
    to be quite complex.

    There is one trick used to tame many computer algebra
    problems: if the system cannot fully solve a problem, it
    leaves the input in unevaluated form. This is a reasonable
    solution in the case of simplification. However, there is a
    catch: when there is a conditional in a program, some decision
    has to be made. In particular, one needs to test for zero
    before division. A missing test can cause wrong results
    (several popular "1 = 0" "proofs" use a hidden division by
    zero). This is the so-called "zero recognition problem". IMO
    no system has a really good solution for it; basically a
    system tries some methods, and where the results are
    inconclusive there is some compromise between the possibility
    of wrong answers and the rejection of valid but confusing
    examples.
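
    To make the zero-recognition trap concrete, here is a minimal sketch
    in Python (my own illustration; the helper name `divide_both_sides`
    is hypothetical): starting from a = b, both sides of
    a*(a-b) = (a+b)*(a-b) are equal, but dividing by (a-b) without a
    zero test would "derive" a = a+b, the core of the classic "1 = 2"
    proofs.

```python
def divide_both_sides(lhs, rhs, divisor):
    """Naive rewrite step: divides both sides with no zero test."""
    return lhs / divisor, rhs / divisor

def bogus_proof(a):
    b = a  # start from b = a, so a - b == 0
    # a*(a - b) == (a + b)*(a - b) holds trivially: both sides are 0.
    assert a * (a - b) == (a + b) * (a - b)
    try:
        # Dividing by (a - b) would "derive" a == a + b, i.e. 1 == 2.
        divide_both_sides(a * (a - b), (a + b) * (a - b), a - b)
        return "bogus derivation accepted"
    except ZeroDivisionError:
        return "division by zero caught"

print(bogus_proof(3))  # division by zero caught
```

    Here concrete numbers make the zero test trivial; with symbolic
    expressions the system must decide whether (a - b) is zero, which is
    exactly the undecidable part.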

    Now, comparing various systems, one aspect is the programming
    language; in particular, the ability of the programming
    language to express math concepts and follow math notation.
    Here IMO Macsyma is rather weak. Namely, Lisp (used in
    Macsyma) allows one to express many things, math in
    particular. But Lisp uses its own notation, which deviates
    significantly from math notation. And Macsyma used rather
    low-level Lisp coding practice, so it requires effort to
    "decode" math expressed in Macsyma source code. One of my
    first contacts with computer algebra was via Pari; it is coded
    in C and shares a similar problem with Macsyma: instead of

    a*x+b

    in Pari you had something like 'add(mul(a, x), b)' (that is,
    assuming the C variables a, x and b were assigned appropriate
    values). Personally I found Pari code easier to decode than
    Maxima (derived from MIT Macsyma), but it was still
    problematic. I have only limited experience with Maple code,
    but what I saw looked _much_ better than Pari or Maxima. Sage
    is intermediate: some math can be expressed nicely in Python,
    and the Sage folks added a preprocessor to support slightly
    more math, but in other places Python (and Sage) deviates
    rather far from math notation. You may guess that I am
    reasonably satisfied with the notation used in FriCAS...
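
    The notational gap described above can be sketched in a few lines
    of Python (a toy of mine, not actual Pari or Maxima code): with
    operator overloading the source reads like the math, while without
    it one writes nested prefix calls that build the same tree.

```python
class Expr:
    """Tiny symbolic expression tree with infix operators."""
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __add__(self, other): return Expr('+', self, other)
    def __mul__(self, other): return Expr('*', self, other)
    def __repr__(self):
        if self.op == 'sym':
            return self.args[0]
        return '(' + self.op.join(map(repr, self.args)) + ')'

def sym(name): return Expr('sym', name)
def add(x, y): return x + y      # prefix style, as in C-hosted systems
def mul(x, y): return x * y

a, x, b = sym('a'), sym('x'), sym('b')
infix  = a * x + b               # reads like the math: a*x + b
prefix = add(mul(a, x), b)       # same tree, prefix notation
print(infix)   # ((a*x)+b)
print(prefix)  # ((a*x)+b)
```

    Both forms denote the same tree; the difference is purely how close
    the source code sits to conventional math notation.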

    Concerning the programming language, there is the question of
    the relation of the user language (used to express commands
    and enter expressions) to the language used to implement the
    system. The Macsyma user language is quite different from
    Lisp, and in Maxima, while there is a large collection of
    routines written in the Maxima language, there is a tendency
    to rewrite some of them in Lisp, either for performance or
    because coding in the Maxima language has its problems.
    According to (old) official Maple info, the large majority of
    Maple is written in its user language (there is a relatively
    small kernel written in C). Again, old official info said that
    Mathematica is about half in its own language and half in C.
    In Sage, IIUC, most code really comes from external systems in
    their own languages. IIUC Sage's "own" code is in Python, but
    (much/all ???) without the preprocessing done for user input,
    so there is some difference. In FriCAS the core language is
    the same, but at the user level the language is more forgiving
    of type errors (the system makes a lot of effort to convert
    expressions to the correct type); OTOH the implementation
    language (Spad) has constructs for large scale programming:
    you need Spad to create new domains/categories or packages.
    Still, it is possible to prototype code using the user
    language and later, after small changes, to compile it as
    Spad code.

    Another aspect is modularity. Again, it seems that
    Macsyma/Maxima is rather weak here (Lisp has reasonably good
    support for modularity, but IIUC Maxima makes only little use
    of it). IIUC at the beginning Maple and Mathematica did not
    think too much about modularity and essentially "bolted on"
    the needed constructs later. Still, from my limited point of
    view it looks reasonably good. Python has good support for
    modularity and Sage uses it. The FriCAS precursor (Axiom) was
    designed from the start with modularity in mind, and I think
    that FriCAS support for modularity is very good.

    Let me also mention design philosophy. Macsyma was built
    around the idea that expressions flow around and that code
    should pass unrecognized expressions unchanged in the hope
    that eventually some part will handle them. FriCAS was built
    around the idea that at a given place there is a precisely
    defined set of legal expressions. Related to this is the
    issue of types: FriCAS expressions have types, and the
    compiler checks types and enforces type correctness.
    Macsyma/Maxima (and most other CASes) use dynamic typing,
    meaning that some operations may require specific types (for
    example, to take the first element of a list you need a
    list), but in principle variables can contain values of any
    type. Let me add that the design philosophy part is somewhat
    fuzzy: Maxima contains a lot of internal consistency checks,
    and FriCAS for convenience sometimes accepts things that
    under a strict interpretation would be wrong (in particular,
    on the command line FriCAS tries to convert types). Still,
    there is a real difference: in FriCAS, in most cases it is
    reasonably clear that some value is a bug and what the
    correct value should be. Similarly, it is reasonably clear
    which values should be accepted/rejected by a routine. My
    impression from occasional reading of the Maxima list is
    that Maxima developers rather frequently have doubts about
    what a given Maxima function should do and whether a given
    value is legal. Concerning types: for most users, types
    probably are an obstacle, but without types FriCAS probably
    would not exist. Simply put, with types I could read code,
    understand it, and fix or extend it as needed. Without types
    it would be very hard to understand the code and almost
    impossible to have confidence in the correctness of changes.
    As an example, let me mention that long in the past FriCAS
    had fake two-dimensional arrays: they were really vectors
    (one-dimensional arrays) of vectors. This was changed to
    real two-dimensional arrays, and thanks to types it was easy
    to find the small number of places that depended on the
    nature of the arrays and change them. Due to typing, the
    rest of the code could not see the change and worked fine
    (only faster) afterwards.
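
    The two-dimensional-array story above can be mimicked in Python with
    type hints (my sketch; the FriCAS/Spad details differ): as long as
    clients go through the declared interface, swapping a
    vector-of-vectors representation for a flat one touches only the
    class body.

```python
from typing import List

class Array2D:
    """Two-dimensional array behind a fixed interface.

    The representation below is a flat list; an earlier version could
    have used a list of row lists. Clients that only call get/set
    cannot observe the difference, which is what makes such a
    representation change safe to carry out."""
    def __init__(self, rows: int, cols: int, fill: float = 0.0):
        self.rows, self.cols = rows, cols
        self._data: List[float] = [fill] * (rows * cols)

    def get(self, i: int, j: int) -> float:
        return self._data[i * self.cols + j]

    def set(self, i: int, j: int, v: float) -> None:
        self._data[i * self.cols + j] = v

m = Array2D(2, 3)
m.set(1, 2, 7.0)
print(m.get(1, 2))  # 7.0
```

    In a dynamically typed code base, finding every place that peeks at
    the raw representation is exactly the hard part; with checked types
    the compiler lists those places for you.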

    Now, I would not consider any of the differences that I
    mention "fundamental"; they can be changed or overcome with
    sufficient effort. But I feel that they are significant: some
    systems require significantly more effort to improve than
    others.

    What I wrote is mostly an implementer's view. Some people may
    be more interested in the user's view. However, I think that
    basically the value that users get is the effort spent on a
    system divided by the difficulty of improving the system. So,
    while indirect, it is very relevant to users.

    --
    Waldek Hebisch

  • From Richard Fateman@21:1/5 to Nasser M. Abbasi on Tue Nov 30 17:21:11 2021
    On Thursday, October 28, 2021 at 12:49:33 AM UTC-7, Nasser M. Abbasi wrote:
    I am not a member of the Quora web site, but I saw this post
    on it, which has the most upvotes on the topic

    https://www.quora.com/What-is-the-difference-between-maple-and-mathematica

    "Note that Sage is unlikely to be that competitor: its
    fundamental design carries along the same flaws that Maple
    and Mathematica have; most of those flaws were actually already
    present in Macsyma years earlier, but the `conventional wisdom'
    had not moved on to recognize these fundamental design flaws"

    Unfortunately the post does not say what these fundamental design
    flaws are. It is an old post from six years ago.

    I wonder if anyone would venture to guess or comment on what
    these flaws might be? I am just curious to learn.

    --Nasser
    I don't usually read sci.math.symbolic, for reasons that are probably obvious, but I happened across Nasser's post, which mentioned me! What kinds of fundamental design flaws? I'm not sure I can recall them all, but here are a few.
    1. Trying to do mathematics and then realizing that the system has to handle "assumptions" such as "assume x>y" or "assume x is an integer" ... or n is an integer mod 13, ... consequently, assumption handling is bolted on after the system is largely built.
    2. Trying to aim for "it works for high-school students". For instance, what to do with sqrt(x^2)? Will abs(x) work? Sometimes. Maybe it should be a set {-x,x}? Too late, the system has already been built to handle single values. Maybe bolt on RootOf(y^2=x, y) and compute with that?
    3. Inadequate recognition of numerical computation (esp. bad in Mathematica's model of numbers), in the user language.
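
    Point 2 can be seen concretely (a small illustration of mine, using
    Python's cmath): for real x, sqrt(x^2) does equal abs(x), but
    already for x = i the identity fails, which is why the
    "works for high-school students" default breaks down.

```python
import cmath

def sqrt_of_square(x):
    """Principal square root of x squared."""
    return cmath.sqrt(x * x)

# For real x the abs(x) answer works:
print(sqrt_of_square(-3.0))   # (3+0j), matching abs(-3.0)
# For complex x it does not: sqrt(i^2) = sqrt(-1) = i, but abs(i) = 1.
print(sqrt_of_square(1j))     # 1j (up to rounding)
print(abs(1j))                # 1.0
```

    A system committed to single-valued answers must pick one branch;
    whichever it picks will be wrong for some class of inputs.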

    I'm sure there were other issues -- what I wrote about decades ago was that the Macsyma group at MIT recognized a bunch of things that would have to be done differently, but at that time (c. 1980) funding was hard to get, and the impetus seemed to be to sell the system to Symbolics rather than re-starting. Unfortunately, the designs of Maple and Mathematica and numerous other systems took the original (limited) design as their starting point. (Actually Mathematica had Wolfram's SMP as its broken original starting point.)
    This phenomenon is the opposite of the sentiment in Isaac Newton's most famous statement, made in a letter to Robert Hooke in 1675: "If I have seen further it is by standing on the shoulders of Giants". (Allegedly Newton disliked Hooke. Also, Hooke may have been short/stooped.)

    Anyway, in computer algebra system building, it is almost universal that new system builders stand on the feet of those who came before. That is, they re-implement the easy and well-understood components without understanding the barriers. Often they think everything will be solved by a better user interface, some random parallel algorithms, or a nice typeset display.

    Oh why am I so negative on Mathematica/numerics? If you have a copy, try this:
    k = 1.0000000000000000000
    Do [k = 2*k - k, {i, 40}]
    k == 2*k ---> this test for equality returns True. Can you guess why?
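
    A plausible explanation, sketched in Python (this is my own toy
    interval model of significance arithmetic, not Mathematica's actual
    algorithm): track each number as value ± err, and let each
    arithmetic step propagate the error naively. Each pass through
    k = 2*k - k leaves the value alone but triples the error estimate,
    so after 40 passes the ~10^-19 starting uncertainty has grown to
    order 1, and an equality test "to within the tracked precision"
    succeeds even between k and 2*k.

```python
value, err = 1.0, 1e-19            # 1.0000000000000000000: ~19 trusted digits
for _ in range(40):
    # k = 2*k - k: the value is unchanged, but naive error propagation
    # gives err(2*k - k) = err(2*k) + err(k) = 3*err(k).
    value, err = 2 * value - value, 2 * err + err

def fuzzy_equal(v1, e1, v2, e2):
    """Equal when the two uncertainty intervals overlap."""
    return abs(v1 - v2) <= e1 + e2

print(err)                                          # roughly 1.2
print(fuzzy_equal(value, err, 2 * value, 2 * err))  # True
```

    The value itself stays exactly 1.0 throughout; only the bookkeeping
    about how many digits are trustworthy decays, until the comparison
    can no longer distinguish 1 from 2.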

    I think the Maple language has a gross syntax and clumsy semantics for function calls, and the last time I used it for anything, the time taken by a built-in command and the time taken by a programmed computation of the same result differed enormously.

    I think the idea behind Sage is fundamentally "Let's all get together and write programs in Python." As though that will fix everything.

    It is easier, in Macsyma's descendant Maxima, to make patches, or to build mostly self-contained subsystems using features from Maxima as needed, than to take on the reprogramming from scratch of a new system, though from time to time the impulse rises to the surface. Maybe Axiom/FriCAS? I don't know much about recent efforts.

    RJF

  • From Richard Fateman@21:1/5 to All on Wed Dec 1 17:23:01 2021
    Regarding Waldek's note -- about the undecidability of central problems in mathematics that require determining whether an expression is zero or not... (results due to Daniel Richardson, 1968 or so).
    I don't consider this a design flaw in the systems that people build. It is a fundamental limitation in the mechanization of mathematics, one that cannot be remedied by a better design -- unless you design a system that is so constrained (say, to integer arithmetic) that Richardson's results don't hold.

    This issue about mechanization goes back to Russell and Whitehead, and actually a good deal before that.
    Analogously, Gödel's theorem, while in some sense devastating, hardly stopped mathematicians from doing mathematics.

    RJF

  • From Бальтазар@21:1/5 to All on Fri Dec 17 16:32:22 2021
    Sunday, October 31, 2021 at 22:06:09 UTC+3, nob...@nowhere.invalid:
    "Nasser M. Abbasi" schrieb:
    * As programming languages, they [...] make anyone who knows anything
    about programming languages shudder.

    * Unfortunately, a lot of [Maple's] packages are not actually well integrated into the core routines.

    * Mathematica has a lot of fancy code [...], but hides a lot of it underneath interfaces with restricted options [...].

    * Most of [...] Maple is written in Maple, and is user-visible; most of Mathematica is written in C and invisible.

    * The poster believes in coding in the language imposed on users (the "eating your own dog food" method of software development).

    All five are interrelated. I have ignored comments on User Interfaces, Technical Support, and commercial aspects.

    Martin.
    First of all, "eating your own dog food" relates to compiler bootstrapping, which is what they did for gcc (a very complex progression, from asm https://stackoverflow.com/a/65708958 through C and then to C++) and for C# (the latter very recently, in the Roslyn version of the compiler). Mathematica is a symbolic language and bootstrapping it would be insanity. As for writing most of a language in its own language, that is what Java did and why it is so slow, and why the main CPython implementation of Python did not do it. Also, Mathematica allows compiling to stand-alone C/CUDA applications and working with C/C++/CUDA code.

  • From antispam@math.uni.wroc.pl@21:1/5 to Richard Fateman on Mon Dec 20 18:08:30 2021
    Richard Fateman <fateman@gmail.com> wrote:
    On Thursday, October 28, 2021 at 12:49:33 AM UTC-7, Nasser M. Abbasi wrote:
    I am not a member of the Quora web site, but I saw this post
    on it, which has the most upvotes on the topic

    https://www.quora.com/What-is-the-difference-between-maple-and-mathematica

    "Note that Sage is unlikely to be that competitor: its
    fundamental design carries along the same flaws that Maple
    and Mathematica have; most of those flaws were actually already
    present in Macsyma years earlier, but the `conventional wisdom'
    had not moved on to recognize these fundamental design flaws"

    Unfortunately the post does not say what these fundamental design
    flaws are. It is an old post from six years ago.

    I wonder if anyone would venture to guess or comment on what
    these flaws might be? I am just curious to learn.

    --Nasser
    I don't usually read sci.math.symbolic, for reasons that are probably obvious, but I happened across Nasser's post, which mentioned me! What kinds of fundamental design flaws? I'm not sure I can recall them all, but here are a few.
    Thanks for sharing.
    1. Trying to do mathematics and then realizing that the system has to handle "assumptions" such as "assume x>y" or" assume x is an integer" .. or n is an integer mod 13, ... consequently, assumption handling is bolted on after the system is largely
    built.

    However, I dare to say that in 2021 it is still not clear how "proper"
    handling of assumptions should work. It seems that the core
    Scratchpad II design was done between 1976 and 1982. I am not sure
    about the handling of assumptions in the original Scratchpad, but
    the developers of Scratchpad II should have been aware of the need
    for assumptions, yet Scratchpad II contained no special support for
    assumptions. It is possible that the developers of Scratchpad II
    assumed that categories and domains make assumptions unnecessary.
    Certainly domains nicely handle "n is an integer mod 13" for the
    purpose of computations. But domains do rather poorly with
    conditions like "n is an integer which happens to give remainder 3
    when divided by 13" or "x is a real number which happens to be
    greater than 1/2". AFAICS we still do not know:
    - what a good language to express assertions would be
    - how to manage context
    - how to integrate assertion support with computations.

    To expand on this: current practice seems to use a rather restricted,
    ad-hoc language to express assertions. OTOH a richer language may
    be too hard to handle. Concerning context: assertions are
    inserted in specific situations, but the resulting expression
    may be used in a wider context. In particular, case splits
    will produce conditional expressions, and when adding conditional
    expressions we need to properly combine the conditions. So
    we need to somehow associate/bundle context with expressions.
    Naively attaching a context to each expression is likely to be
    too heavy. And we want conditions like "x > 0 or x = 0 or x < 0"
    to combine to nothing (or maybe to "x is real").
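
    The condition-combining step can be sketched with a toy interval
    representation in Python (my own illustration; real context
    management is far harder, and open/closed endpoint bookkeeping is
    glossed over here): represent each condition on a real x as an
    interval, and detect when the union of the case-split conditions
    imposes no constraint at all.

```python
import math

def union_covers_reals(intervals):
    """intervals: list of (lo, hi) pairs over the real line.
    Returns True when their union is all of R (no constraint left)."""
    ivs = sorted(intervals)
    reach = -math.inf
    for lo, hi in ivs:
        if lo > reach:           # a gap before this interval starts
            return False
        reach = max(reach, hi)
    return reach == math.inf

# "x > 0 or x = 0 or x < 0" combines to no condition at all:
cases = [(0.0, math.inf), (0.0, 0.0), (-math.inf, 0.0)]
print(union_covers_reals(cases))                          # True
# whereas "x > 0 or x = 0" still constrains x:
print(union_covers_reals([(0.0, math.inf), (0.0, 0.0)])) # False
```

    Even this toy shows why naive per-expression contexts get heavy:
    every combination step needs a normalization pass like this one.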

    Related to this: given "1 < y, y < x" it is reasonably easy
    to infer that "0 < x". But how far do you want to go with
    such inference?

    Concerning the integration of assertions and computations: if each
    expression has its own context, then one can have some automatic
    way to combine contexts. However, in some (many??) important
    cases one can infer that some conditions coming from intermediate
    calculations are in fact redundant. For example, if theory says
    that the result is analytic, then we can use the principle of
    analytic continuation to extend the range of validity. So, it is
    not clear how much of context management could be automated and
    when special code should be added to computations.

    2. Trying to aim for "it works for high-school students". For instance, what to do with sqrt(x^2)? will abs(x) work? sometimes. Maybe it should be a set {-x,x} ?

    Oh, no. IMO set-valued mappings in this context only muddle the issue.
    I mean, a case split is OK, but then you need a whole machinery to
    properly track conditions and combine them in the proper way. I am
    strongly in the complex camp, so abs(x) is wrong (OK, in principle a
    system could provide something called, say, 'real_sqrt', which
    requires its argument to be a nonnegative real number and returns
    the nonnegative root).

    Too late, the system has already been built to handle single values. Maybe bolt on RootOf(y^2=x, y) and compute with that?

    Hmm, RootOf for irreducible equations is pretty fundamental, so it
    is not "bolted on". OTOH, allowing reducible equations in
    RootOf is debatable. So if you mean RootOf(y^2=x^2, y), then
    I agree that allowing it in a system designed to work with
    irreducible equations is bad.

    3. Inadequate recognition of numerical computation (esp. bad in Mathematica's model of numbers), in the user language.

    Hmm, I wonder what you want here? FriCAS allows machine floating point
    as a supported type and also has settable-precision bigfloats. OTOH
    many symbolic routines in FriCAS do not allow numeric arguments because
    the routine cannot give sensible results for floating point numbers.
    For example, some people try to plug in a numeric root of a polynomial
    and expect simplifications as if computing with the exact root. It
    would be nice, but IMO it is too much to expect. To be more precise,
    plugging floating point numbers into the _result_ of a symbolic
    computation should work fine. But expecting that a symbolic
    routine will sensibly compute the GCD of polynomials with
    floating point coefficients is too much.
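
    The GCD remark can be demonstrated in a few lines of Python (my
    sketch, using a textbook Euclidean algorithm; coefficient lists are
    highest degree first): with exact Fraction coefficients,
    gcd((x - 1/3)(x + 1/5), (x - 1/3)(x - 1/2)) comes out degree 1 as it
    should, while with floats the remainders that ought to vanish become
    tiny nonzero numbers, and the "gcd" typically collapses to a
    spurious constant.

```python
from fractions import Fraction as F

def polyrem(num, den):
    """Remainder of polynomial division (coefficients high degree first)."""
    num = list(num)
    while len(num) >= len(den):
        q = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= q * den[i]
        num.pop(0)
    return num

def polygcd(a, b):
    """Euclidean algorithm; reliable only with exact coefficients."""
    while any(c != 0 for c in b):
        r = polyrem(a, b)
        while r and r[0] == 0:      # strip exactly-zero leading coeffs
            r.pop(0)
        a, b = b, r
    return a

# (x - 1/3)(x + 1/5) and (x - 1/3)(x - 1/2): the true gcd is x - 1/3.
exact = polygcd([F(1), F(-2, 15), F(-1, 15)], [F(1), F(-5, 6), F(1, 6)])
print(len(exact) - 1)    # 1: a degree-one gcd, a multiple of x - 1/3
floaty = polygcd([1.0, -2/15, -1/15], [1.0, -5/6, 1/6])
print(len(floaty) - 1)   # typically 0: a spurious constant "gcd"
```

    The exact run terminates with a scalar multiple of x - 1/3; the
    float run usually ends with a leftover coefficient of order 1e-17
    that the exact-zero test cannot strip.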

    <snip>
    I think the idea behind Sage is fundamentally "Let's all get together and write programs in Python." As though that will fix everything.

    I think "write it in Python" is really the Sympy idea. Sage is more
    "reuse what exists". The Sage creator recognized that a CAS needs a
    language and decided to (mostly) re-use an existing language, namely
    Python. However, AFAIK most of the functionality of Sage comes from
    external packages, and most of them are _not_ written in Python.

    BTW: Around the time when Sage was born I considered the idea of
    creating a CAS on the basis of existing code. I found Maxima code
    too hard to read and understand, so I very quickly dropped the idea
    of basing my effort on Maxima. But there were C- or C++-based
    libraries and programs that provided several things needed by a CAS
    (like arithmetic for univariate polynomials). However, it was clear
    that trying to integrate such disparate codes was a substantial
    effort. I think that I could have managed it, but most of my time
    would have gone to low-level issues like build machinery or managing
    incompatible data representations. So when I learned about Axiom I
    joined the Axiom team...

    In this context, I would rather say that the Sage motto is "we have
    enough developers to manage all those pesky incompatibilities
    between the components that we use".

    --
    Waldek Hebisch

  • From Richard Fateman@21:1/5 to All on Wed Dec 22 10:53:02 2021
    There is a community of academics who work on theorem proving and mathematical representation -- look up the CICM conferences. The overlap between this community and the CAS community should be substantial, but for some reason it is not. I would expect that the representation of assumptions has presumably been solved in the CICM community. Probably solved many times.

    It may be that the category-based computational model does not so much substitute for assumptions as legislate against the existence of assumptions. If your model is (say) the integers only, it is not necessary to provide for assume(2>1). If your model is (say) polynomials in one or more variables x1, x2, ..., then something like assume(x2>0) is not necessary for the arithmetic of polynomials. Some additional structure may allow for assumptions, but that structure ("evaluation") is perhaps not part of the model.
    Just as adding structure (say, that the logarithm in the complex plane has multiple values) complicates programming.

    As for Sage/Python -- I entirely agree that Sage is a composition of many programs and many capabilities not written in Python. However, there is some kind of main-line constructive framework that has been put together in Python. Some of the contributors to that, and to Sage packages, have done a good job within that context. However, viewing (say) Maxima by what is allowed to "show through" in Sage is to have a limited view. I have the feeling that less competent programmers/mathematicians (say, newbie Python programmers with a summer off from high school) may have more enthusiasm than skill, and "contribute" to a confused situation in Sage. This opinion is certainly based on old impressions, and may be outdated.
    I do agree that Maxima internals are complicated -- more so than they should be. You can start from scratch and build a less complicated system. But can it do the same things as Maxima?
    RJF

  • From nobody@nowhere.invalid@21:1/5 to All on Wed Dec 29 07:17:44 2021
    Бальтазар schrieb:

    Sunday, October 31, 2021 at 22:06:09 UTC+3, nob...@nowhere.invalid:
    "Nasser M. Abbasi" schrieb:
    * As programming languages, they [...] make anyone who knows
    anything about programming languages shudder.

    * Unfortunately, a lot of [Maple's] packages are not actually well integrated into the core routines.

    * Mathematica has a lot of fancy code [...], but hides a lot of it underneath interfaces with restricted options [...].

    * Most of [...] Maple is written in Maple, and is user-visible; most
    of Mathematica is written in C and invisible.

    * The poster believes in coding in the language imposed on users
    (the "eating your own dog food" method of software development).

    All five are interrelated. I have ignored comments on User
    Interfaces, Technical Support, and commercial aspects.

    First of all, "eating your own dog food" relates to compiler
    bootstrapping, which is what they did for gcc (a very complex
    progression, from asm https://stackoverflow.com/a/65708958 through C
    and then to C++) and for C# (the latter very recently, in the Roslyn
    version of the compiler). Mathematica is a symbolic language and
    bootstrapping it would be insanity. As for writing most of a
    language in its own language, that is what Java did and why it is so
    slow, and why the main CPython implementation of Python did not do
    it. Also, Mathematica allows compiling to stand-alone C/CUDA
    applications and working with C/C++/CUDA code.

    From memory ...

    Maple users cannot invoke a compilation of their code, and to my
    knowledge, the development of the Maple system never involved
    compiler bootstrapping. Its kernel, ported from a predecessor
    implementation, I believe, is now coded in C, and the many decades
    of Maple's existence saw some movement of system components between
    the rather slim kernel and the extensive libraries written in
    Maple's language itself.

    The absence of compilation and the irrelevance of compiler
    bootstrapping should also have applied to Mathematica until fairly
    recently. Today, I believe, the system offers compilation of its
    user language, a feature accompanied by a rebranding of the language
    as Wolfram. I have no idea how their Wolfram compiler was created or
    in which language it is implemented.

    ... but memory may be fooling me.

    By the way, what is the state of symbolic algebra in Julia, where code
    is indeed compiled automatically on the fly? Can this or other Julia
    features be of advantage here?

    Martin.

  • From antispam@math.uni.wroc.pl@21:1/5 to Richard Fateman on Sun Jan 2 13:56:54 2022
    Richard Fateman <fateman@gmail.com> wrote:
    Regarding Waldek's note -- about the undecidability of central problems in mathematics that require determining whether an expression is zero or not... (results due to Daniel Richardson, 1968 or so).
    I don't consider this a design flaw in the systems that people build. It is a fundamental limitation in the mechanization of mathematics, one that cannot be remedied by a better design -- unless you design a system that is so constrained (say, to integer arithmetic) that Richardson's results don't hold.

    I do not say that undecidability is a design flaw. Undecidability is
    a fact that we need to accept. OTOH, the design of a system should
    accommodate undecidability. And related to undecidability is
    complexity. IMO an important part of the quality of a design is how
    it manages the complexity needed to get correct and useful results
    in undecidable domains.

    --
    Waldek Hebisch

  • From antispam@math.uni.wroc.pl@21:1/5 to Richard Fateman on Sun Jan 2 14:40:12 2022
    Richard Fateman <fateman@gmail.com> wrote:

    There is a community of academics who work with theorem proving and mathematical representation -- look up the CICM conferences. The overlap between this community and the CAS community should be substantial, but for some reason it is not. I would
    expect that the representation of assumptions is presumably solved in the CICM community. Probably solved many times.

    Clearly there are systems, each applying some method. And there are a lot
    of papers. But the bare-bones nature of OpenMath (basically everything
    interesting is delegated to specialized dictionaries that depend
    on common agreement of the interested parties) clearly shows that
    there is no "accepted" solution.


    It may be that the category-based computational model does not so much substitute for assumptions, as legislates against the existence of assumptions.

    At the fundamental level the category model in Scratchpad II is agnostic
    about assumptions. Existing categories indeed are somewhat hostile
    to assumptions. But nothing fundamental is against categories
    that support assumptions.

    If your model is (say) the integers only, it is not necessary to provide for assume(2>1). If your model is (say) polynomials in one or more variables x1, x2, ... then something like assume(x2>0) is not necessary for arithmetic of polynomials.
    Some additional structure may allow for assumptions, but that structure ("evaluation") is perhaps not part of the model.

    The basic thing is failure: current categories in FriCAS assume that
    failure will happen only in some well-defined circumstances, like
    division by 0. This allows a relatively simple way of computing
    things that otherwise would be hard to compute.

    Assumptions are closely related to partial functions and failure.
    Namely, when assumptions are used to decide something, the lack of
    an assumption does not mean that the negation of the assumption is true.
    Simply, without assumptions we cannot decide. Of course, there
    is the old trick of keeping things unevaluated. But it affects
    control flow quite a lot.
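    A minimal sketch of the three-way outcome (Python, with a hypothetical
    assumption-set API): a sign query must be able to return true, false,
    or unknown, and the "unknown" branch is exactly where keeping things
    unevaluated enters:

```python
from enum import Enum

class Tri(Enum):
    TRUE = 1
    FALSE = 2
    UNKNOWN = 3

def is_positive(sym, assumptions):
    """Sign query against an assumption set: the absence of 'sym > 0'
    must NOT be read as its negation."""
    if (sym, '>0') in assumptions:
        return Tri.TRUE
    if (sym, '<=0') in assumptions:
        return Tri.FALSE
    return Tri.UNKNOWN

def simplify_sqrt_sq(sym, assumptions):
    """Simplify sqrt(sym^2), branching on all three outcomes."""
    sign = is_positive(sym, assumptions)
    if sign is Tri.TRUE:
        return sym
    if sign is Tri.FALSE:
        return '-' + sym
    return 'sqrt(' + sym + '^2)'  # the old trick: stay unevaluated

print(simplify_sqrt_sq('x', {('x', '>0')}))  # x
print(simplify_sqrt_sq('x', set()))          # sqrt(x^2)
```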

    As for Sage/python -- I entirely agree that Sage is a composition of many programs and many capabilities not written in Python. However, there is some kind of main-line constructive framework that has been put together in Python.

    IIUC Sage without external components can do so little that I would
    not call it a CAS. There are external components written in Python,
    notably SymPy, but I suspect that with the non-Python components removed
    Sage would be essentially non-functional.

    I do agree that Maxima internals are complicated -- more so than they should be. You can start from scratch and build a less complicated system. But can it do the same thing as Maxima?

    Can Maxima do something special? AFAICS the core functionality of
    various CASes is similar (polynomial operations, equation
    solving, limits, integration, etc.), and in Maxima's case this
    part seems rather dated. It was state of the art in 1980,
    but there has been (IMO significant) progress since then.

    Of course, the Maxima user language is specific to Maxima, and
    various user "tactics" may work quite differently. But those
    are "legacy" aspects. Clearly, new systems are not 100% compatible,
    but if on average they provide more/better functionality
    then it is worth switching to a new system.

    --
    Waldek Hebisch

  • From Nasser M. Abbasi@21:1/5 to clicliclic@freenet.de on Sun Jan 9 00:32:52 2022
    On 12/29/2021 12:17 AM, clicliclic@freenet.de wrote:

    By the way, what is the state of symbolic algebra in Julia, where code
    is indeed compiled automatically on the fly? Can this or other Julia
    features be of advantage here?

    Martin.


    There is the OSCAR project, which is a CAS that uses Julia:

    "Julia serves as an integration layer allowing the four
    cornerstones to communicate in a more direct way than
    through unidirectional interfaces. Furthermore it serves
    as high-level language for implementing efficient algorithms utilizing all cornerstones."

    https://www.youtube.com/watch?v=rWQK4mU3jQc&ab_channel=TheJuliaProgrammingLanguage
    https://oscar.computeralgebra.de/about/

    I never used Oscar myself.

    The main page for Julia symbolics is
    https://symbolics.juliasymbolics.org/dev/ but I do not know how much
    development is going on with Symbolics.jl

    --Nasser

  • From nobody@nowhere.invalid@21:1/5 to Nasser M. Abbasi on Sat Jan 15 13:23:03 2022
    "Nasser M. Abbasi" schrieb:

    On 12/29/2021 12:17 AM, clicliclic@freenet.de wrote:

    By the way, what is the state of symbolic algebra in Julia, where
    code is indeed compiled automatically on the fly? Can this or other
    Julia features be of advantage here?


    There is the OSCAR project, which is a CAS that uses Julia:

    "Julia serves as an integration layer allowing the four
    cornerstones to communicate in a more direct way than
    through unidirectional interfaces. Furthermore it serves
    as high-level language for implementing efficient algorithms
    utilizing all cornerstones."

    https://www.youtube.com/watch?v=rWQK4mU3jQc&ab_channel=TheJuliaProgrammingLanguage
    https://oscar.computeralgebra.de/about/

    I never used Oscar myself.

    The main page for Julia symbolic is
    https://symbolics.juliasymbolics.org/dev/ but do not how much
    development is going on with Symbolics.jl


    OSCAR (Open Source Computer Algebra Resource|Research system):

    ... blurped as: "A next generation open source computer algebra system surpassing the combined mathematical capabilities of the underlying
    systems." Hmmm.

    ... bundling the four computer algebra systems GAP, polymake, Singular,
    and Antic (Hecke plus Nemo), which are geared at abstract algebra and
    number theory, under a Julia umbrella. None of the components is
    written in Julia, and the bundle should resemble the Python-based Sage
    project, albeit without calculus support.

    ... appearing to be a project financed by German authorities (TRR 195),
    again like Sage, which (for some time at least) received EU funding, I
    believe.

    Wow: The linked video has been viewed 636 times so far; you may be the
    first to comment on it!


    Julia symbolics:

    ... blurped as: "A fast and modern Computer Algebra System for a fast
    and modern programming language." Hmmm.

    ... constituting a self-contained effort in pure Julia, apparently
    meant to provide symbolic support for efficient number crunching
    (matrix computations) in Julia - that much and little else.

    ... lacking many basic computer algebra capabilities so far, such as:
    stating and applying assumptions, polynomial factorization, Gröbner
    bases, symbolic solutions to systems of polynomial equations, limits
    and integration in calculus,
    which unsurprisingly constitute those staples of computer algebra that
    are harder to implement.

    How long will users have to wait for polynomial factorization? For
    plain number crunching matrix buffs, PSLQ-based factorization may be a convenient choice.

    Martin.

  • From Nasser M. Abbasi@21:1/5 to clicliclic@freenet.de on Sat Jan 15 08:21:10 2022
    On 1/15/2022 6:23 AM, clicliclic@freenet.de wrote:

    Julia symbolics:

    ...

    ... lacking many basic computer algebra capabilities so far, such as:
    stating and applying assumptions, polynomial factorization, Gröbner
    bases, symbolic solutions to systems of polynomial equations, limits
    and integration in calculus,
    which unsurprisingly constitute those staples of computer algebra that
    are harder to implement.

    Hi.

    To do integration in Julia, the Julia package Reduce can be
    used

    https://reduce.crucialflow.com/v1.2/
    "Symbolic parser generator for Julia language expressions
    using REDUCE algebra term rewriter"

    It looks like an API to the original Reduce CAS at

    https://reduce-algebra.sourceforge.io/index.php
    "REDUCE is a portable general-purpose computer algebra system"

    --------------------------
                   _
       _       _ _(_)_     |  Documentation: https://docs.julialang.org
      (_)     | (_) (_)    |
       _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
      | | | | | | |/ _` |  |
      | | |_| | | | (_| |  |  Version 1.7.1 (2021-12-22)
     _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
    |__/                   |

    julia> using Reduce

    #use Reduce to do the integration

    julia> output = :(int(sin(x),x)) |> rcall
    :(-(cos(x)))

    julia> :(int((a^2 - b^2*x^2)^(-1),x)) |> rcall
    :((log(-((a + b * x))) - log(a - b * x)) / (2 * a * b))

    julia> :(int( (-1+exp(1/2*x))^3/exp(1/2*x),x)) |> RExpr

    e^(x/2)*(e^x + 3*x) - 2*(3*e^x - 1)
    ------------------------------------
                 e^(x/2)

    ---------------------------------
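    One sanity check that does not require any other package to understand
    Reduce's output is numerical: differentiate the returned antiderivative
    and compare against the integrand. A Python sketch for the second
    integral above (cmath is used so the log of a negative argument is
    defined; the test values a, b, x are arbitrary):

```python
import cmath

# Reduce's answer for int(1/(a^2 - b^2*x^2), x), taken verbatim:
def F(x, a, b):
    return (cmath.log(-(a + b*x)) - cmath.log(a - b*x)) / (2*a*b)

def integrand(x, a, b):
    return 1.0 / (a*a - b*b*x*x)

# A central finite difference of F should reproduce the integrand.
a, b, x, h = 2.0, 1.0, 0.5, 1e-6
dF = (F(x + h, a, b) - F(x - h, a, b)) / (2*h)
print(abs(dF - integrand(x, a, b)) < 1e-8)  # True
```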

    I was thinking of adding Reduce via Julia to the next build of the
    CAS-independent integration tests, if I have time.

    --Nasser

  • From nobody@nowhere.invalid@21:1/5 to Nasser M. Abbasi on Sun Jan 16 10:21:23 2022
    "Nasser M. Abbasi" schrieb:

    On 1/15/2022 6:23 AM, clicliclic@freenet.de wrote:

    Julia symbolics:

    ...

    ... lacking many basic computer algebra capabilities so far, such as:
    stating and applying assumptions, polynomial factorization, Gröbner
    bases, symbolic solutions to systems of polynomial equations, limits
    and integration in calculus,
    which unsurprisingly constitute those staples of computer algebra that
    are harder to implement.


    To do integration in Julia, the Julia package Reduce can be
    used

    https://reduce.crucialflow.com/v1.2/
    "Symbolic parser generator for Julia language expressions
    using REDUCE algebra term rewriter"

    It looks like an API to the original Reduce CAS at

    https://reduce-algebra.sourceforge.io/index.php
    "REDUCE is a portable general-purpose computer algebra system"

    [...]


    And there is a Maxima for Julia too:

    <https://nsmith5.github.io/Maxima.jl/latest/>

    Now only Giac is missing.

    Is one offered the latest respective CAS version automatically? There
    should be a command causing Reduce to identify itself.

    Martin.

  • From Nasser M. Abbasi@21:1/5 to clicliclic@freenet.de on Sun Jan 16 03:47:09 2022
    On 1/16/2022 3:21 AM, clicliclic@freenet.de wrote:


    And there is a Maxima for Julia too:

    <https://nsmith5.github.io/Maxima.jl/latest/>

    Now only Giac is missing.

    Is one offered the latest respective CAS version automatically? There
    should be a command causing Reduce to identify itself.

    Martin.


    As for the version of Reduce used by Julia, I was wondering the same
    thing: how to find which version of Reduce one is using.

    I could not find a way to find out. But since "Reduce" itself seems to be an
    almost dead CAS (no activity that I could see, no forum at Stack Overflow, for example),
    I assume Julia is using the same version one can download
    from https://reduce-algebra.sourceforge.io/ which is

    redcsl --version
    Codemist Standard Lisp revision 6105 for linux-gnu:x86_64: Oct 19 2021

    I found a few problems using Reduce from Julia to do integration testing.

    What Reduce generates as output can't be used easily in Julia
    Symbolics. The Julia Symbolics.jl package does not "understand" the
    output from the Reduce.jl package. These two packages were basically
    not designed to share a common language so they could talk to
    each other. For example, Reduce.jl uses "e" for Euler's constant,
    while Symbolics uses "exp()", and things like that. When Reduce
    returns "int()", Symbolics and Julia do not understand what this is, so
    it can't be converted to LaTeX, and so on.
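    For illustration, a toy rewriter for one such mismatch (the rule and
    regex below are hypothetical, nothing like the real Reduce.jl or
    Symbolics.jl internals; a real bridge would need a proper parser):

```python
import re

# Hypothetical one-rule bridge: Reduce prints e^expr where another
# system expects exp(expr).  Handles a bare token or one
# parenthesized group -- which is exactly why ad-hoc interop is brittle.
def reduce_to_exp(s):
    return re.sub(r'\be\^(\w+|\([^()]*\))', r'exp(\1)', s)

print(reduce_to_exp('e^x + 3*x'))        # exp(x) + 3*x
print(reduce_to_exp('e^(x/2)*(e^x+1)'))  # exp((x/2))*(exp(x)+1)
```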

    This makes it of very limited use for any kind of real testing.

    I tried to use Reduce itself, but could not figure out how to do some things
    with it, and there is no good documentation and no forum to ask. (I sent email
    to join the Reduce mailing list at SourceForge, but never got a reply.)

    Using sagemath to access these CAS systems is better supported.

    But sagemath has no current interface to use Reduce. If and when
    it does, I will use it from sagemath.

    Btw, speaking of interfaces to CAS systems, there is work to have an
    interface from sagemath to mathics CAS.

    https://trac.sagemath.org/ticket/31778


    --Nasser

  • From Richard Fateman@21:1/5 to anti...@math.uni.wroc.pl on Sat Jan 29 21:32:47 2022
    On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl wrote:
    Can Maxima do something special? AFAICS core functionality of
    various CAS-es is similar (polynomial operations, equation
    solving, limits, integration, etc.) and in Maxima case this
    part seems to be rather dated. It was state of the art in 1980,
    but there was (IMO significant) progress after that.

    Waldek Hebisch

    Hi Waldek & sci.math.symbolic.
    Apologies for the delayed reaction. I don't visit here that often.

    1. Maxima is written (mostly) in Lisp; the Lisp systems have gotten better in various ways and support
    more or better memory, bignum arithmetic, communication with systems in other languages, web stuff.
    Those changes seep into Maxima, sometimes, though slowed by the need to be platform agnostic.

    2. Some subsystems not necessarily in Lisp have also improved. For example the user interface
    wxmaxima is, I think, quite nice and getting nicer.
    Another example is the improvements in graphical display via improvements in gnuplot.
    There are also translations from Fortran of software -- like quadpack. If there were other pieces of
    software of interest written in Fortran, they might also be translated to Lisp and run in Maxima.
    I suspect that with modest alterations this code can be run using arbitrary-precision floats in Maxima.
    Other code is potentially called as foreign function libraries (e.g. Gnu MPFR). I suppose that
    any of the packages linked to (say) Sage or Julia could be called from Lisp, since there are
    existing interfaces to C and Python. I don't know if anyone has done this, but again to be part
    of the Maxima distribution it would have to be platform (hardware, software) agnostic.

    So are these "special"? I don't know for sure, but I think they are certainly not dated.

    3. There is a continuing effort by a host of people who provide fixes, enhancements, and applications
    in their own library public repositories. There are educational projects, and wholesale adoption of
    Maxima in schools and school districts. There is an active Maxima mailing list.

    4. If there were a set of standard benchmarks for "core" functionalities, there might be a basis for
    testing if Maxima's facilities were dated. I am aware of the testing of indefinite integration of
    functions of a single variable, comparing Rubi to various other systems. I have some doubts about
    measurements of Maxima, since they are done through the partly-blinded eyes of Sage. I have run
    some of the "failed" Maxima tests through Maxima and found they succeed, and indeed find answers
    that are simpler and smaller than some of the competition. So I would not judge from this.

    While indefinite integration is an application that relies on a tower of algorithmic developments in
    symbolic mathematical systems, one that made researchers proud over the years -- starting as
    probably the first largish program written in Lisp (James Slagle's SAINT, 1961)
    it is not much in demand by engineers and applied mathematicians. In fact
    the far more common problems of DEFINITE integration (in one or more variables) can
    usually be addressed by numerical quadrature. The reference / handbooks of calculus formulas
    contain far more formulas for definite integrals [ with parameters], involving special functions, and
    even so, they harken back to a time when a mathematician did not have access to computers.

    So while a core functionality of a CAS might be "integral calculus", it is partly a tribute to "we can mostly do this with what we built"
    more than to "applied mathematicians asked us to do this for their daily work".
    In part it is a historical tribute to "we must be doing something hard because human calculus students struggle to do their homework problems, and maybe
    this is even Artificial Intelligence. And that is good."

    If some of the newer CAS have "better" core algorithms like -- polynomial multiplication,
    polynomial GCD, expansion in Taylor series, it would be interesting to take note, and if so
    with substantial likelihood they can be inserted into Maxima, or added in via libraries.
    For instance, improved algebraic system solving, limits (e.g. D. Gruntz), manipulation of
    special functions. The common notion that "Lisp is slow" and "C is fast" and that therefore
    doing nothing other than writing in C is a step forward, I think is wrong. (People say Python and sympy
    are slower than C, maybe slower than Lisp, Julia is faster than Python or maybe faster than
    C. These are all just running within constant multiples of each other, if they use the same
    algorithms. And benchmarks tend to be misleading anyway.)

    There are areas where interested programmers could add to
    a computer algebra system, and they might consider adding to Maxima; a few I've suggested
    include an improved interval arithmetic system, and a way of using an inverse symbolic
    calculator (see Stoutemyer's interesting paper https://arxiv.org/abs/2103.16720 )

    I am more struck by the fact that "new" CAS have rarely improved on those core capabilities, rarely moving in interesting directions. The ones that have been mentioned previously in
    this thread. And some of them have made stumbling steps in wrong directions. When pointed out, they respond, in effect, in the immortal words of Peewee Herman,
    "I meant to do that"... https://gifs.com/gif/pee-wee-herman-i-meant-to-do-that-mPElxr

    Are there possible breakthroughs that will make all current CAS so obsolete that they must
    all be tossed in the trash? If so, I haven't seen them yet. Can current CAS be improved? Sure,
    but some improvements will be difficult.

    Richard Fateman

  • From antispam@math.uni.wroc.pl@21:1/5 to Richard Fateman on Tue Feb 8 02:32:27 2022
    Richard Fateman <fateman@gmail.com> wrote:
    On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl wrote:
    Can Maxima do something special? AFAICS core functionality of
    various CAS-es is similar (polynomial operations, equation
    solving, limits, integration, etc.) and in Maxima case this
    part seems to be rather dated. It was state of the art in 1980,
    but there was (IMO significant) progress after that.

    Waldek Hebisch

    Hi Waldek & sci.math.symbolic.
    Apologies for the delayed reaction. I don't visit here that often.

    1. Maxima is written (mostly) in Lisp; the Lisp systems have gotten better in various ways and support
    more or better memory, bignum arithmetic, communication with systems in other languages, web stuff.
    Those changes seep into Maxima, sometimes, though slowed by the need to be platform agnostic.

    2. Some subsystems not necessarily in Lisp have also improved. For example the user interface
    wxmaxima is, I think, quite nice and getting nicer.
    Another example is the improvements in graphical display via improvements in gnuplot.
    There are also translations from Fortran of software -- like quadpack. If there were other pieces of
    software of interest written in Fortran, they might also be translated to Lisp and run in Maxima.
    I suspect that with modest alterations this code can be run using arbitrary-precision floats in Maxima.

    Well, when I looked at the Fortran code, my conclusion was that a significant
    part cannot be _usefully_ run at arbitrary precision. For example,
    special functions and some quadratures use magic constants that need
    to be accurate to the required precision. Then there is the question of
    compute time: low-order methods scale quite badly with precision.
    For example, for double-float elliptic functions there is Carlson's
    code. This code needs a number of iterations growing linearly with
    precision. OTOH Gauss-Landen transforms need only a logarithmic
    number of iterations. So, for double precision FriCAS uses
    Carlson's method. But arbitrary precision uses Gauss-Landen transforms
    (with some parts adapted from Carlson to improve robustness).
    In FriCAS nominally the Romberg method can deliver arbitrary precision,
    but it is too slow to use. Variable-order Gauss methods are
    useful for arbitrary precision, but rather different from
    fixed-precision ones.
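    The Gauss-Landen point can be illustrated with the closely related
    arithmetic-geometric mean: quadratic convergence means the iteration
    count grows only like the log of the precision, not linearly. A
    double-precision Python sketch for the complete elliptic integral
    K(k), not taken from any of the systems discussed:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean; quadratic convergence means the
    iteration count grows only like log(precision)."""
    n = 0
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, math.sqrt(a * b)
        n += 1
    return a, n

def ellipK(k):
    """Complete elliptic integral of the first kind via the AGM:
    K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    m, _ = agm(1.0, math.sqrt(1.0 - k * k))
    return math.pi / (2.0 * m)

print(abs(ellipK(0.0) - math.pi / 2) < 1e-12)  # True: K(0) = pi/2
_, iters = agm(1.0, 0.6)
print(iters)  # a handful of iterations suffice at double precision
```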

    There is also another possible problem: if code is written using
    generic Lisp arithmetic, then for double precision it will be
    much slower than specialized code. So you really want several
    versions of the code: real double precision, complex double precision,
    and separate arbitrary-precision version(s) (you can probably use a
    single version for real and complex arbitrary precision, but
    it may be simpler to have 2 arbitrary-precision versions) and
    possibly also a single-precision version (ATM in FriCAS there is
    no support for single precision).

    Other code is potentially called as foreign function libraries (e.g. Gnu MPFR). I suppose that
    any of the packages linked to (say) Sage or Julia could be called from Lisp, since there are
    existing interfaces to C and Python. I don't know if anyone has done this, but again to be part
    of the Maxima distribution it would have to be platform (hardware, software) agnostic.

    So are these "special"? I don't know for sure, but I think they are certainly not dated.

    Well, what you mention above are things outside the core. If you think
    that they are important, then you should really go with the "new" systems;
    AFAICS they have this part much better covered than Maxima. OK, you count
    Lisp as an advantage; for a Lisp lover indeed Maxima has an advantage, as
    it is the system where Lisp is most visible. For me Lisp is just an
    implementation detail which should _not_ leak to the user level.

    3. There is a continuing effort by a host of people who provide fixes, enhancements, and applications
    in their own library public repositories. There are educational projects, and wholesale adoption of
    Maxima in schools and school districts. There is an active Maxima mailing list.

    You basically say that there is inertia: in the short term, for Maxima
    folks it is easier to continue to use Maxima than to switch to
    something else. True, inertia is a powerful force. But if you
    look at the number of people behind various systems, Maxima would
    look better than FriCAS, but worse than several "new" systems.

    4. If there were a set of standard benchmarks for "core" functionalities, there might be a basis for
    testing if Maxima's facilities were dated.

    In arXiv preprint 1611.02569 Parisse gives an example of polynomial
    factorization and proposes a special method to solve it. ATM FriCAS
    does not do very well on this example (2500s on my machine), but
    certainly better than systems that cannot do it at all. However,
    my estimate is that using up-to-date general methods it should take a
    fraction of a second (Magma was reported to need 3s, Parisse's special
    method 2s). So, I would say that possibly all systems have
    some work to do.

    I am aware of the testing of indefinite integration of
    functions of a single variable, comparing Rubi to various other systems. I have some doubts about
    measurements of Maxima, since they are done through the partly-blinded eyes of Sage. I have run
    some of the "failed" Maxima tests through Maxima and found they succeed, and indeed find answers
    that are simpler and smaller than some of the competition. So I would not judge from this.

    The Rubi testsuite has flaws, but I think that it actually overestimates
    Maxima's capabilities. Namely, examples in the Rubi testsuite are
    arranged to be easily matchable to patterns. That eliminates
    work that would otherwise be needed to discover the true structure
    of the integrand. AFAICS Maxima basically assumes that the integrand
    can be handled by simple-minded heuristics. This is true
    for the Rubi testsuite, but fails in more general cases.
    Already random exp-log examples seem to cause trouble for Maxima.

    Let me add that, working on integration, I also test integration
    on various systems. My examples are probably harder than
    average. Anyway, it is pretty easy to come up with examples
    that Maxima or Reduce cannot do. Maple is much better
    there. And Mathematica is still better.

    While indefinite integration is an application that relies on a tower of algorithmic developments in
    symbolic mathematical systems, one that made researchers proud over the years -- starting as
    probably the first largish program written in Lisp (James Slagle's SAINT, 1961)
    it is not much in demand by engineers and applied mathematicians. In fact the far more common problems of DEFINITE integration (in one or more variables) can
    usually be addressed by numerical quadrature. The reference / handbooks of calculus formulas
    contain far more formulas for definite integrals [ with parameters], involving special functions, and
    even so, they harken back to a time when a mathematician did not have access to computers.

    So while a core functionality of a CAS might be "integral calculus", it is partly a tribute to "we can mostly do this with what we built."
    more than "applied mathematicians asked us to do this for their daily work". In part it is a historical tribute to "we must be doing something hard because
    human calculus students struggle to do their homework problems, and maybe this is even Artificial Intelligence. And that is good."

    Well, when people need numbers they probably should use numeric
    methods. But some people need formulas and they hope that
    CAS will deliver formulas.

    Regardless of your view about the usefulness of indefinite
    integration, testing indefinite integration is useful. Namely,
    indefinite integration is easily testable, and to have good performance
    in general you need good algorithms. You may claim that those
    algorithms are very specialized. However, the core Risch algorithm
    is relatively simple. The difficulty is that you need a lot of
    support routines, which however are usable in more general contexts.
    In more detail, you need polynomial and rational function arithmetic,
    including gcd, resultants, factorization and partial fraction
    decomposition; this is usable almost everywhere in a CAS. Next,
    you need differential field routines; they are usable also for
    simplification, transcendental solving, differential equation
    solving, limits and power series expansion.
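    As a miniature of the most basic such support routine, here is
    polynomial gcd over Q via the Euclidean algorithm (a Python sketch
    with coefficient lists, lowest degree first; real systems use modular
    methods instead, precisely because this textbook version suffers
    coefficient growth):

```python
from fractions import Fraction

def pdivmod(a, b):
    """Polynomial division with remainder over Q.
    Polynomials are coefficient lists, lowest degree first."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b):
        c = a[-1] / b[-1]
        d = len(a) - len(b)
        q[d] = c
        for i, bi in enumerate(b):
            a[d + i] -= c * bi
        a.pop()                      # leading term is now exactly zero
        while a and a[-1] == 0:      # strip any further zero leaders
            a.pop()
    return q, a or [Fraction(0)]

def pgcd(a, b):
    """Euclidean gcd, normalized to be monic."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        _, r = pdivmod(a, b)
        a, b = b, r
    lc = a[-1]
    return [c / lc for c in a]

# gcd(x^2 - 1, x^2 + 2x + 1) = x + 1
print(pgcd([-1, 0, 1], [1, 2, 1]))  # [Fraction(1, 1), Fraction(1, 1)]
```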

    If some of the newer CAS have "better" core algorithms like -- polynomial multiplication,
    polynomial GCD, expansion in Taylor series, it would be interesting to take note, and if so
    with substantial likelihood they can be inserted into Maxima, or added in via libraries.

    Hmm, you should know about Monagan and Pearce's work on polynomial
    operations. It is now included in Maple and gives significant
    speedup on large polynomials. There is a long cycle of works by
    Monagan and collaborators on polynomial GCDs and factorization
    giving significant speedups compared to earlier methods. IIUC
    several of them are included in Maple. Some variations are
    implemented in FriCAS. As an illustration let me say that
    for GCDs with algebraic coefficients FriCAS previously used
    subresultant GCD (claimed to be a very good implementation
    of subresultant GCD). Switching to modular methods in the
    style of Monagan and van Hoeij gave a large speedup. My
    estimates suggest that using a different modular method
    should be much faster still. Monagan shows an example where
    factorization in an algebraic extension using the older method
    is prohibitively expensive, but the newer method works quite
    well. In fact, in all his papers he gives examples, and IMO
    it is clear that what he and his collaborators implemented
    gives significant progress.

    I looked a bit at the Maxima multivariate factorization code.
    AFAICS this is the early Wang method, trying to use zero as the
    evaluation point. This is reasonably fast if you are lucky,
    but it may be quite slow otherwise. In modern times there
    seems to be agreement that zero evals should be avoided;
    there is too high a risk of degraded performance if you try
    them.
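    A toy illustration of why a zero evaluation point is risky (the
    polynomial p(x,y) = (x+y)(xy+1) is a made-up example, sketched in
    Python): evaluating at y = 0 kills the leading coefficient in x, so
    the univariate image no longer reflects the true degree or
    factorization:

```python
# p(x, y) = (x + y) * (x*y + 1) = y*x^2 + (y^2 + 1)*x + y.
# Coefficients in x after substituting a value for y:
def p_coeffs_x(y):
    return [y, y * y + 1, y]          # [constant, x, x^2]

def deg(coeffs):
    d = len(coeffs) - 1
    while d > 0 and coeffs[d] == 0:   # strip vanished leading terms
        d -= 1
    return d

print(deg(p_coeffs_x(0)))  # 1 -- leading coefficient vanished: bad image
print(deg(p_coeffs_x(3)))  # 2 -- degree preserved: usable evaluation point
```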

    Anyway, already in the classic case of integer coefficients
    Maxima lacks improvements concerning the handling of harder cases.
    For algebraic coefficients IIUC there is a problem of
    correctness (IIUC some Maxima methods silently produce
    wrong results) and no (correct) support for modular
    methods. A state-of-the-art approach would use Hensel
    lifting directly for algebraic coefficients. And
    there is a well-developed methodology for exploiting
    sparsity with a non-zero evaluation point. There are
    heuristics for early failure and restart (to avoid
    huge loss of time due to a bad eval). There are new
    approaches to the leading coefficient problem.

    Of course, one could try to add this to the existing routines.
    But IMO the needed changes are so large that it makes more
    sense to start from scratch. In FriCAS the code was newer
    and much more readable than in Maxima. Still, parts
    of the new approach did not fit well into the existing schema,
    so I had to add new parallel modules. This is an ongoing
    change (in a relatively early stage) and currently old code
    is used together with new code, but in the end it is possible
    that no old factorization/gcd code will survive.

    Concerning power series expansions: AFAICS FriCAS
    routines are much better than Maxima routines in this
    area. Part of this is that FriCAS uses a lazy approach.
    Part is that FriCAS has special handling for many
    cases (so more code, but better performance).
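    The lazy idea is easy to sketch in Python with generators (this is
    illustrative only, not FriCAS's implementation): coefficients are
    produced only on demand, and an operation like multiplication pulls
    exactly as many terms as the caller asks for:

```python
from fractions import Fraction
from itertools import islice

def exp_series():
    """Lazily yield Taylor coefficients of exp(x): 1/n!.
    Nothing is computed until a coefficient is demanded."""
    c, n = Fraction(1), 0
    while True:
        yield c
        n += 1
        c /= n

def series_mul(f, g, terms):
    """Cauchy product of two lazy series, truncated to 'terms' coefficients."""
    a = list(islice(f, terms))
    b = list(islice(g, terms))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(terms)]

# exp(x) * exp(x) = exp(2x), whose coefficients are 2^n / n!
print(series_mul(exp_series(), exp_series(), 4))
# [Fraction(1, 1), Fraction(2, 1), Fraction(2, 1), Fraction(4, 3)]
```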

    As another example let me mention that recently on the Maxima
    list there was a guy with a system of equations which Maxima
    could not solve (IIRC Maxima ran out of memory). I tried
    this system in FriCAS. After a little modification (which
    matches the stated intent) the system was easily solvable
    (a few seconds) in FriCAS. The original system was harder
    (minutes or maybe tens of minutes) but worked too.
    On the Maxima list the advice was: "you do not need to solve
    it, the answer would be too complicated to see anything".
    Well, the FriCAS answer was largish and gave me no
    enlightenment. But sometimes answers are small despite the
    large computations needed to obtain them. Sometimes a
    large answer may give an important clue. And in many
    cases it is possible to do further calculations
    with such an answer.

    So "you do not need this" is a poor excuse; a system
    that can give an answer is better than one which
    cannot.

    For instance, improved algebraic system solving, limits (e.g. D. Gruntz), manipulation of
    special functions. The common notion that "Lisp is slow" and "C is fast" and that therefore
    doing nothing other than writing in C is a step forward, I think is wrong.

    Unfortunately Lisp implementations are slow. IME the
    fastest is SBCL, where normally I can get about half
    the speed of comparable C code. But at the Lisp level
    this code has complete type declarations, and uses
    specialized arrays and machine-compatible types. SBCL
    gives slower code than C simply because the SBCL code
    generator cannot optimize as well as the optimizers in
    C compilers. In principle, on such code ECL or GCL
    should do better because the code could be easily 1-1
    translated to C. But somehow, both ECL and GCL insert
    extra operations which kill performance. However, even
    that estimate is somewhat optimistic: when I really
    need performance in C I can tweak the code to use,
    say, SSE instructions, doing 2 or 4 times more work
    per instruction. Also, in gcc I have easy access to
    the high part of the product of 64-bit numbers, which
    in several problems roughly doubles performance
    compared to a 64-bit product. So, using C, in many
    cases I could get 4-8 times faster code than going via
    SBCL. Recently I needed to work with 2-dimensional
    arrays. To my dismay I discovered that my code was
    much slower than I hoped, about 6 or 8 times slower
    than comparable C. It turned out that SBCL (at least
    the version that I used) was unable to optimize
    indexing for 2-dimensional arrays and dutifully
    generated code repeatedly fetching the dimension from
    the array object and performing the multiplication
    implied by indexing...
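    The usual workaround for such indexing overhead is manual
    linearization: store the matrix in one flat vector and do the
    row-major index arithmetic explicitly, so the row stride can be
    hoisted out of inner loops. A minimal Python sketch (names are mine,
    purely illustrative of the layout):

```python
class FlatMatrix:
    """A 2-D array stored in one flat list with explicit
    row-major indexing: element (i, j) lives at i*ncols + j.
    This is the linearization a good compiler would generate."""
    def __init__(self, nrows, ncols, fill=0.0):
        self.ncols = ncols
        self.data = [fill] * (nrows * ncols)

    def get(self, i, j):
        return self.data[i * self.ncols + j]

    def set(self, i, j, v):
        self.data[i * self.ncols + j] = v

m = FlatMatrix(2, 3)
m.set(1, 2, 5.0)
```

    In a loop over one row, `i * self.ncols` is a constant and can be
    computed once, which is exactly the optimization the quoted SBCL
    version failed to perform.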

    Another thing is that Lisp encourages writing
    "generic" code, which works for several types. That is
    nice for non-critical code, but there is a rather
    large slowdown compared to well-typed code with enough
    type info.

    ATM I am staying with SBCL, but you can see that when
    it comes to the speed of low-level code, Lisp is a
    problem.

    Concerning writing everything in C: I think that this
    approach is wrong. It makes sense if you write a
    low-level library that should be callable from many
    languages, because calling C is relatively easy while
    calls between different higher-level languages are
    problematic.

    (People say Python and sympy
    are slower than C, maybe slower than Lisp, Julia is faster than Python or maybe faster than
    C.

    I did many microbenchmarks and also looked at the
    correlation with runtime for applications. One can get
    a reasonably good idea of a language's intrinsic speed
    by looking at the speed of relatively simple loops. At
    the very top level, language speed essentially does
    not matter. The slowest ones were the Unix shell and
    UCB Logo, both about 10000 times slower than C; on a
    modern machine this may easily hide behind the time
    taken by a large mass of other code. Then you have
    code "outside the inner loop". In many cases you can
    accept slow code there. But things may be tricky. In
    an early version of the guessing code I had an access
    to a 2-dimensional array outside 2 nested loops, so
    rarely executed code whose speed should not matter.
    However, I cared about the speed of the guessing code
    and I profiled it. And this array access showed up in
    the profile. It was not the biggest item, just a few
    percent, but still surprisingly large (I changed how
    FriCAS array indexing works and now it is invisible in
    the profile). Python is about 100 times slower than C,
    so basically you want your inner loops to be in C (or
    C++, which is also easily callable from Python), that
    is, either use Python builtins or some support library
    exposed to Python. Sympy folks insisted on "pure
    Python", and Python builtins seem to be not such a
    good match for computer algebra, so there is a
    slowdown (and as a reaction there is a Sympy core
    project in C++ which is much faster for basic
    operations). Julia uses LLVM, and is unlikely to run
    faster than C. They claim good speed, but I do not
    know how good it is; their claims are consistent both
    with Julia being 2 times faster than SBCL and with it
    being 2 times slower...
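    A sketch of the kind of simple-loop microbenchmark meant here, using
    Python's timeit module (illustrative only; absolute numbers are
    machine-dependent, which is why only relative comparisons across
    languages are meaningful):

```python
import timeit

def inner_loop(n):
    # the kind of simple arithmetic loop used to probe a
    # language's intrinsic speed
    s = 0
    for i in range(n):
        s += i * i
    return s

# best-of-several timing, as the timeit docs recommend,
# to reduce noise from other processes
t = min(timeit.repeat(lambda: inner_loop(10_000), number=100, repeat=3))
```

    Running the same loop in C, SBCL, shell, etc. and comparing the best
    times gives the rough per-language factors quoted above.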

    Language speed is essentially meaningless without a
    profiler. Namely, without a profiler you are making
    blind changes, which frequently is a waste of time. Or
    you need to write a lot of instrumentation code. I
    find the SBCL profiler to be quite good. For other
    Lisps I was not able to find a really usable profiler.
    For Lisp, and also for interpreted languages, you need
    a dedicated profiler: in the case of Lisp, code in
    principle moves in memory, and for example on GCL the
    standard system profiler tells me about time spent in
    the GCL runtime, but not about time spent in my code.
    Similarly, for interpreted languages the system
    profiler tells me about time spent in the interpreter,
    but without info about which code was responsible for
    the interpreter time.
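    For interpreted languages a dedicated profiler such as Python's
    cProfile attributes time to user functions rather than to the
    interpreter loop. A minimal illustration (the workload is a made-up
    example echoing the 2-D indexing story above):

```python
import cProfile
import io
import pstats

def index_2d(data, ncols, i, j):
    # explicit row-major indexing into a flat list
    return data[i * ncols + j]

def workload():
    data = list(range(10_000))
    s = 0
    for i in range(100):
        for j in range(100):
            s += index_2d(data, 100, i, j)
    return s

prof = cProfile.Profile()
result = prof.runcall(workload)          # run under the profiler
buf = io.StringIO()
pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(5)
# buf.getvalue() now lists workload and index_2d with call counts
# and times, i.e. it shows where the time actually went
```

    This is the kind of evidence that revealed the "few percent" array
    access in the FriCAS guessing code.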

    These are all just running within constant multiplies of each other, if they use the same
    algorithms. And benchmarks tend to be misleading anyway.)

    Well, constants matter. 100 times may be a huge
    difference. Even 2 times may be significant. When
    Apple switched to gcc they gave as a reason a 10%
    improvement in code speed compared to the compiler
    they used previously (there were other reasons too,
    but speed was a significant factor).

    There are areas where interested programmers could add to
    a computer algebra system, and they might consider adding to Maxima; a few I've suggested
    include an improved interval arithmetic system, and a way of using an inverse symbolic
    calculator (see Stoutemyer's interesting paper https://arxiv.org/abs/2103.16720 )

    I am more struck by the fact that "new" CAS have rarely improved on those core
    capabilities, rarely moving in interesting directions.

    I am not sure what you mean by "new" CAS above. There
    is a bunch of systems that essentially take the view
    that core functionality does not matter much or can be
    delegated to other systems, and instead concentrate on
    packaging and presentation. Sadly, while not new,
    Maxima seems to exhibit an equivalent attitude ("all
    important core work was done in the seventies"). OTOH,
    there are new developments. I have no personal
    experience with Giac, but on some polynomial problems
    it gave impressive benchmark results. There are
    efficient libraries. The by-now classic NTL has, among
    others, good routines for factorization of polynomials
    over finite fields. There is Flint, which has good
    routines for univariate polynomials. Singular seems to
    have quite good Groebner bases. There are solvers for
    large equation systems with integer coefficients. Sage
    links to several other systems and libraries, and if
    you are in range of one of the specialized algorithms,
    then you get good performance.

    Among not-new systems, FriCAS has a guessing package
    (the main part of which is an efficient solver for
    large linear systems possessing special structure);
    GFUN offers a similar thing for Maple. Maple (and I
    suspect Magma and Mathematica too) made progress on
    multiplication, division, gcds and factorization for
    polynomials. There are also significant improvements
    in this area in FriCAS.

    The ones that have been mentioned previously in this
    thread. And some of them have made stumbling steps in
    wrong directions. When pointed out, they respond, in
    effect, in the immortal words of Peewee Herman,
    "I meant to do that"...
    https://gifs.com/gif/pee-wee-herman-i-meant-to-do-that-mPElxr

    Are there possible breakthroughs that will make all current CAS so obsolete that they must
    all be tossed in the trash? If so, I haven't seen them yet. Can current CAS be improved? Sure,
    but some improvements will be difficult.

    Richard Fateman






    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Klaus Hartlage@21:1/5 to Nasser M. Abbasi on Sun Feb 13 05:02:08 2022
    Nasser M. Abbasi schrieb am Sonntag, 16. Januar 2022 um 10:47:13 UTC+1:
    I tried to use Reduce itself, but could not figure out
    how to do some things with it, and there is no good
    documentation and no forum to ask. (I sent email to
    join the Reduce mailing list at sourceforge, but never
    got a reply.)

    There's a GUI for Reduce, which may help to get started much quicker?
    - https://fjwright.github.io/Run-REDUCE/

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From nobody@nowhere.invalid@21:1/5 to Klaus Hartlage on Tue Feb 15 14:12:38 2022
    [The following message made it to <aioe.org> but does not appear on
    Google Groups even though it was posted from Google; possibly because
    the author has deleted it there.]

    Klaus Hartlage schrieb:

    Nasser M. Abbasi schrieb am Sonntag, 16. Januar 2022 um 10:47:13 UTC+1:
    I tried to use Reduce itself, but could not figure out
    how to do some things with it, and there is no good
    documentation and no forum to ask. (I sent email to
    join the Reduce mailing list at sourceforge, but never
    got a reply.)

    There's a GUI for Reduce, which may help to get started much quicker?
    - https://fjwright.github.io/Run-REDUCE/


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Fateman@21:1/5 to anti...@math.uni.wroc.pl on Wed Feb 23 22:43:23 2022
    Again, apologies for a delayed response, but thanks for your long note!!!
    My comments are interspersed.
    rjf

    On Monday, February 7, 2022 at 6:32:30 PM UTC-8, anti...@math.uni.wroc.pl wrote:
    Richard Fateman <fat...@gmail.com> wrote:
    On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl wrote:
    Can Maxima do something special? AFAICS the core
    functionality of various CAS-es is similar (polynomial
    operations, equation solving, limits, integration,
    etc.) and in Maxima's case this part seems to be
    rather dated. It was state of the art in 1980, but
    there was (IMO significant) progress after that.

    Waldek Hebisch

    Hi Waldek & sci.math.symbolic.
    Apologies for the delayed reaction. I don't visit here that often.

    1. Maxima is written (mostly) in Lisp; the Lisp systems have gotten better in various ways and support
    more or better memory, bignum arithmetic, communication with systems in other languages, web stuff.
    Those changes seep into Maxima, sometimes, though slowed by the need to be platform agnostic.

    2. Some subsystems not necessarily in Lisp have also improved. For example the user interface
    wxmaxima is, I think, quite nice and getting nicer.
    Another example is the improvements in graphical display via improvements in gnuplot.
    There are also translations from Fortran of software -- like quadpack. If there were other pieces of
    software of interest written in Fortran, they might also be translated to Lisp and run in Maxima.
    I suspect that with modest alterations this code can be run using arbitrary-precision floats in Maxima.
    Well, when I looked at Fortran code, my conclusion was
    that a significant part cannot be _usefully_ run at
    arbitrary precision. For example, special functions
    and some quadratures use magic constants that need to
    be accurate to the required precision. Then there is
    the question of compute time; low-order methods scale
    quite badly with precision. For example, for
    double-float elliptic functions there is Carlson's
    code. This code needs a number of iterations growing
    linearly with precision. OTOH Gauss-Landen transforms
    need only a logarithmic number of iterations. So, for
    double precision FriCAS uses Carlson's method. But at
    arbitrary precision it uses Gauss-Landen transforms
    (with some parts from Carlson added to improve
    robustness). In FriCAS the Romberg method can
    nominally deliver arbitrary precision, but it is too
    slow to use. Variable-order Gauss methods are useful
    for arbitrary precision, but rather different from the
    fixed-precision ones.
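    The Gauss-Landen idea can be sketched in a few lines via the closely
    related arithmetic-geometric mean, which converges quadratically:
    each iteration roughly doubles the number of correct digits, so the
    iteration count grows only logarithmically with precision. A
    double-precision illustration (mine, not the FriCAS code):

```python
from math import pi, sqrt

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean. Quadratic convergence means the
    number of iterations grows like log(precision), in contrast to
    linearly-converging fixed-order methods."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2.0, sqrt(a * b)
    return (a + b) / 2.0

def ellipk(k):
    """Complete elliptic integral of the first kind via the
    classical identity K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    return pi / (2.0 * agm(1.0, sqrt(1.0 - k * k)))
```

    The same recurrence works at arbitrary precision (with `tol` scaled
    to the working precision), which is why this family of methods suits
    bigfloat arithmetic where Carlson-style linear iteration does not.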

    There are specific methods that work regardless of precision. One that I came across is
    for computing Bessel functions, designed by JCP Miller, and available (in Algol) in the
    Collected Algorithms of the ACM. A Google search will find many commentaries and
    elaborations. The iterative (or recursive) testing for termination generally is not
    specific to constants of particular precision.

    There is also another possible problem: if code is
    written using generic Lisp arithmetic, then for double
    precision it will be much slower than a specialised
    version. So you really want several versions of the
    code: real double precision, complex double precision,
    and separate arbitrary-precision version(s) (you can
    probably use a single version for real and complex
    arbitrary precision, but it may be simpler to have 2
    arbitrary-precision versions), and possibly also a
    single-precision version (ATM in FriCAS there is no
    support for single precision).

    This is certainly true, and I think it is a basis for the Julia fans to say they can
    make programs run faster by compiling -- on the fly -- specialized code for particular
    types. In lisp, there is certainly the option to compile multiple versions of the same
    procedure but with different type declarations. So I don't see that as a problem;
    more like an opportunity. In Maxima there has been specialized code for modular arithmetic
    in which the modulus is smaller than a machine word, and more general code when the
    modulus is larger. I haven't checked the code to see if it is still there.

    Other code is potentially called as foreign function libraries (e.g. Gnu MPFR). I suppose that
    any of the packages linked to (say) Sage or Julia could be called from Lisp, since there are
    existing interfaces to C and Python. I don't know if anyone has done this, but again to be part
    of the Maxima distribution it would have to be platform (hardware, software) agnostic.

    So are these "special"? I don't know for sure, but I think they are certainly not dated.
    Well, what you mention above are things outside the core. If you think
    that they are important, then you should really go with the "new" systems;
    AFAICS they have this part much better than Maxima.

    I doubt it. If all you want to do is write a C or Python program to do bignum arithmetic, and you
    are proficient in those languages, that might be better than using Maxima, especially if you
    are unwilling to read the documentation, or download the binary code. (Better to spend
    a week in the computer lab than an hour in the library.)

    OK, you put
    Lisp forward as an advantage; for a Lisp lover Maxima indeed has an
    advantage, as it is the system where Lisp is most visible. For me Lisp
    is just an implementation detail which should _not_ leak to the user
    level.

    It usually does not, in Maxima. But it is there in case a facility has not been raised up
    to the Maxima level. Say you want to trace the internal lisp function simptimes. You can
    descend to the lisp level from the Maxima command line with:
    :lisp (trace simptimes)

    3. There is a continuing effort by a host of people who provide fixes, enhancements, and applications
    in their own library public repositories. There are educational projects, and wholesale adoption of
    Maxima in schools and school districts. There is an active Maxima mailing list.
    You basically say that there is inertia: in the short
    term, for Maxima folks it is easier to continue to use
    Maxima than to switch to something else. True, inertia
    is a powerful force. But if you look at the number of
    people behind various systems, Maxima would look
    better than FriCAS, but worse than several "new"
    systems.

    How many people are using Maxima? Between 11/1/2021 and 2/1/2022 76,209 people downloaded
    the Windows version of the binary. It is hard to know how many Unix versions, since they are included
    in distributions not directly from sourceforge.

    4. If there were a set of standard benchmarks for "core" functionalities, there might be a basis for
    testing if Maxima's facilities were dated.
    In arxiv preprint 1611.02569 Parisse gives an example
    of polynomial factorization and proposes a special
    method to solve it. ATM FriCAS does not do very well
    on this example (2500s on my machine), but certainly
    better than systems that cannot do it at all. However,
    my estimate is that using up-to-date general methods
    it should take a fraction of a second (Magma was
    reported to need 3s, Parisse's special method 2s). So,
    I would say that possibly all systems have some work
    to do.

    Parisse states in his conclusion that he hopes that other open-source CAS will implement
    this method. It does not look too difficult to do in Maxima. Factorization time has not
    been raised as a limiting issue in any computation of interest that I am aware of. The particular
    test example is large.

    I am aware of the testing of indefinite integration of
    functions of a single variable, comparing Rubi to various other systems. I have some doubts about
    measurements of Maxima, since they are done through the partly-blinded eyes of Sage. I have run
    some of the "failed" Maxima tests through Maxima and found they succeed, and indeed find answers
    that are simpler and smaller than some of the competition. So I would not judge from this.
    The Rubi test suite has flaws, but I think that it actually overestimates
    Maxima's capabilities. Namely, examples in the Rubi test suite are
    arranged to be easily matchable to patterns.

    I don't understand this comment.

    That eliminates
    work that would be otherwise needed to discover the true structure
    of the integrand. AFAICS Maxima basically assumes that the integrand
    can be handled by simple-minded heuristics.

    This describes the first of 3 stages in the integration program. There are
    two more, described in J. Moses' paper in Comm. ACM. The third stage
    is a (partial) implementation of the Risch algorithm. It is possible
    that the decomposition of the integrand in a differential field would be
    more difficult starting from one form or another, but the integration
    program has simplifications such as radcan() at its disposal.

    This is true
    for the Rubi test suite, but fails for more general cases.
    Already random exp-log examples seem to cause trouble for Maxima.

    These probably should be reported as bugs, then.

    Let me add that working on integration I also test integration
    on various systems. My examples are probably harder than
    average.

    Average? For what population of integration tasks? Examples taken from homework problems in Freshman calculus?

    Anyway, it is pretty easy to come up with examples
    that Maxima or Reduce cannot do. Maple is much better
    there. And Mathematica is still better.

    Finding problems where Mathematica fails is not difficult. Improving Maxima to have
    a larger coverage of indefinite integrals is probably not hard either. I have been hoping
    that wholesale adoption of Rubi would make this unnecessary!

    While indefinite integration is an application that relies on a tower of algorithmic developments in
    symbolic mathematical systems, one that made researchers proud over the years -- starting as
    probably the first largish program written in Lisp (James Slagle's SAINT, 1961) --
    it is not much in demand by engineers and applied mathematicians. In fact the far more common problems of DEFINITE integration (in one or more variables) can
    usually be addressed by numerical quadrature. The reference handbooks of calculus formulas
    contain far more formulas for definite integrals [with parameters], involving special functions, and
    even so, they harken back to a time when a mathematician did not have access to computers.

    So while a core functionality of a CAS might be "integral calculus", it is partly a tribute to "we can mostly do this with what we built."
    more than "applied mathematicians asked us to do this for their daily work".
    In part it is a historical tribute to "we must be doing something hard because
    human calculus students struggle to do their homework problems, and maybe this is even Artificial Intelligence. And that is good."
    Well, when people need numbers they probably should use numeric
    methods. But some people need formulas and they hope that
    CAS will deliver formulas.

    This is unusual. It is possible to use plotting programs sometimes to gain some
    understanding of a result.

    Regardless of your view about the usefulness of
    indefinite integration, testing indefinite integration
    is useful. Namely, indefinite integration is easily
    testable, and to have good performance in general you
    need good algorithms. You may claim that those
    algorithms are very specialised. However, the core
    Risch algorithm is relatively simple. The difficulty
    is that you need a lot of support routines, which
    however are usable in more general contexts. In more
    detail, you need polynomial and rational function
    arithmetic, including gcd, resultants, factorization
    and partial fraction decomposition; this is usable
    almost everywhere in a CAS. Next, you need
    differential field routines; they are usable also for
    simplification, transcendental solving, differential
    equation solving, limits and power series expansion.

    I agree entirely that integration relies on many facilities. Some of them
    are rarely used elsewhere. Partial Fractions for instance.
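    For the simplest case (distinct linear factors over the rationals),
    partial fraction decomposition is just the residue formula; a small
    sketch (the function name is mine, illustrative only):

```python
from fractions import Fraction

def partial_fractions(num, roots):
    """Decompose num(x) / prod_r (x - r), with distinct roots r,
    into sum_r c_r / (x - r). By the residue formula,
    c_r = num(r) / prod_{s != r} (r - s)."""
    coeffs = {}
    for r in roots:
        denom = Fraction(1)
        for s in roots:
            if s != r:
                denom *= r - s
        coeffs[r] = Fraction(num(r), 1) / denom
    return coeffs

# 1 / ((x - 1)(x - 2))  =  -1/(x - 1)  +  1/(x - 2)
c = partial_fractions(lambda x: 1, [Fraction(1), Fraction(2)])
```

    Repeated and irreducible factors need the full Hermite/square-free
    machinery, which is where the shared CAS infrastructure (gcd,
    factorization) comes in.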

    If some of the newer CAS have "better" core algorithms like -- polynomial multiplication,
    polynomial GCD, expansion in Taylor series, it would be interesting to take note, and if so
    with substantial likelihood they can be inserted into Maxima, or added in via libraries.
    Hmm, you should know about Monagan and Pearce's work
    on polynomial operations. It is now included in Maple
    and gives a significant speedup on large polynomials.

    I am aware of their results and corresponded with them. The point you make is correct...
    "speedup on large polynomials". In my careful implementation of their ideas, the results were a slowdown on all problems that were NOT exceedingly large. And
    so not of great interest. Figuring comparisons of Maple to other systems is hindered by the miscellaneous programming and data-structure decisions in Maple. Thus programs written in the underlying language (Margay? is that still in use?)
    and in the Maple "user" language run at quite different rates. Thus (only) in Maple
    did it make sense to map certain polynomial problems into packed
    integer problems. The integer computations were in the core, the polynomial problems were not. I don't know if this is still a prevalent feature of
    the current Maple.

    There is a long series of works by
    Monagan and collaborators on polynomial GCDs and factorisation
    giving significant speedups compared to earlier methods.

    I have expressed my concern above; a heuristic GCD that maps polynomials
    into (big) integers is the kind of "optimization" that might work in Maple and nowhere else. Or maybe it does work ...
    See this 1995 paper
    https://dl.acm.org/doi/abs/10.1145/220346.220376
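    The heuristic-GCD idea discussed here, sketched for univariate
    integer polynomials (illustrative; a real implementation must verify
    the candidate by trial division and retry with a new evaluation point
    on failure):

```python
from math import gcd

def poly_eval(p, x):
    """Evaluate p (coefficient list, lowest degree first) at integer x."""
    v = 0
    for c in reversed(p):
        v = v * x + c
    return v

def from_int(n, xi):
    """Read the balanced base-xi digits of n back off as coefficients."""
    coeffs = []
    while n:
        c = n % xi
        if c > xi // 2:
            c -= xi          # balanced digit in (-xi/2, xi/2]
        coeffs.append(c)
        n = (n - c) // xi
    return coeffs or [0]

def heugcd(p, q):
    """Heuristic GCD in the Char/Geddes/Gonnet style: evaluate both
    polynomials at one large integer, take an integer gcd, and
    reconstruct polynomial coefficients from its digits.
    WARNING: may return a wrong multiple; verify by trial division."""
    xi = 2 * max(map(abs, p + q)) + 29   # evaluation-point heuristic
    return from_int(gcd(poly_eval(p, xi), poly_eval(q, xi)), xi)

# gcd((x+1)(x+2), (x+1)(x+3)) = x + 1, i.e. coefficients [1, 1]
g = heugcd([2, 3, 1], [3, 4, 1])
```

    The appeal in Maple is that the integer gcd runs in the fast bignum
    kernel; whether that trade-off wins elsewhere is exactly the question
    raised above.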

    IIUC several of them are included in Maple. Some
    variations are implemented in FriCAS. As an
    illustration let me say that for GCDs with algebraic
    coefficients FriCAS previously used subresultant GCD
    (claimed to be a very good implementation of
    subresultant GCD). Switching to modular methods in the
    style of Monagan and van Hoeij gave a large speedup.
    My estimates suggest that using a different modular
    method should be much faster. Monagan shows an example
    where factorization in an algebraic extension using
    the older method is prohibitively expensive, but the
    newer method works quite well. In fact, in all his
    papers he gives examples, and IMO it is clear that
    what he and his collaborators implemented gives
    significant progress.

    I would not endorse an algorithm for general use if it has only been implemented in Maple. It has to be tested on a more neutral
    platform.

    I looked a bit at the Maxima multivariate
    factorization code. AFAICS this is the early Wang
    method, trying to use zero as the evaluation point.
    This is reasonably fast if you are lucky. But it may
    be quite slow otherwise. These days there seems to be
    agreement that zero evals should be avoided; there is
    too high a risk of degraded performance if you try
    them.

    If there is a better way to get around unlucky evaluation points than
    is currently programmed in Maxima, perhaps someone will
    implement it.

    Anyway, already in the classic case of integer
    coefficients Maxima lacks improvements in handling
    harder cases. For algebraic coefficients IIUC there is
    a problem of correctness (IIUC some Maxima methods
    silently produce wrong results) and no (correct)
    support for modular methods.

    If you are aware of bugs, it should be easy to report them.

    A state-of-the-art approach would use Hensel lifting
    directly for algebraic coefficients. And there is a
    well-developed methodology for exploiting sparsity
    with a non-zero evaluation point. There are heuristics
    for early failure and restart (to avoid huge loss of
    time due to a bad eval). There are new approaches to
    the leading coefficient problem.

    Again, reporting of bugs should be easy.


    Of course, one could try to add this to the existing
    routines. But IMO the needed changes are so large that
    it makes more sense to start from scratch.

    This hardly makes sense to me. Why should you have to write
    a new computer algebra system to write an improved
    polynomial factoring program?

    In FriCAS the code was newer and much more readable
    than in Maxima. Still, parts of the new approach did
    not fit well into the existing schema, so I had to add
    new parallel modules. This is an ongoing change (at a
    relatively early stage) and currently the old code is
    used together with the new one, but in the end it is
    possible that no old factorization/gcd code will
    survive.

    The readability of code in Maxima probably depends on one's
    familiarity with the general framework. Not having seen FriCAS
    code, I cannot compare it to Maxima. I would not, however,
    be surprised if FriCAS were more readable than code written
    in 1969 or so.

    Concerning power series expansions: AFAICS FriCAS
    routines are much better than Maxima routines in this
    area. Part of this is that FriCAS uses a lazy
    approach. Part is that FriCAS has special handling for
    many cases (so more code, but better performance).

    Speed for truncated Taylor series (if that is what you mean)
    has not usually been an issue. Maxima has other routines to
    compute the general term of a power series (infinite series).
    I don't know if FriCAS does that.


    As another example let me mention that recently on the
    Maxima list there was a guy with a system of equations
    which Maxima could not solve (IIRC Maxima ran out of
    memory). I tried this system in FriCAS. After a little
    modification (which matches the stated intent) the
    system was easily solvable (a few seconds) in FriCAS.
    The original system was harder (minutes or maybe tens
    of minutes) but worked too. On the Maxima list the
    advice was: "you do not need to solve it, the answer
    would be too complicated to see anything". Well, the
    FriCAS answer was largish and gave me no
    enlightenment. But sometimes answers are small despite
    the large computations needed to obtain them.

    Yes, sometimes. A system that may be many times faster again than FriCAS
    is Fermat. I think that it would be worth testing, if you want to see
    how fast is fast.
    https://home.bway.net/lewis/

    Sometimes a large answer may give an important clue.
    And in many cases it is possible to do further
    calculations with such an answer.

    So "you do not need this" is a poor excuse; a system
    that can give an answer is better than one which
    cannot.

    On the one hand, that is true. On the other, telling someone that a problem should be reformulated for easier computation is also a good idea.

    For instance, improved algebraic system solving, limits (e.g. D. Gruntz), manipulation of
    special functions. The common notion that "Lisp is slow" and "C is fast" and that therefore
    doing nothing other than writing in C is a step forward, I think is wrong.
    Unfortunately Lisp implementations are slow. IME the fastest is SBCL,
    where normally I can get about half the speed of comparable C code.

    So why are people writing in Python, which is (apparently) quite slow?

    But at the Lisp level this code has complete type declarations,
    and uses specialized arrays and machine-compatible types. SBCL gives
    slower code than C simply because the SBCL code generator cannot
    optimize as well as the optimizers in C compilers.

    I have looked at SBCL compiler generated code, which is easily viewed
    by the lisp disassemble function. Complaints about SBCL code
    can be sent to the SBCL mailing list.

    In principle,
    on such code ECL or GCL should do better because the code could be
    easily 1-1 translated to C. But somehow, both ECL and GCL
    insert extra operations which kill performance.

    Lisp compilers (some of them, anyway) generate assembly language.

    However, even that
    estimate is somewhat optimistic: when I really need performance in
    C I can tweak the code to use, say, SSE instructions, doing 2
    or 4 times more work per instruction. Also, in gcc I have
    easy access to the high part of the product of 64-bit numbers, which
    in several problems roughly doubles performance compared to
    a 64-bit product. So, using C, in many cases I could get 4-8
    times faster code than going via SBCL.

    If you wish to write code in assembler, I assume that it can be
    linked to Lisp as well.

    Recently I needed
    to work with 2-dimensional arrays. To my dismay I discovered
    that my code was much slower than I hoped, about 6 or 8
    times slower than comparable C. It turned out that SBCL
    (at least the version that I used) was unable to optimize
    indexing for 2-dimensional arrays and dutifully generated code
    repeatedly fetching the dimension from the array object and
    performing the multiplication implied by indexing...

    Another thing is that Lisp encourages writing "generic" code,
    which works for several types. That is nice for non-critical
    code, but there is a rather large slowdown compared to well-typed
    code with enough type info.

    There is an advantage to generic code that works. Then you add
    declarations to make it faster. There is an argument that type
    declarations make it easier to debug. Maybe. I think that in some
    cases the type declarations ARE the bug.

    ATM I am staying with SBCL, but you can see that when it comes
    to the speed of low-level code, Lisp is a problem.

    I am not convinced that speed is such a problem. If it were, we would
    all be using FERMAT.

    Concerning writing everything in C: I think that this approach
    is wrong. It makes sense if you write a low-level library that
    should be callable from many languages, because calling C
    is relatively easy while calls between different higher-level
    languages are problematic.
    (People say Python and sympy
    are slower than C, maybe slower than Lisp, Julia is faster than Python or maybe faster than
    C.
    I did many microbenchmarks and also looked at correlation with
    runtime for applications. One can get a reasonably good idea
    of a language's intrinsic speed by looking at the speed of relatively
    simple loops. At the very top level, language speed essentially
    does not matter. The slowest ones were Unix shell and UCB Logo,
    both about 10000 times slower than C; on a modern machine
    this may easily hide behind the time taken by the large mass of
    other code. Then you have code "outside the inner loop". In
    many cases you can accept slow code there. But things
    may be tricky. In an early version of the guessing code I had
    a 2-dimensional array access outside 2 nested loops.
    So this was rarely executed code and its speed should not matter.
    However, I cared about the speed of the guessing code and I
    profiled it. And this array access showed up in the profile.
    This was not the biggest item, just a few percent, but still
    surprisingly large (I changed how FriCAS array indexing works
    and now it is invisible in the profile). Python is about 100
    times slower than C, so basically you want your inner loops
    to be in C (or C++, which is also easily callable from Python),
    that is, either use Python builtins or some support library
    exposed to Python. Sympy folks insisted on "pure Python",
    and Python builtins seem to be not such a good match for
    computer algebra, so there is a slowdown (and as a reaction there
    is a Sympy core project in C++ which is much faster for basic operations).
    Julia uses LLVM, and is unlikely to run faster than C.
    They claim good speed, but I do not know how good it is;
    their claims are consistent both with Julia being 2 times
    faster than SBCL and also with 2 times slower...

    I do not have any personal experience with the Julia language.
    Since Julia changes and SBCL changes, comparisons may
    vary from time to time.
    Again, if speed is the prime issue for some computation,
    maybe you should try FERMAT.

    Language speed is essentially meaningless without a profiler.
    Namely, without a profiler you are making blind changes, which
    is frequently a waste of time.

    I agree.

    Or you need to write a lot
    of instrumentation code. I find the SBCL profiler to be quite
    good. For other Lisps I was not able to find a really
    usable profiler. For Lisp and also for interpreted languages
    you need a dedicated profiler: in the case of Lisp, code in
    principle moves in memory, and for example on GCL the standard
    system profiler tells me about time spent in the GCL runtime,
    but not about time spent in my code. Similarly, for
    interpreted languages the system profiler tells me about time
    spent in the interpreter, but without info about which code was
    responsible for the interpreter time.
    These are all just running within constant multiples of each other, if they use the same
    algorithms. And benchmarks tend to be misleading anyway.)
    Well, constants matter. 100 times may be a huge difference. Even
    2 times may be significant. When Apple switched to gcc they
    gave as a reason a 10% improvement in code speed compared to the compiler
    they used previously (there were other reasons too, but speed
    was a significant factor).
    There are areas where interested programmers could add to
    a computer algebra system, and they might consider adding to Maxima; a few I've suggested
    include an improved interval arithmetic system, and a way of using an inverse symbolic
    calculator (see Stoutemyer's interesting paper https://arxiv.org/abs/2103.16720 )

    I am more struck by the fact that "new" CAS have rarely improved on those core
    capabilities, rarely moving in interesting directions.
    I am not sure what you mean by "new" CAS above.

    Any system that is newer than Macsyma circa 1970, or Reduce of that time.

    In particular, Mathematica and Maple are new. Even though they are now rather old compared to (say) Sympy.

    There is a bunch of
    systems that essentially take the view that core functionality does not
    matter much or can be delegated to other systems, and instead concentrate
    on packaging and presentation.

    Yes, all the world is a web app.

    Sadly, while not new, Maxima seems
    to exhibit an equivalent attitude ("all important core work was done in the seventies").

    I don't quite agree. I would say "If your system is a re-implementation of
    work that has been in open-source systems since 1970, what is your added value?"

    OTOH, there are new developments. I have no personal
    experience with Giac, but on some polynomial problems it gave
    impressive benchmark results. There are efficient libraries.
    The by-now-classic NTL has, among other good routines, factorization
    of polynomials over finite fields. There is Flint, which has
    good routines for univariate polynomials. Singular seems to
    have quite good Groebner bases. There are solvers for large
    equation systems with integer coefficients. Sage links
    to several other systems and libraries, and if you are in
    range of one of the specialized algorithms, then you get good
    performance.

    Again, I would mention FERMAT if you consider as a "new development"
    an implementation that computes the same thing, but faster.

    Some of the features of the systems you mention are essentially
    aimed at a small target audience, consisting mostly of specialists in certain branches of
    pure mathematics.

    Among not-new systems, FriCAS has a guessing package (the main part of
    which is an efficient solver for large linear systems possessing special structure); GFUN offers a similar thing for Maple. Maple
    (and I suspect Magma and Mathematica too) made progress on multiplication, division, gcds and factorization for polynomials. There are
    also significant improvements in this area in FriCAS.

    I would guess that for most users of Maxima, the system responds
    pretty much instantaneously to most commands. This is not to say efficiency
    is of no concern, but it is certainly less important than correctness and coverage of applied mathematics or other areas.

    The ones that have been mentioned previously in
    this thread. And some of them have made stumbling steps in wrong directions.
    When pointed out, they respond, in effect, in the immortal words of Peewee Herman,
    "I meant to do that"... https://gifs.com/gif/pee-wee-herman-i-meant-to-do-that-mPElxr

    Are there possible breakthroughs that will make all current CAS so obsolete that they must
    all be tossed in the trash? If so, I haven't seen them yet. Can current CAS be improved? Sure,
    but some improvements will be difficult.

    Richard Fateman





    --
    Waldek Hebisch

    Thanks; I hope we can continue this!
    RJF

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From antispam@math.uni.wroc.pl@21:1/5 to Richard Fateman on Sat Feb 26 07:01:50 2022
    Richard Fateman <fateman@gmail.com> wrote:
    Again, apologies for a delayed response, but thanks for your long note!!!
    My comments are interspersed.
    rjf

    On Monday, February 7, 2022 at 6:32:30 PM UTC-8, anti...@math.uni.wroc.pl wrote:
    Richard Fateman <fat...@gmail.com> wrote:
    On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl wrote:
    There is also another possible problem: if code is written as
    generic Lisp arithmetic, then for double precision it will be
    much slower than specialised code. So you really want several
    versions of the code: real double precision, complex double precision
    and separate arbitrary precision version(s) (you can probably use
    a single version for real and complex arbitrary precision, but
    it may be simpler to have 2 arbitrary precision versions) and
    possibly also a single precision version (ATM in FriCAS there is
    no support for single precision).

    This is certainly true, and I think it is a basis for the Julia fans to say they can
    make programs run faster by compiling -- on the fly -- specialized code for particular
    types. In lisp, there is certainly the option to compile multiple versions of the same
    procedure but with different type declarations. So I don't see that as a problem;
    more like an opportunity.

    Well, the point is that to get good performance you need to do more
    work than mechanical translation.

    Other code is potentially called as foreign function libraries (e.g. Gnu MPFR). I suppose that
    any of the packages linked to (say) Sage or Julia could be called from Lisp, since there are
    existing interfaces to C and Python. I don't know if anyone has done this, but again to be part
    of the Maxima distribution it would have to be platform (hardware, software) agnostic.

    So are these "special"? I don't know for sure, but I think they are certainly not dated.
    Well, what you mention above are things outside the core. If you think
    that they are important, then you should really go with "new" systems; AFAICS they have this part much better than Maxima.

    I doubt it.

    What do you mean by "it"? That "new" systems have better non-core
    parts?

    If all you want to do is write a C or Python program to do bignum arithmetic, and you
    are proficient in those languages, that might be better than using Maxima, especially if you
    are unwilling to read the documentation, or download the binary code. (Better to spend
    a week in the computer lab than an hour in the library.)

    I am not sure what binary code changes here.

    3. There is a continuing effort by a host of people who provide fixes, enhancements, and applications
    in their own library public repositories. There are educational projects, and wholesale adoption of
    Maxima in schools and school districts. There is an active Maxima mailing list.
    You basically say that there is inertia: in the short term, for Maxima
    folks it is easier to continue to use Maxima than to switch to
    something else. True, inertia is a powerful force. But if you
    look at the number of people behind various systems, Maxima would
    look better than FriCAS, but worse than several "new" systems.

    How many people are using Maxima? Between 11/1/2021 and 2/1/2022 76,209 people downloaded
    the Windows version of the binary. It is hard to know how many Unix versions, since they are included
    in distributions not directly from sourceforge.

    If you look at improvements, what matters is people who engage in development. AFAICS Sympy has more developers than Maxima. Sage definitely has _many_
    more developers. Even Julia symbolics, which up to now is fairly incomplete, seems to have several developers.

    4. If there were a set of standard benchmarks for "core" functionalities, there might be a basis for
    testing if Maxima's facilities were dated.
    In arxiv preprint 1611.02569 Parisse gives an example of polynomial factorization and proposes a special method to solve it. ATM FriCAS
    does not do very well on this example (2500s on my machine), but
    certainly better than systems that cannot do it at all. However,
    my estimate is that using up-to-date general methods it should take a fraction of a second (Magma was reported to need 3s, Parisse's special
    method 2s). So, I would say that possibly all systems have
    some work to do.

    Parisse states in his conclusion that he hopes that other open-source CAS will implement
    this method. It does not look too difficult to do in Maxima.

    My advice is to ignore his method. Instead use state-of-the-art general
    Hensel lifting. This example is very sparse; general Hensel lifting
    should work well also on less sparse examples or even completely dense
    ones (of course the cost grows with the number of terms). AFAICS the potentially problematic parts in factorization are factor recombination and
    the leading coefficient problem. In Parisse's approach you need to
    solve these several times. In standard Hensel lifting one can
    use an old Kaltofen trick to limit both problems to essentially a
    single stage (there is some work in each stage, but it gets
    much easier after the first stage).

    Factorization time has not
    been raised as a limiting issue in a computation of interest, that I am aware of. The particular
    test example is large.

    I do not think the example is large. A similar thing may appear from a
    rather small problem due to an unfortunate choice of parametrization
    (the point is that _after_ factoring it may be clear how to
    change the parametrization to make the problem smaller; before, it may
    not be so clear).

    Concerning the "limiting issue", existing codes in many cases avoided
    factorization due to the old myth of "expensive factorization". With
    faster factorization you can apply it in more cases. As to
    more concrete examples, general testing for algebraic dependence
    (in the presence of nested roots) depends on factorization, and
    AFAICS factorization is limiting the ability to handle algebraic
    dependencies. In a similar spirit, computation of Galois groups
    needs factorization. In both cases you need algebraic
    coefficients, which tend to make factorization much more
    costly. If you use Trager's method for handling algebraic
    coefficients you get a _much_ larger polynomial as the norm.

    I am aware of the testing of indefinite integration of
    functions of a single variable, comparing Rubi to various other systems. I have some doubts about
    measurements of Maxima, since they are done through the partly-blinded eyes of Sage. I have run
    some of the "failed" Maxima tests through Maxima and found they succeed, and indeed find answers
    that are simpler and smaller than some of the competition. So I would not judge from this.
    The Rubi testsuite has flaws, but I think that it actually overestimates
    Maxima's capabilities. Namely, examples in the Rubi testsuite are
    arranged to be easily matchable to patterns.

    I don't understand this comment.

    1) Rubi examples are essentially all written in factored form.
    Have you (or somebody from the Maxima team) tried to expand Rubi
    examples before integration? I did not do any systematic
    testing of this sort, but in a few cases I noted that expanding
    the integrand turned something that Maxima can do into a case that
    it cannot do.

    2) In general there are difficulties due to constants. In the Rubi
    testsuite almost all constants are symbolic parameters. This
    means that a pattern matcher can easily handle them without the need
    for additional computations.

    3) Difficult (and IMO interesting) cases in symbolic integration
    involve sums with partial cancellations between terms. Such
    sums are especially difficult for pattern matchers. Almost
    all cases in the Rubi testsuite match a single rule or a single
    chain of rules, so there is no such difficulty in the Rubi testsuite.

    That eliminates
    work that would otherwise be needed to discover the true structure
    of the integrand. AFAICS Maxima basically assumes that the integrand
    can be handled by simple-minded heuristics.

    This describes the first of 3 stages in the integration program. There are two more, described in J. Moses' paper in Comm. ACM. The third stage
    is a (partial) implementation of the Risch algorithm.

    1) AFAICS the "Risch algorithm" in Maxima is so incomplete that it really is
    just another heuristic. IIUC Maxima completely misses the work of Rothstein,
    which is the start of the modern "Risch algorithm".
    2) The Risch algorithm can be extended to special functions. Completeness
    of the extended algorithm is not clear (polylogs and elliptic integrals
    are tricky there), but IMO it should have more coverage than
    pattern matchers.

    It is possible
    that the decomposition of the integrand in a differential field would be
    more difficult starting from one form or another, but the integration
    program has simplifications such as radcan() at its disposal.

    Doing simplifications in the middle of integration really is asking
    for trouble. The integrator needs full control of the form of the integrand;
    any transformation should be part of the integration process. Doing
    otherwise plants bugs.

    This is true
    for the Rubi testsuite, but fails for more general cases.
    Already random exp-log examples seem to cause trouble for Maxima.

    These probably should be reported as bugs, then.

    You have an example in the thread that I started. A full run was done by
    Nasser.

    Let me add that working on integration I also test integration
    on various systems. My examples are probably harder than
    average.

    Average? For what population of integration tasks? Examples taken from homework problems in Freshman calculus?

    Integration tests that I have seen.

    Anyway, it is pretty easy to come up with examples
    that Maxima or Reduce cannot do. Maple is much better
    there. And Mathematica is still better.

    Finding problems where Mathematica fails is not difficult. Improving Maxima to have
    a larger coverage of indefinite integrals is probably not hard either. I have been hoping
    that wholesale adoption of Rubi would make this unnecessary!

    Well, IMO Rubi (and pattern-matching integrators in general) is a dead end.
    More precisely, pattern matchers are good if you have somewhat isolated example(s) that you really want to handle. Axiom used to have
    7 patterns for popular special functions. One of the patterns matched "exp(-x^2)" and produced the error function. Actually, the pattern was
    a bit more general and was hooked in as a "last chance" step of the Risch
    algorithm, so it covered many more forms. Still, it was easy to
    fool. AFAICS the main reason for the pattern was that many folks heard
    that "exp(-x^2)" was "impossible" and would try it. A system that
    would return an unevaluated result in such a case would make a bad
    impression compared to other systems. In FriCAS this is replaced
    by an algorithm. The current implementation is still incomplete, but
    AFAICS it covers all that the pattern could hope to cover and many
    things which are tricky for pattern matchers. In the current
    development version of FriCAS pattern matching is no longer
    used for indefinite integrals.

    My point is that with a pattern matcher you cannot hope for wide
    coverage. Rubi currently has about 5000 rules which take more
    than 20000 lines. This is almost twice as big as the FriCAS integrator.
    Yet to get a "finite task" Rubi needs to drastically limit the possible
    forms of integrands.

    If some of the newer CAS have "better" core algorithms like -- polynomial multiplication,
    polynomial GCD, expansion in Taylor series, it would be interesting to take note, and if so
    with substantial likelihood they can be inserted into Maxima, or added in via libraries.
    Hmm, you should know about Monagan and Pearce's work on polynomial operations. It is now included in Maple and gives a significant
    speedup on large polynomials.

    I am aware of their results and corresponded with them. The point you make is correct...
    "speedup on large polynomials". In my careful implementation of their ideas,
    the results were a slowdown on all problems that were NOT exceedingly large. And
    so not of great interest.

    Well, I am not sure how "careful" your implementation was. IMO to
    get results comparable to Monagan and Pearce would require substantial
    effort, with a lot of attention to detail. BTW: AFAICS their data
    structures are somewhat hostile to Lisp. I do not see how to get
    comparable speed from a Lisp implementation. And in the (unlikely) case
    that you coded in C, there are performance traps in C, so
    a lot of care is needed for good performance.

    I agree that for a CAS smaller polynomials are quite important, and most of
    my effort is on small and moderately large polynomials. But I would
    not dismiss "exceedingly large" polynomials. In equation
    solving, large polynomials routinely appear at intermediate stages.

    Figuring comparisons of Maple to other systems is
    hindered by the miscellaneous programming and data-structure decisions in Maple. Thus programs written in the underlying language (Margay? is that still in use?)

    Hmm, all the Maple literature that I saw claimed a "kernel" written in C.

    and in the Maple "user" language run at quite different rates.

    The Maple user language is interpreted, so clearly slower than C.
    Probably comparable in speed to Python...

    Thus (only) in Maple
    did it make sense to map certain polynomial problems into packed
    integer problems. The integer computations were in the core, the polynomial problems were not. I don't know if this is still a prevalent feature of
    the current Maple.

    IIUC for a long time they have had univariate polynomials in the core. OTOH
    you wrote a paper about using GMP for polynomial operations, and some
    other systems use this method.

    There is a long cycle of work by
    Monagan and collaborators on polynomial GCDs and factorization
    giving significant speedups compared to earlier methods.

    I have expressed my concern above; a heuristic GCD that maps polynomials
    into (big) integers is the kind of "optimization" that might work in Maple and
    nowhere else. Or maybe it does work ...
    See this 1995 paper
    https://dl.acm.org/doi/abs/10.1145/220346.220376

    I read this paper a long time ago. Concerning heuristic GCD:
    FriCAS uses it for univariate polynomials with integer coefficients.
    I have compared it with alternatives and it was clearly superior
    on my testcases. One reason is that there is a widely available,
    well-optimized implementation of bignum arithmetic, namely GMP.
    Compared to that, polynomial libraries have simpler and much
    less optimized code. But there is also a somewhat deeper reason,
    namely current computer architecture. At a theoretical level,
    bignum operations and operations on univariate polynomials
    modulo a small prime can use essentially equivalent methods.
    A textbook may prefer to discuss polynomials, as there is less
    trouble due to carries. However, carries merely complicate
    code, with small performance impact. Other operations in
    the integer version can be quite efficient. OTOH the division needed
    in the modular approach is slow on modern hardware. Modular algorithms
    frequently play tricks to reduce the cost of division, but it is
    still significant compared to the costs in bigint arithmetic.
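    The heuristic GCD under discussion (map each polynomial to a big integer by evaluating at a large point, take an integer gcd, read the coefficients back off as balanced digits, then verify by trial division) fits in a few lines. A sketch for univariate integer polynomials, illustrative only and not the FriCAS or Maple code:

```python
from math import gcd
from functools import reduce

def poly_eval(p, x):
    """Horner evaluation; p[i] is the coefficient of x^i."""
    r = 0
    for c in reversed(p):
        r = r * x + c
    return r

def from_digits(v, xi):
    """Recover candidate coefficients as balanced base-xi digits of v."""
    coeffs = []
    while v != 0:
        d = v % xi
        if d > xi // 2:
            d -= xi              # balanced digit in (-xi/2, xi/2]
        coeffs.append(d)
        v = (v - d) // xi
    return coeffs or [0]

def divides(d, p):
    """Exact trial division of p by d over Z[x]."""
    p = p[:]
    while len(p) >= len(d) and any(p):
        if p[-1] % d[-1]:
            return False
        q, shift = p[-1] // d[-1], len(p) - len(d)
        for i, c in enumerate(d):
            p[shift + i] -= q * c
        while len(p) > 1 and p[-1] == 0:
            p.pop()
    return not any(p)

def heugcd(f, g):
    """Heuristic gcd of primitive polynomials over Z; None on failure."""
    xi = 2 * max(map(abs, f + g)) + 2        # evaluation point
    for _ in range(6):
        h = from_digits(gcd(poly_eval(f, xi), poly_eval(g, xi)), xi)
        c = reduce(gcd, h)                   # strip integer content
        if c:
            h = [t // c for t in h]
            if divides(h, f) and divides(h, g):
                return h
        xi = xi * 73 // 27 + 1               # retry with another point
    return None

# gcd of (x+1)(x-2) and (x+1)(x+3), coefficients listed low degree first
print(heugcd([-2, -1, 1], [3, 4, 1]))        # [1, 1], i.e. x + 1
```

    The bignum-friendly shape of the algorithm is visible here: all the heavy work is one integer gcd on values produced by two Horner evaluations, exactly where GMP shines.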

    For multivariate cases IMO a variation of Zippel's method is likely to
    be fastest.

    IIUC
    several of them are included in Maple. Some variations are
    implemented in FriCAS. As an illustration let me say that
    for GCDs with algebraic coefficients FriCAS previously used
    subresultant GCD (claimed to be a very good implementation
    of subresultant GCD). Switching to modular methods in the
    style of Monagan and van Hoeij gave a large speedup. My
    estimates suggest that using a different modular method
    should be much faster. Monagan shows an example where
    factorization in an algebraic extension using the older method
    is prohibitively expensive, but the newer method works quite
    well. In fact, in all his papers he gives examples, and IMO
    it is clear that what he and his collaborators implemented
    gives significant progress.

    I would not endorse an algorithm for general use if it has only been implemented in Maple. It has to be tested on a more neutral
    platform.

    Normally Monagan gives all the data needed to judge a proposed
    method regardless of your opinion about Maple. In the case
    of GCD with algebraic coefficients I can say more: the implementation
    of the Monagan and van Hoeij method in FriCAS works essentially
    as advertised in the paper. In the case of factorization with
    coefficients in an algebraic extension, the example from the paper
    is currently prohibitively expensive in FriCAS, while they
    gave a quite reasonable runtime for the new method (and my estimates
    indicate that their method would be a significant improvement
    for FriCAS).

    Some authors give only relative times; that is system specific,
    and the claimed improvement may be due to a bad implementation of
    the old method. But Monagan gives _much_ more info.

    I looked a bit at the Maxima multivariate factorization code.
    AFAICS this is the early Wang method, trying to use zero as the
    evaluation point. This is reasonably fast if you are lucky.
    But it may be quite slow otherwise. In modern times there
    seems to be agreement that zero evaluations should be avoided;
    there is too high a risk of degraded performance if you try
    them.

    If there is a better way to get around unlucky evaluation points than
    is currently programmed in Maxima, perhaps someone will
    implement it.

    AFAICS among free systems FriCAS and Giac already implement a
    better method (there is much to be improved in the FriCAS
    factorizer, but this part was done long ago).

    Anyway, already in the classic case of integer coefficients
    Maxima lacks improvements concerning the handling of harder cases.
    For algebraic coefficients IIUC there is a problem of
    correctness (IIUC some Maxima methods silently produce
    wrong results) and no (correct) support for modular
    methods.

    If you are aware of bugs, it should be easy to report them.

    My info is based on public messages from Maxima developers.
    IIUC the bugs are in the Maxima bugtracker.

    A state-of-the-art approach would use Hensel
    lifting directly for algebraic coefficients. And
    there is a well-developed methodology for exploiting
    sparsity with a non-zero evaluation point. There are
    heuristics for early failure and restart (to avoid a
    huge loss of time due to a bad evaluation). There are new
    approaches to the leading coefficient problem.

    Again, reporting of bugs should be easy.

    Well, I reported bugs when I was using Maxima. Concerning
    newer methods, it is for the Maxima developers to do the
    work. IME telling folks "you should implement algorithm X"
    is almost useless: either somebody is willing and able and
    will do it anyway, or the thing will be postponed indefinitely.

    Of course, one could try to add this to existing routines.
    But IMO needed changes are so large that it makes more
    sense to start from scratch.

    This hardly makes sense to me. Why should you have to write
    a new computer algebra system to write an improved
    polynomial factoring program?

    Well, that is not only factoring but also gcd and other polynomial
    routines. And you need support code like a linear equation
    solver. In all, probably between 5-20 kloc. You could
    replace the existing Maxima polynomial code with new code. My
    point was that the existing Maxima factorizer (and surrounding
    routines) is unlikely to help in writing a new factorizer.
    And my impression is that doing such a replacement in Maxima is
    harder than in some other systems.

    Concerning power series expansions: AFAICS FriCAS
    routines are much better than Maxima routines in this
    area. Part of this is that FriCAS uses a lazy approach.
    Part is that FriCAS has special handling for many
    cases (so more code, but better performance).

    Speed for truncated Taylor series (if that is what you mean)
    has not usually been an issue. Maxima has other routines to
    compute the general term of a power series (infinite series).
    I don't know if FriCAS does that.

    1) FriCAS series are lazy, which in particular means that all
    terms that are delivered are correct. This is unlike
    Maxima-style truncation, where cancellation may mean that the
    delivered result has lower than the requested accuracy.
    Note: I wrote the above mainly to explain why I am
    reluctant to use the word "truncated" when describing FriCAS
    series.
    2) In the past I tried expansion of a relatively simple
    function. IIRC the function fit in a single line and the desired
    part of the expansion was less than a screen. Maxima needed
    5 min to deliver a few terms, and for me it was clear that
    Maxima was unable to deliver the desired result in reasonable
    time. This was several years ago, so now this part of
    Maxima may be faster. But for me it made much more sense
    to improve a thing that was already good (namely FriCAS) than to
    try to fix extensive problems with the Maxima implementation...
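    The lazy-series behaviour described in 1) is easy to illustrate with generators: each coefficient is computed only when demanded, so every delivered term is exact and there is no fixed truncation order. A minimal sketch in Python with hypothetical names, for illustration only (FriCAS implements this in Spad with far more machinery):

```python
from itertools import count

class Series:
    """Lazy power series: coefficient n is computed only on demand."""
    def __init__(self, gen):
        self._gen = gen        # generator yielding coefficients 0, 1, 2, ...
        self._cache = []

    def coeff(self, n):
        while len(self._cache) <= n:
            self._cache.append(next(self._gen))
        return self._cache[n]

    def __mul__(self, other):
        def cauchy():
            # Cauchy product: coefficient n demands only terms 0..n
            # of each factor, so nothing is truncated or over-computed.
            for n in count():
                yield sum(self.coeff(k) * other.coeff(n - k)
                          for k in range(n + 1))
        return Series(cauchy())

def geometric():
    while True:                # 1/(1 - x) = 1 + x + x^2 + ...
        yield 1

s = Series(geometric())
sq = s * s                     # 1/(1 - x)^2 = sum of (n + 1) x^n
print([sq.coeff(n) for n in range(5)])   # [1, 2, 3, 4, 5]
```

    Because terms are pulled rather than pushed, cancellation in a sum cannot silently lower the accuracy of what is delivered: asking for coefficient n forces exact computation of everything it depends on.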

    As another example let me mention that recently on the Maxima
    list there was a guy with a system of equations which Maxima
    could not solve (IIRC Maxima ran out of memory). I tried
    this system in FriCAS. After a little modification (which
    matches the stated intent) the system was easily solvable
    (a few seconds) in FriCAS. The original system was harder
    (minutes or maybe tens of minutes) but worked too.
    On the Maxima list the advice was: "you do not need to solve
    it, the answer would be too complicated to see anything".
    Well, the FriCAS answer was largish and gave me no
    enlightenment. But sometimes answers are small despite the
    large computations needed to obtain them.

    Yes, sometimes. A system that may be many times faster again than FriCAS
    is Fermat. I think that it would be worth testing, if you want to see
    how fast is fast.
    https://home.bway.net/lewis/

    I know about Fermat. Its supposedly best version was available
    only as a binary without any explanation of how it works internally
    (later there was source for an older version). But I looked
    at the provided benchmark results and they were not very impressive.
    In some cases FriCAS has comparable timings; in some cases
    Fermat's time was significantly better than FriCAS's, but worse
    than some other system's. My conclusion was that there is
    nothing interesting to be learned from Fermat. Maybe in the
    past Fermat was really faster than other systems, but not
    in recent times.

    BTW: Core algorithms in FriCAS typically are much faster than
    in the Axiom era. AFAICS old Axiom algorithms in many cases were
    faster than Maple or Mathematica of a comparable time. But
    Maple and Mathematica (and other systems) are implementing
    better methods all the time, so any comparison should
    really state which versions are compared (and the result may
    be different for later versions).

    For instance, improved algebraic system solving, limits (e.g. D. Gruntz), manipulation of
    special functions. The common notion that "Lisp is slow" and "C is fast" and that therefore
    doing nothing other than writing in C is a step forward, I think is wrong.
    Unfortunately Lisp implementations are slow. IME the fastest is SBCL,
    where normally I can get about half the speed of comparable C code.

    So why are people writing in Python, which is (apparently) quite slow?

    Well, slow may be better than nothing. In the case of Python, IIUC the normal
    practice is to have speed-critical parts in C++. As I already
    wrote, the vast majority of Sage code is in external libraries,
    mostly in C or C++.

    But at the Lisp level this code has complete type declarations and
    uses specialized arrays and machine-compatible types. SBCL gives
    slower code than C simply because the SBCL code generator cannot
    optimize as well as the optimizers in C compilers.

    I have looked at SBCL compiler generated code, which is easily viewed
    by the lisp disassemble function. Complaints about SBCL code
    can be sent to the SBCL mailing list.

    I am not sure if complaints would serve any useful purpose.
    My impression is that the SBCL developers have a quite good idea of
    what could be improved. There is simply a question of allocating
    development effort. Since they are doing the work, it is
    up to them to decide what has priority.

    In principle,
    on such code ECL or GCL should do better, because the code could be
    easily 1-1 translated to C. But somehow, both ECL and GCL
    insert extra operations which kill performance.

    Lisp compilers (some of them, anyway) generate assembly language.

    The free compilers that I know normally either emit (binary)
    machine code (SBCL, Clozure CL, Poplog), C (ECL and GCL), Java
    bytecodes (ABCL) or bytecode for their own interpreter (Clisp,
    interpreted modes of other compilers). Of the above, Poplog has
    code to emit assembler, but the assembly output is normally used
    for Pop11 and it is not clear if it would work correctly for
    Common Lisp.

    I have never used a commercial Lisp compiler; they may have
    features not present in the free Lisps.

    Anyway, my point was that ECL and GCL lose the advantage that
    the code generator in gcc should give them, and on FriCAS code
    they are slower than SBCL.

    However, that
    was somewhat optimistic: when I really need performance in
    C I can tweak the code to use, say, SSE instructions, doing 2
    or 4 times more work per instruction. Also, in gcc I have
    easy access to the high part of the product of 64-bit numbers,
    which in several problems roughly doubles performance compared
    to a plain 64-bit product. So, using C, in many cases I could
    get 4-8 times faster code than going via SBCL.

    If you wish to write code in assembler, I assume that it can be
    linked to Lisp as well.

    That is problematic. In gcc I can have, say, 2-3 critical lines
    in assembler and the rest in C. To use SSE instructions it
    is enough to tweak a few declarations and tell the compiler that
    the target processor supports the relevant instructions. The rest
    is automatic: the gcc code generator knows how to generate SSE
    from normal C source. Also, gcc has built-in functions
    that at the C level look like a function call, but generate a
    single assembler instruction. So in gcc one can use low-level
    features with minimal effort.

    In SBCL, AFAIK there is no documented way to insert assembler
    instructions in the middle of a Lisp routine (SBCL developers can
    do this, but there is no explanation of what the rules are).

    In Clozure CL there is a reasonably well documented way to
    write Lisp-callable assembler routines, but there are
    severe restrictions needed to satisfy the garbage collector.

    One could write a whole routine in assembler (or C) and
    call it like an external C routine. But this has the overhead
    of an inter-language call (Clozure CL forces copying of
    parameters and may force a system call to adjust the environment).

    The point is that one cannot limit the low-level aspect to a
    tiny part; one is forced to move a larger part of the code
    to the low level, partially because the low-level routine must
    do enough work to amortize the cost of the call.

    Another point is that using C one can get low-level benefits
    while working at a much higher level.

    Recently I needed
    to work with 2-dimensional arrays. To my dismay I discovered
    that my code was much slower than I had hoped, about 6 or 8
    times slower than comparable C. It turned out that SBCL
    (at least the version that I used) was unable to optimize
    indexing for 2-dimensional arrays and dutifully generated code
    repeatedly fetching the dimension from the array object and
    performing the multiplication implied by indexing...

    Another thing is that Lisp encourages writing "generic" code,
    which works for several types. That is nice for non-critical
    code, but there is a rather large slowdown compared to well-typed
    code with enough type information.

    There is an advantage to generic code that works. Then you add
    declarations to make it faster. There is an argument that type
    declarations make it easier to debug. Maybe. I think that in some
    cases the type declarations ARE the bug.

    Well, I write generic code when appropriate. But when there is a
    performance-critical part you want it fast. And fast usually
    requires specialized code.

    Concerning type declarations, Lisp type declarations are
    error-prone. Since the compiler does not check them, a program
    may have a latent bug which only surfaces later. In SBCL, if you
    declare a variable's type you need to initialize the variable
    with a value of the correct type. This is problematic, as the
    default initialization is nil. For example, several polynomial
    routines in Mock-MMA failed in this way.

    In FriCAS you can write generic code using appropriate generic
    types. In such a case FriCAS will generate Lisp with no type
    declarations. Or you can use a specialized type, like SingleInteger
    (Lisp fixnum) or DoubleFloat, and then FriCAS will generate
    specialized operations and Lisp type declarations. There
    are places in FriCAS with silly-looking code like

        if T is DoubleFloat then
            X
        else
            X

    where X is code using type T. The meaning of such a snippet is
    that when type T happens to be DoubleFloat, FriCAS will use the
    specialized version of the code; otherwise FriCAS will use the
    equivalent generic code.

    Concerning types and debugging, I find FriCAS types quite
    useful during development; type errors in the early phase of
    development in some cases prevented design bugs that otherwise
    would have wasted much more effort. And type checking prevents
    some classes of errors, so I can concentrate testing on
    things not expressible via types. Also, types help with
    readability: they explain the role/format of arguments, reducing
    the need for comments. Lisp types, since they are not checked
    by the compiler, cannot really serve a similar purpose.

    ATM I am staying with SBCL, but you can see that when it comes
    to the speed of low-level code, Lisp is a problem.

    I am not convinced that speed is such a problem. If it were, we would
    all be using FERMAT.

    As I wrote, I do not think Fermat is that fast. And surely
    slow is better than not at all. But I think that CAS should
    be serious about speed. And I think that leading systems
    take speed seriously.

    There are areas where interested programmers could add to
    a computer algebra system, and they might consider adding to Maxima; a few I've suggested
    include an improved interval arithmetic system, and a way of using an inverse symbolic
    calculator (see Stoutemyer's interesting paper https://arxiv.org/abs/2103.16720 )

    I am more struck by the fact that "new" CAS have rarely improved on those core
    capabilities, rarely moving in interesting directions.
    I am not sure what you mean by "new" CAS above.

    Any system that is newer than Macsyma circa 1970, or Reduce of that time.

    In particular, Mathematica, Maple are new. Even though they are now rather old
    compared to (say) Sympy.

    Well, I would say that Mathematica and Maple added interesting
    functionality compared to Maxima. They have their quirks, and the
    additions may not be the ones you would like most, but they have
    several IMO interesting additions. For me, the functionality in
    specialized systems like GAP or Pari is also interesting; part of
    it found its way into more general systems.

    FriCAS has interval arithmetic inherited from Axiom; Sage has
    something too. I understand your critique of Mathematica's
    arithmetic, but it is not clear to me whether you want for Maxima
    something different than the FriCAS interval arithmetic and, if
    yes, why it would be better.

    Concerning recognizing floats, it seems that the methods that work
    are based on things like LLL or PSLQ. Several systems now have
    LLL, so this is definite progress compared to, say, 1980.

    There is a bunch of
    systems that essentially take the view that core functionality does
    not matter much, or can be delegated to other systems, and instead
    concentrate on packaging and presentation.

    Yes, all the world is a web app.

    Sadly, while not new, Maxima seems
    to exhibit an equivalent attitude ("all the important core work was
    done in the seventies").

    I don't quite agree. I would say "If your system is a re-implementation of

    [continued in next message]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)