I am not a member of the Quora web site, but I saw this post
on it, which has the most upvotes on the topic:
https://www.quora.com/What-is-the-difference-between-maple-and-mathematica
"Note that Sage is unlikely to be that competitor: its
fundamental design carries along the same flaws that Maple
and Mathematica have; most of those flaws were actually already
present in Macsyma years earlier, but the `conventional wisdom'
had not moved on to recognize these fundamental design flaws"
Unfortunately the post does not say what these fundamental design
flaws are. It is an old post from 6 years ago.
I wonder if anyone would venture to guess or comment on what
these flaws might be? I am just curious to learn.
--Nasser
"Nasser M. Abbasi" schrieb:First of all "eating your own dog food" relates to compiler bootstrapping, which is what they did for gcc (very complex from asm https://stackoverflow.com/a/65708958 to lebel language and C and then to C++) and for C# (last one very recently in Roslyn
* As programming languages, they [...] make anyone who knows anything about programming languages shudder.
* Unfortunately, a lot of [Maple's] packages are not actually well integrated into the core routines.
* Mathematica has a lot of fancy code [...], but hides a lot of it underneath interfaces with restricted options [...].
* Most of [...] Maple is written in Maple, and is user-visible; most of Mathematica is written in C and invisible.
* The poster believes in coding in the language imposed on users (the "eating your own dog food" method of software development).
All five are interrelated. I have ignored comments on User Interfaces, Technical Support, and commercial aspects.
Martin.
On Thursday, October 28, 2021 at 12:49:33 AM UTC-7, Nasser M. Abbasi wrote:
Thanks for sharing.
I am not a member of the Quora web site, but I saw this post
on it, which has the most upvotes on the topic:
https://www.quora.com/What-is-the-difference-between-maple-and-mathematica
"Note that Sage is unlikely to be that competitor: its
fundamental design carries along the same flaws that Maple
and Mathematica have; most of those flaws were actually already
present in Macsyma years earlier, but the `conventional wisdom'
had not moved on to recognize these fundamental design flaws"
Unfortunately the post does not say what these fundamental design
flaws are. It is an old post from 6 years ago.
I wonder if anyone would venture to guess or comment on what
these flaws might be? I am just curious to learn.
--Nasser
I don't usually read sci.math.symbolic, for reasons that are probably obvious, but I happened across Nasser's post, which mentioned me! What kinds of fundamental design flaws? I'm not sure I can recall them all, but here are a few.
1. Trying to do mathematics and then realizing that the system has to handle "assumptions" such as "assume x>y" or "assume x is an integer" ... or n is an integer mod 13 ... consequently, assumption handling is bolted on after the system is largely built.
2. Trying to aim for "it works for high-school students". For instance, what to do with sqrt(x^2)? Will abs(x) work? Sometimes. Maybe it should be a set {-x,x}?
Too late, the system has already been built to handle single values. Maybe bolt on RootOf(y^2=x, y) and compute with that?
3. Inadequate recognition of numerical computation (esp. bad in Mathematica's model of numbers), in the user language.
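As a concrete illustration of points 1 and 2, here is a small sketch in Python with SymPy (chosen only because it makes the assumption machinery easy to show; it is not one of the systems discussed above). The answer to sqrt(x^2) genuinely depends on what is assumed about x:

```python
from sympy import Symbol, sqrt, simplify

# With no assumptions, sqrt(x**2) cannot safely be rewritten:
x = Symbol('x')
print(simplify(sqrt(x**2)))      # stays sqrt(x**2)

# Declaring x real allows the rewrite to Abs(x):
xr = Symbol('x', real=True)
print(simplify(sqrt(xr**2)))     # Abs(x)

# Declaring x positive collapses it to x (already on construction):
xp = Symbol('x', positive=True)
print(sqrt(xp**2))               # x
```

Note that the assumptions here are properties of the symbol itself, set at construction time, which is one way around the "bolted on afterwards" problem described in point 1.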
I think the idea behind Sage is fundamentally "Let's all get together and write programs in Python." As though that will fix everything.
Sunday, October 31, 2021 at 22:06:09 UTC+3, nob...@nowhere.invalid:
"Nasser M. Abbasi" schrieb:
* As programming languages, they [...] make anyone who knows
anything about programming languages shudder.
* Unfortunately, a lot of [Maple's] packages are not actually well integrated into the core routines.
* Mathematica has a lot of fancy code [...], but hides a lot of it underneath interfaces with restricted options [...].
* Most of [...] Maple is written in Maple, and is user-visible; most
of Mathematica is written in C and invisible.
* The poster believes in coding in the language imposed on users
(the "eating your own dog food" method of software development).
All five are interrelated. I have ignored comments on User
Interfaces, Technical Support, and commercial aspects.
First of all, "eating your own dog food" relates to compiler
bootstrapping, which is what they did for gcc (very complex: from asm, https://stackoverflow.com/a/65708958, to a low-level language and C and then
to C++) and for C# (the latter very recently, in the Roslyn version of the compiler). Mathematica is a symbolic language and bootstrapping it is insanity. As for writing most of a language in its own language,
that is what Java did and why it is so slow, and why the main CPython
implementation of Python did not do it. Also, Mathematica allows compiling to stand-alone C/CUDA applications and linking most
C/C++/CUDA code.
Regarding Waldek's note -- about the undecidability of central problems in mathematics that require determining whether an expression is zero or not... (results due to Daniel Richardson, 1968 or so).
I don't consider this a design flaw in the systems that people build. It is a fundamental limitation in the mechanization of mathematics. One that cannot be remedied by a better design. Unless you design a system that is so constrained (say, to integer arithmetic) that Richardson's results don't hold.
There is a community of academics who work with theorem proving and mathematical representation -- look up the CICM conferences. The overlap between this community and the CAS community should be substantial, but for some reason it is not. I would expect that the representation of assumptions has been solved in the CICM community. Probably solved many times.
It may be that the category-based computational model does not so much substitute for assumptions, as legislates against the existence of assumptions.
If your model is (say) the integers only, it is not necessary to provide for assume(2>1). If your model is (say) polynomials in one or more variables x1, x2, ... then something like assume(x2>0) is not necessary for arithmetic of polynomials. Some additional structure may allow for assumptions, but that structure ("evaluation") is perhaps not part of the model.
As for Sage/python -- I entirely agree that Sage is a composition of many programs and many capabilities not written in Python. However, there is some kind of main-line constructive framework that has been put together in Python.
I do agree that Maxima internals are complicated -- more so than they should be. You can start from scratch and build a less complicated system. But can it do the same thing as Maxima?
By the way, what is the state of symbolic algebra in Julia, where code
is indeed compiled automatically on the fly? Can this or other Julia
features be of advantage here?
Martin.
On 12/29/2021 12:17 AM, clicliclic@freenet.de wrote:
By the way, what is the state of symbolic algebra in Julia, where
code is indeed compiled automatically on the fly? Can this or other
Julia features be of advantage here?
There is the OSCAR project, which is a CAS that uses Julia:
"Julia serves as an integration layer allowing the four
cornerstones to communicate in a more direct way than
through unidirectional interfaces. Furthermore it serves
as high-level language for implementing efficient algorithms
utilizing all cornerstones."
https://www.youtube.com/watch?v=rWQK4mU3jQc&ab_channel=TheJuliaProgrammingLanguage
https://oscar.computeralgebra.de/about/
I have never used OSCAR myself.
The main page for Julia symbolics is
https://symbolics.juliasymbolics.org/dev/ but I do not know how much
development is going on with Symbolics.jl.
Julia symbolics:
... lacking many basic computer algebra capabilities so far, such as:
stating and applying assumptions, polynomial factorization, Gröbner
bases, symbolic solutions to systems of polynomial equations, limits
and integration in calculus,
which unsurprisingly constitute those staples of computer algebra that
are harder to implement.
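For readers unfamiliar with these staples, here is roughly what they look like in Python/SymPy notation (used here only as a convenient reference point; the Julia packages above may expose quite different interfaces):

```python
from sympy import symbols, factor, groebner, limit, integrate, sin

x, y = symbols('x y')

# polynomial factorization
print(factor(x**4 - 1))                  # (x - 1)*(x + 1)*(x**2 + 1)

# Groebner bases
print(groebner([x**2 + y, x*y - 1], x, y))

# limits in calculus
print(limit(sin(x)/x, x, 0))             # 1

# symbolic integration
print(integrate(1/(x**2 + 1), x))        # atan(x)
```

Each of these hides a substantial algorithmic tower (Zassenhaus/Hensel lifting, Buchberger-style completion, series expansion, Risch-style integration), which is exactly why they are the harder-to-implement staples.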
On 1/15/2022 6:23 AM, clicliclic@freenet.de wrote:
Julia symbolics:
...
... lacking many basic computer algebra capabilities so far, such as:
stating and applying assumptions, polynomial factorization, Gröbner
bases, symbolic solutions to systems of polynomial equations, limits
and integration in calculus,
which unsurprisingly constitute those staples of computer algebra that
are harder to implement.
To do integration in Julia, the Julia package Reduce can be
used
https://reduce.crucialflow.com/v1.2/
"Symbolic parser generator for Julia language expressions
using REDUCE algebra term rewriter"
It looks like an API to the original Reduce CAS at
https://reduce-algebra.sourceforge.io/index.php
"REDUCE is a portable general-purpose computer algebra system"
--------------------------
(Julia startup banner: Version 1.7.1 (2021-12-22), official release -- Documentation: https://docs.julialang.org)
julia> using Reduce
#use Reduce to do the integration
julia> output = :(int(sin(x),x)) |> rcall
:(-(cos(x)))
julia> :(int((a^2 - b^2*x^2)^(-1),x)) |> rcall
:((log(-((a + b * x))) - log(a - b * x)) / (2 * a * b))
julia> :(int( (-1+exp(1/2*x))^3/exp(1/2*x),x)) |> RExpr

(e^(x/2)*(e^x + 3*x) - 2*(3*e^x - 1))/e^(x/2)

---------------------------------
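As a cross-check, the three Reduce results above can be verified independently by differentiating each reported antiderivative and comparing against the integrand; a sketch in Python/SymPy (SymPy here is just a second opinion, not part of the Reduce-via-Julia setup):

```python
from sympy import symbols, diff, simplify, sin, cos, exp, log

x = symbols('x')
a, b = symbols('a b', positive=True)

# (antiderivative as reported by Reduce above, integrand) pairs
checks = [
    (-cos(x), sin(x)),
    ((log(-(a + b*x)) - log(a - b*x))/(2*a*b), 1/(a**2 - b**2*x**2)),
    ((exp(x/2)*(exp(x) + 3*x) - 2*(3*exp(x) - 1))/exp(x/2),
     (-1 + exp(x/2))**3/exp(x/2)),
]
for F, f in checks:
    # d/dx F must reduce to f; simplify does the cancellation
    assert simplify(diff(F, x) - f) == 0
print("all three Reduce antiderivatives verified")
```

Differentiation-plus-simplification is the standard way to validate such tables, since antiderivatives from different systems often differ by a constant or by an equivalent-but-differently-written form.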
I was thinking of adding Reduce, via Julia, to the next build of the
CAS independent integration tests, if I have time.
And there is a Maxima for Julia too:
<https://nsmith5.github.io/Maxima.jl/latest/>
Now only Giac is missing.
Does one automatically get the latest version of the respective CAS? There
should be a command causing Reduce to identify itself.
Martin.
Can Maxima do something special? AFAICS the core functionality of
various CASes is similar (polynomial operations, equation
solving, limits, integration, etc.) and in Maxima's case this
part seems to be rather dated. It was state of the art in 1980,
but there has been (IMO significant) progress since then.
Waldek Hebisch
On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl wrote:
Can Maxima do something special? AFAICS the core functionality of
various CASes is similar (polynomial operations, equation
solving, limits, integration, etc.) and in Maxima's case this
part seems to be rather dated. It was state of the art in 1980,
but there has been (IMO significant) progress since then.
Waldek Hebisch
Hi Waldek & sci.math.symbolic.
Apologies for the delayed reaction. I don't visit here that often.
1. Maxima is written (mostly) in Lisp; the Lisp systems have gotten better in various ways and support
more or better memory, bignum arithmetic, communication with systems in other languages, web stuff.
Those changes seep into Maxima, sometimes, though slowed by the need to be platform agnostic.
2. Some subsystems not necessarily in Lisp have also improved. For example the user interface
wxmaxima is, I think, quite nice and getting nicer.
Another example is the improvements in graphical display via improvements in gnuplot.
There are also translations from Fortran of software -- like quadpack. If there were other pieces of
software of interest written in Fortran, they might also be translated to Lisp and run in Maxima.
I suspect that with modest alterations this code can be run using arbitrary-precision floats in Maxima.
Other code is potentially called as foreign function libraries (e.g. Gnu MPFR). I suppose that
any of the packages linked to (say) Sage or Julia could be called from Lisp, since there are
existing interfaces to C and Python. I don't know if anyone has done this, but again to be part
of the Maxima distribution it would have to be platform (hardware, software) agnostic.
So are these "special"? I don't know for sure, but I think they are certainly not dated.
3. There is a continuing effort by a host of people who provide fixes, enhancements, and applications
in their own library public repositories. There are educational projects, and wholesale adoption of
Maxima in schools and school districts. There is an active Maxima mailing list.
4. If there were a set of standard benchmarks for "core" functionalities, there might be a basis for
testing if Maxima's facilities were dated.
I am aware of the testing of indefinite integration of
functions of a single variable, comparing Rubi to various other systems. I have some doubts about
measurements of Maxima, since they are done through the partly-blinded eyes of Sage. I have run
some of the "failed" Maxima tests through Maxima and found they succeed, and indeed find answers
that are simpler and smaller than some of the competition. So I would not judge from this.
While indefinite integration is an application that relies on a tower of algorithmic developments in
symbolic mathematical systems, one that made researchers proud over the years -- starting as
probably the first largish program written in Lisp (James Slagle's SAINT, 1961) --
it is not much in demand by engineers and applied mathematicians. In fact the far more common problems of DEFINITE integration (in one or more variables) can
usually be addressed by numerical quadrature. The reference handbooks of calculus formulas
contain far more formulas for definite integrals [with parameters], involving special functions, and
even so, they harken back to a time when a mathematician did not have access to computers.
So while a core functionality of a CAS might be "integral calculus", it is partly a tribute to "we can mostly do this with what we built"
more than "applied mathematicians asked us to do this for their daily work". In part it is a historical tribute to "we must be doing something hard, because
human calculus students struggle to do their homework problems, and maybe this is even Artificial Intelligence. And that is good."
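The point that definite integrals are usually handled numerically is easy to make concrete; a minimal sketch in plain Python (composite Simpson's rule, no CAS involved; the example integral is my own illustration):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # interior points alternate weights 4, 2, 4, 2, ...
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# integral_0^1 4/(1+x^2) dx = pi, recovered to roughly 1e-12
approx = simpson(lambda t: 4.0 / (1.0 + t * t), 0.0, 1.0)
print(approx, abs(approx - math.pi))
```

A dozen lines of quadrature answer this definite integral to machine precision, while the corresponding closed form (4*atan(x)) requires the whole symbolic machinery; this asymmetry is what drives demand toward numerical methods.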
If some of the newer CAS have "better" core algorithms like -- polynomial multiplication,
polynomial GCD, expansion in Taylor series, it would be interesting to take note, and if so
with substantial likelihood they can be inserted into Maxima, or added in via libraries.
For instance, improved algebraic system solving, limits (e.g. D. Gruntz), manipulation of
special functions. The common notion that "Lisp is slow" and "C is fast" and that therefore
doing nothing other than writing in C is a step forward, I think is wrong.
(People say Python and sympy are slower than C, maybe slower than Lisp; Julia is faster than Python or maybe faster than C.
These are all just running within constant multiples of each other, if they use the same
algorithms. And benchmarks tend to be misleading anyway.)
There are areas where interested programmers could add to
a computer algebra system, and they might consider adding to Maxima; a few I've suggested
include an improved interval arithmetic system, and a way of using an inverse symbolic
calculator (see Stoutemyer's interesting paper https://arxiv.org/abs/2103.16720 )
I am more struck by the fact that the "new" CAS -- the ones that have been mentioned previously in
this thread -- have rarely improved on those core capabilities, rarely moving in interesting directions.
And some of them have made stumbling steps in wrong directions. When pointed out, they respond, in effect, in the immortal words of Peewee Herman,
"I meant to do that"... https://gifs.com/gif/pee-wee-herman-i-meant-to-do-that-mPElxr
Are there possible breakthroughs that will make all current CAS so obsolete that they must
all be tossed in the trash? If so, I haven't seen them yet. Can current CAS be improved? Sure,
but some improvements will be difficult.
Richard Fateman
I tried to use Reduce itself, but could not figure out how to do some things
with it, and there is no good documentation and no forum to ask. (I sent email to join the Reduce mailing list at SourceForge, but never got a reply.)
Nasser M. Abbasi schrieb am Sonntag, 16. Januar 2022 um 10:47:13 UTC+1:
I tried to use Reduce itself, but could not figure out how to do some things with it, and there is no good documentation and no forum to ask. (I sent email
to join the Reduce mailing list at SourceForge, but never got a reply.)
There's a GUI for Reduce, which may help you get started much quicker:
- https://fjwright.github.io/Run-REDUCE/
Richard Fateman <fat...@gmail.com> wrote:
On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl wrote:
Can Maxima do something special? AFAICS the core functionality of
various CASes is similar (polynomial operations, equation
solving, limits, integration, etc.) and in Maxima's case this
part seems to be rather dated. It was state of the art in 1980,
but there has been (IMO significant) progress since then.
Waldek Hebisch
Hi Waldek & sci.math.symbolic.
Apologies for the delayed reaction. I don't visit here that often.
1. Maxima is written (mostly) in Lisp; the Lisp systems have gotten better in various ways and support
more or better memory, bignum arithmetic, communication with systems in other languages, web stuff.
Those changes seep into Maxima, sometimes, though slowed by the need to be platform agnostic.
2. Some subsystems not necessarily in Lisp have also improved. For example the user interface
wxmaxima is, I think, quite nice and getting nicer.
Another example is the improvements in graphical display via improvements in gnuplot.
There are also translations from Fortran of software -- like quadpack. If there were other pieces of
software of interest written in Fortran, they might also be translated to Lisp and run in Maxima.
I suspect that with modest alterations this code can be run using arbitrary-precision floats in Maxima.
Well, when I looked at the Fortran code, my conclusion was that a significant part cannot be _usefully_ run at arbitrary precision. For example,
special functions and some quadratures use magic constants that need
to be accurate to the required precision. Then there is the question of
compute time; low order methods scale quite badly with precision.
For example, for double float elliptic functions there is Carlson's
code. This code needs a number of iterations growing linearly with
precision. OTOH Gauss-Landen transforms need only a logarithmic
number of iterations. So, for double precision FriCAS uses
Carlson's method. But arbitrary precision uses Gauss-Landen transforms
(with some added parts from Carlson to improve robustness).
In FriCAS, nominally the Romberg method can deliver arbitrary precision,
but it is too slow to use. Variable order Gauss methods are
useful for arbitrary precision, but rather different from
fixed precision ones.
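The linear-vs-logarithmic contrast can be illustrated with the arithmetic-geometric mean, which is the engine behind Gauss-Landen style methods; a sketch in Python using mpmath (mpmath is a stand-in here, not one of the systems being compared -- it just makes arbitrary precision easy to demonstrate):

```python
from mpmath import mp, mpf, sqrt, agm, ellipk, pi

# The complete elliptic integral K(m) satisfies
#     K(m) = pi / (2 * agm(1, sqrt(1 - m)))
# and the AGM converges quadratically, so its iteration count grows
# only logarithmically with the target precision -- the Gauss-Landen
# behaviour described above.
mp.dps = 50                       # ask for 50 decimal digits
m = mpf(1) / 3
k_agm = pi / (2 * agm(1, sqrt(1 - m)))
print(k_agm)
print(ellipk(m))                  # mpmath's builtin, for comparison
```

A linearly converging iteration (like the Carlson-style one mentioned above) would need on the order of the number of digits in iterations; the AGM needs only a handful more when going from 16 to 50 digits.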
There is also another possible problem: if code is written using
generic Lisp arithmetic, then for double precision it will be
much slower than a specialised version. So you really want several
versions of the code: real double precision, complex double precision,
and separate arbitrary precision version(s) (you can probably use a
single version for real and complex arbitrary precision, but
it may be simpler to have 2 arbitrary precision versions) and
possibly also a single precision version (ATM in FriCAS there is
no support for single precision).
Other code is potentially called as foreign function libraries (e.g. Gnu MPFR). I suppose that
any of the packages linked to (say) Sage or Julia could be called from Lisp, since there are
existing interfaces to C and Python. I don't know if anyone has done this, but again to be part
of the Maxima distribution it would have to be platform (hardware, software) agnostic.
So are these "special"? I don't know for sure, but I think they are certainly not dated.
Well, what you mention above are things outside the core. If you think
that they are important, then you should really go with the "new" systems;
AFAICS they have this part much better than Maxima.
Lisp as an advantage: for a Lisp lover, indeed, Maxima has an advantage, as
it is the system where Lisp is most visible. For me Lisp is just an
implementation detail which should _not_ leak to the user level.
3. There is a continuing effort by a host of people who provide fixes, enhancements, and applications
in their own library public repositories. There are educational projects, and wholesale adoption of
Maxima in schools and school districts. There is an active Maxima mailing list.
You basically say that there is inertia: in the short term, for Maxima
folks it is easier to continue to use Maxima than to switch to
something else. True, inertia is a powerful force. But if you
look at the number of people behind various systems, Maxima would
look better than FriCAS, but worse than several "new" systems.
4. If there were a set of standard benchmarks for "core" functionalities, there might be a basis for
testing if Maxima's facilities were dated.
In arXiv preprint 1611.02569 Parisse gives an example of polynomial factorization and proposes a special method to solve it. ATM FriCAS
does not do very well on this example (2500s on my machine), but
certainly better than systems that can not do it at all. However,
my estimate is that using up-to-date general methods it should take
a fraction of a second (Magma was reported to need 3s, Parisse's special
method 2s). So, I would say that possibly all systems have
some work to do.
I am aware of the testing of indefinite integration of
functions of a single variable, comparing Rubi to various other systems. I have some doubts about
measurements of Maxima, since they are done through the partly-blinded eyes of Sage. I have run
some of the "failed" Maxima tests through Maxima and found they succeed, and indeed find answers
that are simpler and smaller than some of the competition. So I would not judge from this.
The Rubi testsuite has flaws, but I think that it actually overestimates
Maxima's capabilities. Namely, examples in the Rubi testsuite are
arranged to be easily matchable to patterns. That eliminates
work that would otherwise be needed to discover the true structure
of the integrand. AFAICS Maxima basically assumes that the integrand
can be handled by simple-minded heuristics. This is true
for the Rubi testsuite, but fails for more general cases.
Already random exp-log examples seem to cause trouble for Maxima.
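A small Python/SymPy sketch of what "arranged to be easily matchable" means (the specific integrand is my own illustration, not taken from the Rubi suite):

```python
from sympy import symbols, integrate, exp, log, simplify

x = symbols('x', positive=True)

# Pattern-friendly form: u'(x)*exp(u(x)) matches a table entry directly.
easy = 2*x*exp(x**2)

# Mathematically the same integrand, but with the structure hidden
# inside the exponent; a pure pattern matcher must normalize it first.
hard = exp(x**2 + log(2*x))

print(simplify(hard - easy))      # 0: the two integrands are equal
print(integrate(easy, x))         # exp(x**2)
```

A testsuite consisting only of `easy`-shaped inputs measures the pattern table; inputs shaped like `hard` additionally measure the normalization and structure-discovery machinery, which is where heuristic integrators tend to break down.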
Let me add that working on integration I also test integration
on various systems. My examples are probably harder than
average.
Anyway, it is pretty easy to come up with examples
that Maxima or Reduce can not do. Maple is much better
there. And Mathematica is still better.
While indefinite integration is an application that relies on a tower of algorithmic developments in
symbolic mathematical systems, one that made researchers proud over the years -- starting as
probably the first largish program written in Lisp (James Slagle's SAINT, 1961)
it is not much in demand by engineers and applied mathematicians. In fact the far more common problems of DEFINITE integration (in one or more variables) can
usually be addressed by numerical quadrature. The reference / handbooks of calculus formulas
contain far more formulas for definite integrals [ with parameters], involving special functions, and
even so, they harken back to a time when a mathematician did not have access to computers.
So while a core functionality of a CAS might be "integral calculus", it is partly a tribute to "we can mostly do this with what we built"
more than "applied mathematicians asked us to do this for their daily work".
In part it is a historical tribute to "we must be doing something hard because
human calculus students struggle to do their homework problems, and maybe this is even Artificial Intelligence. And that is good."
Well, when people need numbers they probably should use numeric
methods. But some people need formulas, and they hope that
the CAS will deliver formulas.
Regardless of your view about the usefulness of indefinite
integration, testing indefinite integration is useful. Namely,
indefinite integration is easily testable, and to have good performance
in general you need good algorithms. You may claim that those
algorithms are very specialised. However, the core Risch algorithm
is relatively simple. The difficulty is that you need a lot of
support routines, which however are usable in more general contexts.
In more detail, you need polynomial and rational function arithmetic, including gcd, resultants, factorization and partial fraction
decomposition; this is usable almost everywhere in a CAS. Next,
you need differential field routines; they are usable also for simplification, transcendental solving, differential equation
solving, limits and power series expansion.
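These support routines are exactly the primitives that most systems expose directly; in Python/SymPy notation, for instance:

```python
from sympy import symbols, gcd, resultant, factor, apart, together, simplify

x, y = symbols('x y')

# polynomial gcd
print(gcd(x**2 - 1, x**2 - 2*x + 1))      # x - 1

# resultant, eliminating x from the pair of polynomials
print(resultant(x**2 + y, x + 1, x))      # y + 1

# factorization
print(factor(x**3 - x))                   # x*(x - 1)*(x + 1)

# partial fraction decomposition, verified by recombining
pf = apart(1/(x**2 - 1), x)
print(pf)
print(simplify(together(pf) - 1/(x**2 - 1)))   # 0
```

The same handful of primitives recurs in Risch integration, in equation solving, and in series expansion, which is the point being made: investment in them pays off everywhere.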
If some of the newer CAS have "better" core algorithms like -- polynomial multiplication,
polynomial GCD, expansion in Taylor series, it would be interesting to take note, and if so
with substantial likelihood they can be inserted into Maxima, or added in via libraries.
Hmm, you should know about Monagan and Pearce's work on polynomial
operations. It is now included in Maple and gives significant
speedup on large polynomials.
There is a long cycle of works by
Monagan and collaborators on polynomial GCDs and factorisation,
giving significant speedups compared to earlier methods. IIUC
several of them are included in Maple. Some variations are
implemented in FriCAS. As an illustration, let me say that
for GCDs with algebraic coefficients FriCAS previously used
subresultant GCD (claimed to be a very good implementation
of subresultant GCD). Switching to modular methods in the
style of Monagan and van Hoeij gave a large speedup. My
estimates suggest that using a different modular method
should be much faster. Monagan shows an example where
factorization in an algebraic extension using the older method
is prohibitively expensive, but the newer method works quite
well. In fact, in all papers he gives examples, and IMO
it is clear that what he and collaborators implemented
gives significant progress.
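What "factorization in an algebraic extension" refers to can be shown in Python/SymPy (a stand-in here; FriCAS and Maple are the systems actually being compared, and the point above is about how *fast* this is done, not whether it can be done):

```python
from sympy import symbols, sqrt, factor, expand

x = symbols('x')

# Over Q the polynomial only splits partially...
print(factor(x**4 - 4))                       # (x**2 - 2)*(x**2 + 2)

# ...over Q(sqrt(2)) the x**2 - 2 factor splits further.
f = factor(x**4 - 4, extension=sqrt(2))
print(f)
print(expand(f) - (x**4 - 4))                 # 0, sanity check
```

The modular methods discussed above attack exactly this kind of problem: arithmetic in Q(alpha) is expensive, so one computes modulo primes and reconstructs, rather than working with algebraic numbers directly.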
I looked a bit at Maxima's multivariate factorization code.
AFAICS this is the early Wang method, trying to use zero as the
evaluation point. This is reasonably fast if you are lucky.
But it may be quite slow otherwise. In modern times there
seems to be agreement that zero evals should be avoided;
there is too high a risk of degraded performance if you try
them.
Anyway, already in the classic case of integer coefficients
Maxima lacks improvements concerning the handling of harder cases.
For algebraic coefficients IIUC there is a problem of
correctness (IIUC some Maxima methods silently produce
wrong results) and no (correct) support for modular
methods.
Newer methods do lifting directly for algebraic coefficients. And
there is a well developed methodology for exploiting
sparsity with a non-zero evaluation point. There are
heuristics for early failure and restart (to avoid
huge loss of time due to a bad eval). There are new
approaches to the leading coefficient problem.
Of course, one could try to add this to existing routines.
But IMO needed changes are so large that it makes more
sense to start from scratch.
The FriCAS code here is more modern and much more readable than Maxima's. Still, parts
of the new approach did not fit well into the existing schema,
so I had to add new parallel modules. This is an ongoing
change (in a relatively early stage) and currently the old code
is used together with the new one, but in the end it is possible
that no old factorization/gcd code will survive.
Concerning power series expansions: AFAICS FriCAS
routines are much better than Maxima routines in this
area. Part of this is that FriCAS uses lazy approach.
Part is that FriCAS has special handling for many
cases (so more code, but better performance).
As another example let me mention that recently on the Maxima
list there was a guy with a system of equations which Maxima
could not solve (IIRC Maxima ran out of memory). I tried
this system in FriCAS. After a little modification (which
matches the stated intent) the system was easily solvable
(a few seconds) in FriCAS. The original system was harder
(minutes or maybe tens of minutes) but worked too.
On the Maxima list the advice was: "you do not need to solve
it, the answer would be too complicated to see anything".
Well, the FriCAS answer was largish and gave me no
enlightenment. But sometimes answers are small despite the
large computations needed to obtain them. Sometimes a
large answer may give an important clue. And in many
cases it is possible to do further calculations
with such an answer.
So "you do not need this" is a poor excuse; a system
that can give an answer is better than one which can
not.
For instance, improved algebraic system solving, limits (e.g. D. Gruntz), manipulation of
special functions. The common notion that "Lisp is slow" and "C is fast" and that therefore
doing nothing other than writing in C is a step forward, I think is wrong.
Unfortunately, Lisp implementations are slow. IME the fastest is SBCL,
where normally I can get about half the speed of comparable C code.
But at the Lisp level this code has complete type declarations,
uses specialized arrays and machine compatible types. SBCL gives
slower code than C simply because the SBCL code generator can not
optimize as well as the optimizers in C compilers.
In principle,
on such code ECL or GCL should do better, because the code could be
easily 1-1 translated to C. But somehow, both ECL and GCL
insert extra operations which kill performance.
However, that
was somewhat optimistic; when I really need performance in
C I can tweak the code to use, say, SSE instructions, doing 2
or 4 times more work per instruction. Also, in gcc I have
easy access to the high part of a product of 64-bit numbers, which
in several problems roughly doubles performance compared to a
64-bit product. So, using C, in many cases I could get 4-8
times faster code than going via SBCL.
Recently I needed
to work with 2-dimensional arrays. To my dismay I discovered
that my code was much slower than I hoped, about 6 or 8
times slower than comparable C. It turned out that SBCL
(at least the version that I used) was unable to optimize
indexing for 2-dimensional arrays and dutifully generated code
repeatedly fetching the dimension from the array object and
performing the multiplication implied by indexing...
Another thing is that Lisp encourages writing "generic" code,
which works for several types. That is nice for non-critical
code, but there is a rather large slowdown compared to well-typed
code with enough type info.
ATM I am staying with SBCL, but you can see that when it comes
to the speed of low level code, Lisp is a problem.
Concerning writing everything in C: I think that this approach
is wrong. It makes sense if you write a low-level library that
should be callable from many languages, because calling C
is relatively easy and calls between different higher level
languages are problematic.
(People say Python and sympy
are slower than C, maybe slower than Lisp; Julia is faster than Python or maybe faster than
C.

I did many microbenchmarks and also looked at the correlation with
runtime for applications. One can get a reasonably good idea
of a language's intrinsic speed by looking at the speed of relatively
simple loops. At the very top level, language speed essentially
does not matter. The slowest ones were the Unix shell and UCB Logo,
both about 10000 times slower than C; on a modern machine
this may easily hide behind the time taken by the large mass of
other code. Then you have code "outside the inner loop". In
many cases you can accept slow code there. But things
may be tricky. In an early version of the guessing code I had
an access to a 2-dimensional array outside 2 nested loops,
so rarely executed code whose speed should not matter.
However, I cared about the speed of the guessing code and I
profiled it. And this array access showed up in the profile.
It was not the biggest item, just a few percent, but still
surprisingly large (I changed how FriCAS array indexing works
and now it is invisible in the profile). Python is about 100
times slower than C, so basically you want your inner loops
to be in C (or C++, which is also easily callable from Python);
that is, either use Python builtins or some support library
exposed to Python. The Sympy folks insisted on "pure Python",
and Python builtins seem to be not such a good match for
computer algebra, so there is a slowdown (and as a reaction there is a Sympy
core project in C++ which is much faster for basic operations).
Julia uses LLVM, and is unlikely to run faster than C.
They claim good speed, but I do not know how good it is;
their claims are consistent both with Julia being 2 times
faster than SBCL and with it being 2 times slower...
Language speed is essentially meaningless without a profiler.
Namely, without a profiler you are making blind changes, which
frequently is a waste of time.
Or you need to write a lot
of instrumentation code. I find the SBCL profiler to be quite
good. For other Lisps I was not able to find a really
usable profiler. For Lisp, and also for interpreted languages,
you need a dedicated profiler: in the case of Lisp, code in
principle moves in memory, and for example on GCL the standard
system profiler tells me about time spent in the GCL runtime,
but not about time spent in my code. Similarly, for
interpreted languages the system profiler tells me about time
spent in the interpreter, but without info about which code was
responsible for the interpreter time.
These are all just running within constant multiples of each other, if they use the same
algorithms. And benchmarks tend to be misleading anyway.)

Well, constants matter. A factor of 100 may be a huge difference. Even
a factor of 2 may be significant. When Apple switched to gcc they
gave as a reason a 10% improvement in code speed compared to the compiler
they used previously (there were other reasons too, but speed
was a significant factor).
There are areas where interested programmers could add to
a computer algebra system, and they might consider adding to Maxima; a few I've suggested
include an improved interval arithmetic system, and a way of using an inverse symbolic
calculator (see Stoutemyer's interesting paper https://arxiv.org/abs/2103.16720 )
I am more struck by the fact that "new" CAS have rarely improved on those core
capabilities, rarely moving in interesting directions.

I am not sure what you mean by "new" CAS above.
There is a bunch of
systems that essentially take the view that core functionality does not
matter much or can be delegated to other systems, and instead concentrate
on packaging and presentation.
Sadly, while not new, Maxima seems
to exhibit an equivalent attitude ("all important core work was done in the seventies").
OTOH, there are new developments. I have no personal
experience with Giac, but on some polynomial problems it gave
impressive benchmark results. There are efficient libraries:
the by-now-classic NTL, having among other good routines ones for factorization
of polynomials over finite fields; and Flint, which has
good routines for univariate polynomials. Singular seems to
have quite good Groebner bases. There are solvers for large
equation systems with integer coefficients. Sage links
to several other systems and libraries, and if you are in
range of one of the specialized algorithms, then you get good
performance.
Among not-new systems, FriCAS has the guessing package (the main part of
which is an efficient solver for large linear systems possessing special structure); GFUN offers a similar thing for Maple. Maple
(and I suspect Magma and Mathematica too) made progress on multiplication, division, GCDs and factorization for polynomials. There are
also significant improvements in this area in FriCAS.
The ones that have been mentioned previously in
this thread. And some of them have made stumbling steps in wrong directions.
When pointed out, they respond, in effect, in the immortal words of Peewee Herman,
"I meant to do that"... https://gifs.com/gif/pee-wee-herman-i-meant-to-do-that-mPElxr
Are there possible breakthroughs that will make all current CAS so obsolete that they must
all be tossed in the trash? If so, I haven't seen them yet. Can current CAS be improved? Sure,
but some improvements will be difficult.
Richard Fateman
--
Waldek Hebisch
Again, apologies for a delayed response, but thanks for your long note!!!
My comments are interspersed.
rjf

On Monday, February 7, 2022 at 6:32:30 PM UTC-8, anti...@math.uni.wroc.pl wrote:
Richard Fateman <fat...@gmail.com> wrote:
On Sunday, January 2, 2022 at 6:40:15 AM UTC-8, anti...@math.uni.wroc.pl wrote:

There is also another possible problem: if code is written as
generic Lisp arithmetic, then for double precision it will be
much slower than a specialised one. So you really want several
versions of the code: real double precision, complex double precision,
and separate arbitrary-precision version(s) (you can probably use a
single version for real and complex arbitrary precision, but
it may be simpler to have 2 arbitrary-precision versions), and
possibly also a single-precision version (ATM in FriCAS there is
no support for single precision).
This is certainly true, and I think it is a basis for the Julia fans to say they can
make programs run faster by compiling -- on the fly -- specialized code for particular
types. In lisp, there is certainly the option to compile multiple versions of the same
procedure but with different type declarations. So I don't see that as a problem;
more like an opportunity.
Other code is potentially called as foreign function libraries (e.g. Gnu MPFR). I suppose that
any of the packages linked to (say) Sage or Julia could be called from Lisp, since there are
existing interfaces to C and Python. I don't know if anyone has done this, but again to be part
of the Maxima distribution it would have to be platform (hardware, software) agnostic.
So are these "special"? I don't know for sure, but I think they are certainly not dated.

Well, what you mention above are things outside the core. If you think
that they are important, then you should really go with the "new" systems; AFAICS they have this part much better than Maxima.
I doubt it.
If all you want to do is write a C or Python program to do bignum arithmetic, and you
are proficient in those languages, that might be better than using Maxima, especially if you
are unwilling to read the documentation, or download the binary code. (Better to spend
a week in the computer lab than an hour in the library.)
3. There is a continuing effort by a host of people who provide fixes, enhancements, and applications
in their own public library repositories. There are educational projects, and wholesale adoption of
Maxima in schools and school districts. There is an active Maxima mailing list.

You basically say that there is inertia: in short, for Maxima
folks it is easier to continue to use Maxima than to switch to
something else. True, inertia is a powerful force. But if you
look at the number of people behind various systems, Maxima would
look better than FriCAS, but worse than several "new" systems.
How many people are using Maxima? Between 11/1/2021 and 2/1/2022, 76,209 people downloaded
the Windows version of the binary. It is hard to know how many Unix versions, since they are included
in distributions, not downloaded directly from SourceForge.
4. If there were a set of standard benchmarks for "core" functionalities, there might be a basis for
testing whether Maxima's facilities are dated.

In arXiv preprint 1611.02569 Parisse gives an example of polynomial factorization and proposes a special method to solve it. ATM FriCAS
does not do very well on this example (2500s on my machine), but
certainly better than systems that cannot do it at all. However,
my estimate is that using up-to-date general methods it should take a fraction of a second (Magma was reported to need 3s, Parisse's special
method 2s). So, I would say that possibly all systems have
some work to do.
Parisse states in his conclusion that he hopes that other open-source CAS will implement
this method. It does not look too difficult to do in Maxima.
Factorization time has not
been raised as a limiting issue in a computation of interest that I am aware of. The particular
test example is large.
I am aware of the testing of indefinite integration of
functions of a single variable, comparing Rubi to various other systems. I have some doubts about
the measurements of Maxima, since they are done through the partly-blinded eyes of Sage. I have run
some of the "failed" Maxima tests through Maxima and found they succeed, and indeed find answers
that are simpler and smaller than some of the competition. So I would not judge from this.

The Rubi test suite has flaws, but I think that it actually overestimates
Maxima's capabilities. Namely, examples in the Rubi test suite are
arranged to be easily matchable to patterns.
I don't understand this comment.
That eliminates
work that would otherwise be needed to discover the true structure
of the integrand. AFAICS Maxima basically assumes that the integrand
can be handled by simple-minded heuristics.
This describes the first of 3 stages in the integration program. There are two more, described in J. Moses' paper in Comm. ACM. The third stage
is a (partial) implementation of the Risch algorithm.
It is possible
that the decomposition of the integrand in a differential field would be
more difficult starting from one form or another, but the integration
program has simplifications such as radcan() at its disposal.
This is true
for the Rubi test suite, but fails for more general cases.
Already random exp-log examples seem to cause trouble for Maxima.
These probably should be reported as bugs, then.
Let me add that, working on integration, I also test integration
on various systems. My examples are probably harder than
average.
Average? For what population of integration tasks? Examples taken from homework problems in Freshman calculus?
Anyway, it is pretty easy to come up with examples
that Maxima or Reduce cannot do. Maple is much better
there. And Mathematica is still better.
Finding problems where Mathematica fails is not difficult. Improving Maxima to have
a larger coverage of indefinite integrals is probably not hard either. I have been hoping
that wholesale adoption of Rubi would make this unnecessary!
If some of the newer CAS have "better" core algorithms -- like polynomial multiplication,
polynomial GCD, expansion in Taylor series -- it would be interesting to take note, and if so
with substantial likelihood they can be inserted into Maxima, or added in via libraries.

Hmm, you should know about Monagan and Pearce's work on polynomial operations. It is now included in Maple and gives significant
speedup on large polynomials.
I am aware of their results and corresponded with them. The point you make is correct...
"speedup on large polynomials". In my careful implementation of their ideas,
the results were a slowdown on all problems that were NOT exceedingly large. And
so not of great interest.
Figuring out comparisons of Maple to other systems is
hindered by the miscellaneous programming and data-structure decisions in Maple. Thus programs written in the underlying language (Margay? is that still in use?)
and in the Maple "user" language run at quite different rates.
Thus (only) in Maple
did it make sense to map certain polynomial problems into packed
integer problems. The integer computations were in the core; the polynomial problems were not. I don't know if this is still a prevalent feature of
the current Maple.
There is a long cycle of works by
Monagan and collaborators on polynomial GCDs and factorisation,
giving significant speedups compared to earlier methods.
I have expressed my concern above; a heuristic GCD that maps polynomials
into (big) integers is the kind of "optimization" that might work in Maple and
nowhere else. Or maybe it does work ...
See this 1995 paper
https://dl.acm.org/doi/abs/10.1145/220346.220376
IIUC
several of them are included in Maple. Some variations are
implemented in FriCAS. As an illustration, let me say that
for GCDs with algebraic coefficients FriCAS previously used
subresultant GCD (claimed to be a very good implementation
of subresultant GCD). Switching to modular methods in the
style of Monagan and van Hoeij gave a large speedup. My
estimates suggest that using a different modular method
should be much faster. Monagan shows an example where
factorization in an algebraic extension using the older method
is prohibitively expensive, but the newer method works quite
well. In fact, in all his papers he gives examples, and IMO
it is clear that what he and his collaborators implemented
gives significant progress.
I would not endorse an algorithm for general use if it has only been implemented in Maple. It has to be tested on a more neutral
platform.
I looked a bit at the Maxima multivariate factorization code.
AFAICS this is the early Wang method, trying to use zero as the
evaluation point. This is reasonably fast if you are lucky,
but it may be quite slow otherwise. In modern times there
seems to be agreement that zero evals should be avoided:
there is too high a risk of degraded performance if you try
them.
If there is a better way to get around unlucky evaluation points than
is currently programmed in Maxima, perhaps someone will
implement it.
Anyway, already in the classic case of integer coefficients
Maxima lacks improvements in the handling of harder cases.
For algebraic coefficients IIUC there is a problem of
correctness (IIUC some Maxima methods silently produce
wrong results) and no (correct) support for modular
methods.
If you are aware of bugs, it should be easy to report them.
The state-of-the-art approach would use Hensel
lifting directly for algebraic coefficients. And
there is a well-developed methodology for exploiting
sparsity with a non-zero evaluation point. There are
heuristics for early failure and restart (to avoid a
huge loss of time due to a bad eval). There are new
approaches to the leading coefficient problem.
Again, reporting of bugs should be easy.
Of course, one could try to add this to the existing routines.
But IMO the needed changes are so large that it makes more
sense to start from scratch.
This hardly makes sense to me. Why should you have to write
a new computer algebra system to write an improved
polynomial factoring program?
Concerning power series expansions: AFAICS the FriCAS
routines are much better than the Maxima routines in this
area. Part of this is that FriCAS uses a lazy approach.
Part is that FriCAS has special handling for many
cases (so more code, but better performance).
Speed for truncated Taylor series (if that is what you mean)
has not usually been an issue. Maxima has other routines to
compute the general term of a power series (infinite series).
I don't know if FriCAS does that.
As another example, let me mention that recently on the Maxima
list there was a guy with a system of equations which Maxima
could not solve (IIRC Maxima ran out of memory). I tried
this system in FriCAS. After a little modification (which
matches the stated intent) the system was easily solvable
(a few seconds) in FriCAS. The original system was harder
(minutes or maybe tens of minutes) but worked too.
On the Maxima list the advice was: "you do not need to solve
it, the answer would be too complicated to see anything".
Well, the FriCAS answer was largish and gave me no
enlightenment. But sometimes answers are small despite the
large computations needed to obtain them.
Yes, sometimes. A system that may be many times faster again than FriCAS
is Fermat. I think that it would be worth testing, if you want to see
how fast fast is.
https://home.bway.net/lewis/
For instance, improved algebraic system solving, limits (e.g. D. Gruntz), manipulation of
special functions. The common notion that "Lisp is slow" and "C is fast", and that therefore
doing nothing other than writing in C is a step forward, I think is wrong.

Unfortunately, Lisp implementations are slow. IME the fastest is SBCL,
where normally I can get about half the speed of comparable C code.
So why are people writing in Python, which is (apparently) quite slow?
But at the Lisp level this code has complete type declarations and
uses specialized arrays and machine-compatible types. SBCL gives
slower code than C simply because the SBCL code generator cannot
optimize as well as the optimizers in C compilers.
I have looked at SBCL compiler generated code, which is easily viewed
by the lisp disassemble function. Complaints about SBCL code
can be sent to the SBCL mailing list.
In principle,
on such code ECL or GCL should do better, because the code could be
easily translated 1-1 to C. But somehow, both ECL and GCL
insert extra operations which kill performance.
Lisp compilers (some of them, anyway) generate assembly language.
However, that
was somewhat optimistic: when I really need performance, in
C I can tweak the code to use, say, SSE instructions, doing 2
or 4 times more work per instruction. Also, in gcc I have
easy access to the high part of a product of 64-bit numbers, which
in several problems roughly doubles performance compared to a
plain 64-bit product. So, using C, in many cases I could get 4-8
times faster code than going via SBCL.
If you wish to write code in assembler, I assume that it can be
linked to Lisp as well.
Recently I needed
to work with 2-dimensional arrays. To my dismay I discovered
that my code was much slower than I hoped, about 6 or 8
times slower than comparable C. It turned out that SBCL
(at least the version that I used) was unable to optimize
indexing for 2-dimensional arrays, and dutifully generated code
repeatedly fetching the dimension from the array object and
performing the multiplication implied by indexing...
Another thing is that Lisp encourages writing "generic" code,
which works for several types. That is nice for non-critical
code, but there is a rather large slowdown compared to well-typed
code with enough type info.
There is an advantage to generic code that works. Then you add
declarations to make it faster. There is an argument that type
declarations make it easier to debug. Maybe. I think that in some
cases the type declarations ARE the bug.
ATM I am staying with SBCL, but you can see that when it comes
to the speed of low-level code, Lisp is a problem.
I am not convinced that speed is such a problem. If it were, we would
all be using FERMAT.
There are areas where interested programmers could add to
a computer algebra system, and they might consider adding to Maxima; a few I've suggested
include an improved interval arithmetic system, and a way of using an inverse symbolic
calculator (see Stoutemyer's interesting paper https://arxiv.org/abs/2103.16720 )
I am more struck by the fact that "new" CAS have rarely improved on those core
capabilities, rarely moving in interesting directions.

I am not sure what you mean by "new" CAS above.
Any system that is newer than Macsyma circa 1970, or Reduce of that time.
In particular, Mathematica, Maple are new. Even though they are now rather old
compared to (say) Sympy.
There is a bunch of
systems that essentially take the view that core functionality does not
matter much or can be delegated to other systems, and instead concentrate
on packaging and presentation.
Yes, all the world is a web app.
Sadly, while not new, Maxima seems
to exhibit an equivalent attitude ("all important core work was done in the seventies").
I don't quite agree. I would say "If your system is a re-implementation of