Because FORTH Code is far more condensed and efficient, how many lines of FORTH Code would I need to recode a 10000 LOC C application?
OK, let's see:
„Hello, world” program in C:
#include <stdio.h>
int main()
{
printf("Hello, world!");
return 0;
}
„Hello, world” program in Forth:
.( Hello, world! )
It seems the Forth program takes about 1/6 the lines of the equivalent C program.
It's been a while since I looked at LOC much, but don't they actually use SLOC, which removes the cruft and only counts functional lines?
Andreas Neuner <andreas...@w3group.de> writes:
Because FORTH Code is far more condensed and efficient, how many lines of FORTH Code would I need to recode a 10000 LOC C application?
Depends on what you mean by "recode". Wil Baden has taken some
Another approach is to write the application from requirements.
Somewhat in that direction is
http://www.euroforth.org/ef99/ertl99.pdf, where I compared parser
- anton
--
M. Anton Ertl http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
New standard: https://forth-standard.org/
EuroForth 2022: https://euro.theforth.net
Because FORTH Code is far more condensed and efficient, how many lines
of FORTH Code would I need to recode a 10000 LOC C application?
Only as a raw estimation. I'm sure there are people around here who have recoded some C apps.
Thank you
Best Wishes
Andreas
I'm sure there are people around here who have recoded some C apps.
And sometimes (maybe most of the time?) Forth coders build a solution that
would not make sense in C. My favourite example is making an Assembler.
The C program will be a substantial project because it has to create everything.
A Forth assembler is normally a set of tiny "assemblers" for each instruction.
These little "assemblers" are knitted together to make programs.
On Saturday, May 27, 2023 at 7:00:02 PM UTC+2, Andreas Neuner wrote:
Because FORTH Code is far more condensed and efficient, how many lines
of FORTH Code would I need to recode a 10000 LOC C application?
Only as a raw estimation. I'm sure there are people around here who have recoded some C apps.
Very difficult to say without more information on what needs to be done.
The Forth code can be very compact because normally Forth will be extended
to fulfil the requirements of the application. Size also depends on the amount of
special libraries that are needed and, of course, the programmer.
An (atypical) example is a SPICE-like circuit simulator. In iForth this is 10,000
LOC (including comments) in 5 files. The open source ngspice simulator
counts 16,481 source files with a total size of 2.4 GB.
-marcel
On Saturday, May 27, 2023 at 2:04:47 PM UTC-4, Zbig wrote:
[..]
It seems the Forth program will take about 1/6 lines of similar C program.
It's been a while since I looked at LOC much, but don't they actually
use SLOC, which removes the cruft and only counts functional lines?
Rick C.
In article <fb7e7f22-2b83-48e2...@googlegroups.com>,
Lorem Ipsum <gnuarm.del...@gmail.com> wrote:
On Saturday, May 27, 2023 at 2:04:47 PM UTC-4, Zbig wrote:
[..]
It's been a while since I looked at LOC much, but don't they actually
use SLOC, which removes the cruft and only counts functional lines?
In Forth I look at WOCs (words of code), similarly stripping comments:
----------
#!/bin/sh
sed -e 's/\\ .*//' \
    -e 's/( [^)]*)//g' \
    -e '/\<DOC\>/,/\<ENDDOC\>/d' \
    -e 's/\\D .*//' "$1" |
wc -w
----------
My crc facility is 62 WOC.
But look at the code: the table is generated on the fly;
a table included in C takes dozens of LOCs, or 512 WOCs.
This cannot be done so compactly in C.
----------------------------------------------------
( CRC-MORE CRC ) CF: ?32 \ AvdH C2feb27
"BOUNDS" WANTED "-scripting-" WANTED HEX
\ Well the polynomial
EDB8,8320 CONSTANT CRC32_POLYNOMIAL \ CRC-32K
\ Auxiliary table with values for single bytes.
CREATE CRCTable
100 0 DO I 8 0 DO
DUP >R 1 RSHIFT R> 1 AND IF CRC32_POLYNOMIAL XOR THEN
LOOP , LOOP
\ For initial CRC and BUFFER COUNT pair, leave the updated CRC
: CRC-MORE BOUNDS ?DO DUP I C@ XOR 0FF AND CELLS CRCTable + @
SWAP 8 RSHIFT XOR LOOP ;
\ For BUFFER COUNT pair, leave the CRC .
: CRC -1 ROT ROT CRC-MORE INVERT ;
DECIMAL
----------------------------------------------------
I think any "fair" comparison is bound to give C an unfair advantage.
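For comparison, here is a minimal C sketch of the same idea: building the CRC-32 byte table at run time instead of embedding 256 literals in the source. It mirrors the Forth code above (reflected polynomial 0xEDB88320, init -1, final invert); the names are mine, not from any particular library.

```c
#include <stdint.h>
#include <stddef.h>

static uint32_t crc_table[256];

/* Build the byte-wise lookup table at run time, like the Forth
   CREATE CRCTable loop above (reflected polynomial 0xEDB88320). */
static void crc_init(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int k = 0; k < 8; k++)
            c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
        crc_table[i] = c;
    }
}

/* CRC-MORE: fold a buffer into a running CRC. */
static uint32_t crc_more(uint32_t crc, const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        crc = crc_table[(crc ^ buf[i]) & 0xFF] ^ (crc >> 8);
    return crc;
}

/* CRC: standard CRC-32 of a buffer (init -1, final invert). */
static uint32_t crc32(const unsigned char *buf, size_t len)
{
    return ~crc_more(0xFFFFFFFFu, buf, len);
}
```

So the run-time table generation is possible in C too; the fair point is that the Forth version folds the generator, the table and the words into a handful of lines.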
On Saturday, May 27, 2023 at 1:00:02 PM UTC-4, Andreas Neuner wrote:
Because FORTH Code is far more condensed and efficient, how many lines
of FORTH Code would I need to recode a 10000 LOC C application?
Only as a raw estimation. I'm sure there are people around here
who have recoded some C apps.
Thank you
Best Wishes
Andreas
This may be of some use to compare C to Forth in a real project.
Here is an exercise to make a LISP interpreter in many languages.
https://github.com/kanaka/mal/tree/master/impls
I was surprised to see a GForth implementation there.
The wild card is that these LISPs are built to a recipe for the course
and so might not fully use idiomatic methods that take advantage
of the implementation language's features. (?)
In article <00e4b748-0cc2-4143...@googlegroups.com>,
Brian Fox <bria...@brianfox.ca> wrote:
On Saturday, May 27, 2023 at 1:00:02 PM UTC-4, Andreas Neuner wrote:
Because FORTH Code is far more condensed and efficient, how many lines
of FORTH Code would I need to recode a 10000 LOC C application?
Only as a raw estimation. I'm sure there are people around here
who have recoded some C apps.
Thank you
Best Wishes
Andreas
This may be of some use to compare C to Forth in a real project.
Here is an exercise to make a LISP interpreter in many languages.
https://github.com/kanaka/mal/tree/master/impls
This is why github sucks.
1. There is no way to estimate the size of source files
2. There is no description of source files
You have to click on files to see the content and where it fits in the
whole. All the source combined for dozens of languages takes only 3.6 Mbyte
(that is byte with an M, not byte with a G).
That is indicative of a toy lisp only.
--
Don't praise the day before the evening. One swallow doesn't make spring. You must not say "hey" before you have crossed the bridge. Don't sell the hide of the bear until you shot it. Better one bird in the hand than ten in the air. First gain is a cat spinning. - the Wise from Antrim -
On Monday, 29 May 2023 at 11:33:41 UTC+1, none albert wrote:
[..]
His name is Joel Martin, and you can find videos documenting the process on
youtube. Here's an example if you are interested: https://youtu.be/jVhupfthTEk
I made this sheet using the Github LOC and size data for the LISP project.
Forth uses 59% less code in the "CORE" file but not as much in the "step" lesson files.
I can't be sure, but this might indicate that there was less Forth-style factoring
in the "recipe" used to build the project(?)
https://docs.google.com/spreadsheets/d/1BHJY-odMvyV2e5MhcfVPlctEaMGz9hdlLDkGZyx7CzI/edit?usp=sharing
I'd say it is a wash. Realize that step_A contains the whole clojure
and that the steps are incremental.
E.g. going from 9 to A there are only 18 lines different:
diff step9_try.fs stepA_mal.fs | wc -l
18
run contains:
gforth stepA_mal.fs
At the start Forth loses when getting up to speed building the
necessary structures, which is about normal.
I'm going to attempt it in ciforth, mapping the lisp objects to
Forth header structures, mapping using the fields {code data link name flags }
appropriately.
Environments map to wordlist defined by links.
In ciforth they can be chained and be nested.
Possibly use my mini classes.
Lessons 1 and 2 taught me why my previous attempt at lisp failed.
It is absolutely mandatory to first build the parse tree,
then do the next step. If you attempt to do it in the Forth
fashion, step by step, that is hard (or even impossible).
Groetjes Albert
Because FORTH Code is far more condensed and efficient, how many lines of FORTH Code would I need to recode a 10000 LOC C application?
Only as a raw estimation. I'm sure there are people around here who have recoded some C apps.
Thank you
Best Wishes
Andreas
Couldn't the Rosetta Code website be used to compare C and Forth
solutions to a variety of problems?
On Wednesday, May 31, 2023 at 6:00:58 PM UTC+2, Gerry Jackson wrote:
[..]
Couldn't the Rosetta Code website be used to compare C and Forth
solutions to a variety of problems.
I don't think so: Rosetta's point appears to be "...illustrate factors that
relate or separate languages..."
Even the original question is not very precise: what does "efficient"
mean exactly? Less energy used to flip bits?
-marcel
In article <7f127372-5c63-45d3...@googlegroups.com>,
Marcel Hendrix <m...@iae.nl> wrote:
On Wednesday, May 31, 2023 at 6:00:58 PM UTC+2, Gerry Jackson wrote:
[..]
Couldn't the Rosetta Code website be used to compare C and Forth
solutions to a variety of problems?
I don't think so: Rosetta's point appears to be "...illustrate factors
that relate or separate languages..."
Even the original question is not very precise: what does "efficient"
mean exactly? Less energy used to flip bits?
-marcel
Rosetta shows the strengths or weaknesses of the different languages.
However I'm annoyed with the imprecise problem statements.
Also the problems are too small to really be an indication for projects.
Groetjes Albert
On Thursday, June 1, 2023 at 2:37:45 PM UTC+2, none albert wrote:
[..]
[..] Here is an exercise to make a LISP interpreter in many languages.
If harnessing the Forth power to parse lisp,
there are two questions to address.
Just throw gray at it. Gray is a Really Good Program.
-marcel
In article <00e4b748-0cc2-4143...@googlegroups.com>,
Brian Fox <bria...@brianfox.ca> wrote:
[..]
My premise is that INTERPRET (the Forth parser) is powerful
enough to parse lisp. I reject the idea of a tedious
character by character parser, such as seen in Schani lisp
or in the mal example.
If harnessing the Forth power to parse lisp,
there are two questions to address.
I replace the nomenclature "word" by "token", which is more
suitable, and can be used for Forth too.
Look at the lisp expression
(+ 1 (* 3 10))
This is equivalent to : 3 10 * 1 +
We need to parse this, splitting it into tokens, that map to
language concept/type/whatever.
I. The '(' in '(+' is a token in its own right.
This is solved in ciforth by making "(" a prefix.
II. The '10' in '10))' is a token.
This is solved in ciforth by making ')' a delimiter character.
I have discussed how adding PREFIX to a Forth costs a mere 5
lines or less (in a well designed Forth) and this is not repeated
here. This is present in ciforth since 2001.
You can imagine a word ?START that returns whether a character is
a delimiter. Approximately:
: lisp-delimiters ":[](){};" ;
Previous NAME (formerly known as "BL WORD") looked as follows.
It uses PP@@ . This fetches the next character, leaves a pointer
to the character and advances PP (approximately >IN)
NAME leaves a string constant (adr len) with the advantage that
the string is not copied.
: NAME ( -- sc)
BEGIN PP@@ ?BLANK WHILE DROP REPEAT ( first non blank )
BEGIN PP@@ ?BLANK NOT WHILE DROP REPEAT ( first blank )
( start end -- sc ) OVER - ;
Now we need this replaced by
: ?DELIM DUP ?BLANK SWAP ?START OR ;
: TOKEN ( -- sc)
BEGIN PP@@ ?BLANK WHILE DROP REPEAT ( first non blank )
BEGIN PP@@ DUP ?DELIM NOT WHILE DROP REPEAT ( first delim)
( start end -- string ) OVER - ;
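For readers who don't speak ciforth, the same splitting rule can be sketched in C (a hypothetical helper of my own, not part of ciforth or mal): skip blanks, then either take one delimiter character as a token by itself, or collect characters until a blank or delimiter.

```c
#include <string.h>
#include <ctype.h>

/* Characters that end a token and are tokens in their own right,
   mirroring the lisp-delimiters string ":[](){};" above. */
static int is_delim(int c) { return c && strchr(":[](){};", c) != NULL; }

/* Return the next token from *pp as (start, *len); advance *pp past it.
   Like NAME/TOKEN, the string is not copied.  Returns NULL at end. */
static const char *next_token(const char **pp, size_t *len)
{
    const char *p = *pp;
    while (*p && isspace((unsigned char)*p)) p++;   /* first non blank */
    if (!*p) { *pp = p; return NULL; }
    const char *start = p;
    if (is_delim(*p)) p++;                          /* '(' , ')' etc.  */
    else while (*p && !isspace((unsigned char)*p) && !is_delim(*p)) p++;
    *len = (size_t)(p - start);
    *pp = p;
    return start;
}
```

On "(+ 1 (* 3 10))" this yields the tokens "(", "+", "1", "(", "*", "3", "10", ")", ")", which is exactly the split the prefix/delimiter trick gives INTERPRET for free.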
Now lisp can be parsed by INTERPRET once the wordlists are created
and installed and NAME is revectored to TOKEN.
This is all the words that need to be present in
the wordlist lisp-ns :
----------------------------------------
lisp-ns SET-CURRENT
\ An empty prefix matches everything, sealing the `lisp-ns namespace.
: catch-all lisp-symbol ; PREFIX "" LATEST >NFA @ $!
'lisp-number aliases: 0 1 2 3 4 5 6 7 8 9
: ( lisp-list ; PREFIX
'EXIT ALIAS ) PREFIX
IMPORT FORTH .l .S .symtab inc lisp-off
FORTH-WORDLIST SET-CURRENT
----------------------------------------
lisp-list lisp-number lisp-symbol are part of lisp proper, not of the
parser; they build data structures that may differ among lisps, and
they are not the subject here.
Let us start at the bottom.
Following the IMPORT are alien words that are
recognized even in lisp, e.g. FORTH escapes to Forth, .S shows
a stack dump, etc. They can be left out.
The word ( builds a lisp-list, like so:
: lisp-list #list INTERPRET \ #list is used as a sentinel.
0 BEGIN OVER #list <> WHILE BUILD-pair REPEAT NIP ;
So tokens are accumulated on the stack using a recursive(!) call
of INTERPRET that observes the input stream.
It ends with the token ), which is just an alias of EXIT,
so that this call of INTERPRET ends.
Then there is the meat.
'lisp-number aliases: 0 1 2 3 4 5 6 7 8 9
Single digit prefixes parse a number. No surprise there, familiar
technique in ciforth.
And then the surprising
: catch-all lisp-symbol ; PREFIX "" LATEST >NFA @ $!
catch-all catches the remaining tokens such as + .
But you say "+" doesn't match "catch-all" at all!
Changing the >NFA however to an empty string, and making
it a prefix, changes that, matching everything.
"+" is "" followed by "+" , check!
lisp-symbol does the rest.
The 5-line lisp parser qualifies for a Jeff-Fox-hyped 100-times increase
in compactness, compared to the parser presented in mal-forth.
NOTE: I have reworked the Schani lisp interpreter (gforth):
https://github.com/schani/forthlisp
This parser has been lifted from the reworked lisp.
In the Clojure I expect to add a few more lines, e.g. for fixed objects
like false true nil .
Marcel Hendrix <mhx@iae.nl> writes:
Just throw gray at it. Gray is a Really Good Program.
Thank you. But I think it's overkill for parsing S-Expressions (Lisp
Syntax).
- anton
[..]
Albert, I am wondering: if you built the primitive operations as postfix operations,
could you make the whole thing look "LISPy" with:
50 LIFO: XSTK \ a stack
: ( ' XSTK PUSH ;
: ) XSTK POP EXECUTE ;
I know it works with Forth primitives but LISP may push
this simple idea too hard.
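Brian's two-stack idea can be sketched in C (my own toy, with hypothetical names, binary-only operators, and a trivial operator lookup): '(' defers the operator that follows it on an auxiliary stack, numbers go on the data stack, and ')' pops the deferred operator and executes it.

```c
#include <stdlib.h>
#include <ctype.h>

typedef void (*op_fn)(void);

static int   ds[64]; static int dsp;  /* data stack                  */
static op_fn xs[64]; static int xsp;  /* XSTK: stack of deferred ops */

static void dpush(int v) { ds[dsp++] = v; }
static int  dpop(void)   { return ds[--dsp]; }

static void op_add(void) { int b = dpop(); dpush(dpop() + b); }
static void op_mul(void) { int b = dpop(); dpush(dpop() * b); }

/* Evaluate e.g. "(+ 1 (* 3 10))": '(' pushes the following operator
   on XSTK, numbers go on the data stack, ')' does POP EXECUTE. */
static int eval(const char *s)
{
    dsp = xsp = 0;
    while (*s) {
        if (isspace((unsigned char)*s)) {
            s++;
        } else if (*s == '(') {
            do s++; while (isspace((unsigned char)*s));
            xs[xsp++] = (*s == '+') ? op_add : op_mul;  /* defer the op */
            s++;
        } else if (*s == ')') {
            xs[--xsp]();                                /* POP EXECUTE  */
            s++;
        } else {
            char *end;
            dpush((int)strtol(s, &end, 10));
            s = end;
        }
    }
    return dpop();
}
```

As Brian suspects, real LISP pushes this harder: operators are variadic, so ')' would also need to know how many arguments accumulated since the matching '(', which this sketch sidesteps by assuming two.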
I think it also depends on coding style. In C, I put multiple statements on one line when doing so doesn't make it look cluttered. Many people only put one statement per line.
Also, IMO, Forth lends itself to single, multiple statement lines.
In the end though, I'd guess it would probably be somewhere around 1/5 of the number of lines of C code.
ccur...@gmail.com wrote on Monday, June 5, 2023 at 15:22:34 UTC+2:
[..]
What meets the eye as lines is rather irrelevant for modern C compilers.
Preprocessor, libraries, AST, conversion to some IL with or without first
optimizations like constant folding, et cetera, make a big hodgepodge of
any source.
When you really want expressiveness, use a higher level language than C.
But within Forth's tiny niche, Forth is a fit choice. ;o)
On 6/06/2023 12:30 am, minforth wrote:
[..]
When you really want expressiveness, use a higher level language than C.
Over which the programmer has even less control and influence.
But within Forth's tiny niche, Forth is a fit choice. ;o)
Forth requires one to 'line up your ducks' in a way C doesn't. I recently
had a forth word that was rather clumsy and comprised:
13 logical steps (let's call them LOCs)
43 tokens (@ ! etc)
166 bytes
After two days work (involved reworking support functions) got it down to:
11 logical steps
21 tokens
72 bytes
'LOCs' remained relatively unchanged but a very different outcome. I don't know whether C lets one write bad code (perhaps it's all bad :) but Forth certainly does.
dxforth wrote on Tuesday, June 6, 2023 at 04:06:54 UTC+2:
[..]
I learnt the hard and long way to prefer code readability and maintainability anytime over short and clever trick programming.
Only you can estimate when your two days invested for optimization work
will break even by some gained nanoseconds of speed.
On Tuesday, June 6, 2023 at 8:57:21 AM UTC+2, minforth wrote:
[..]
I learnt the hard and long way to prefer code readability and maintainability
anytime over short and clever trick programming.
Only you can estimate when your two days invested for optimization work
will break even by some gained nanoseconds of speed.
For the past 40 years I have struggled with slow computers and code that
is too slow for what I want to do. Given doubled speed every 18 months,
my present computer is 40 12 * 18 / 1+ 2^x . or 134,217,728 times more
powerful than the one I had in 1983 (it's probably only 10^5 times faster
because software has become 10^5 times less efficient). I think we can
safely say that more computing power allows us to solve, or uncovers,
bigger problems.
I agree that we have a budget of 10,000x to improve readability
and maintainability.
-marcel
Marcel Hendrix wrote on Tuesday, June 6, 2023 at 09:23:56 UTC+2:
[..]
I started with Forth "professionally" around 1979, doing motor control
using a SYM-1 board. The speed was barely up to the task, but the
requirements were lower back then. You had to check the running Forth
with an oscilloscope. :)
In article <2e47b4e3-a5d1-4b72-8635-98b0b6d03dd4n@googlegroups.com>,
minforth <minforth@arcor.de> wrote:
<SNIP>
I learnt the hard and long way to prefer code readability and maintainability
anytime over short and clever trick programming.
I guess I'm a lucky guy. My programming talent is so low that
I'm never tempted to do clever tricks. I'm forced to not only maintain
readability but to meticulously document the specification of every
word I write. Lately I tend to write several tests for every word.
In article <2e47b4e3-a5d1-4b72...@googlegroups.com>,
minforth <minf...@arcor.de> wrote:
<SNIP>
Only you can estimate when your two days invested for optimization work
will break even by some gained nanoseconds of speed.
Most of my optimisation work in maintenance consisted of burning down
clever tricks, unexpectedly resulting in a gain in speed.
Now comes creativity: with the algorithm visibly in front of my eyes I
sometimes "out of the blue" just detect or invent new ways or shorter
paths through the algorithm.
It is well known that improved algorithms are the best way towards speed gains.
On Tuesday, June 6, 2023 at 1:28:21 PM UTC+2, minforth wrote:
[..]
It is well known that improved algorithms are the best way towards speed gains.
I'm currently working on least-squares methods, and take a bit more
time than usual to try and understand the algorithms by implementing
them from scratch. Some illustrations to your statements:
Mathematical notation is quite good at hiding details that are critical
to *real* understanding. A case in point is something like
"I - phi^T * gamma", which can be read with an "Oh, yeah" attitude, but
actually 1. is not obvious when phi is a vector of function pointers,
and 2. the size of "I" is (in my textbook) not stated explicitly. Worse,
some authors use the symbol I for matrices of *any size* (including 1 x 1),
anywhere in the code.
By not using the canned routines, surprising nuggets can be found. E.g.,
the standard LS routine works with all the data at once and potentially
has humongous space and time complexity. By use of the Sherman–Morrison–Woodbury formula it is possible to unwind the
code to a recursive RLS routine that only needs the current input and
output plus a few state variables, with matrix inversion degenerated to
a single division. This idea is of course useful in many places, and maybe it also works to transform recursion into a massive linear equation.
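A minimal sketch of the idea for the scalar case (my own toy example, not Marcel's code): with a single parameter, the matrix inversion in the recursive least-squares update indeed degenerates to one division, and only the current sample plus two state variables are needed.

```c
/* Recursive least squares for one parameter theta in y = theta*x.
   P is the (scalar) inverse-covariance state; lambda is a forgetting
   factor (1.0 = no forgetting).  No batch matrix algebra needed. */
typedef struct { double theta, P, lambda; } rls1;

static void rls1_init(rls1 *r, double lambda)
{
    r->theta = 0.0;
    r->P = 1e6;          /* large initial uncertainty */
    r->lambda = lambda;
}

static void rls1_update(rls1 *r, double x, double y)
{
    /* Gain: the "matrix inversion" is this single division. */
    double k = r->P * x / (r->lambda + x * r->P * x);
    r->theta += k * (y - x * r->theta);        /* correct by the residual */
    r->P = (r->P - k * x * r->P) / r->lambda;  /* shrink the uncertainty  */
}
```

Feeding noiseless samples of y = 2.5*x drives theta toward 2.5 within a few updates; the per-sample cost is constant, in contrast to the all-at-once LS formulation.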
Marcel Hendrix schrieb am Dienstag, 6. Juni 2023 um 19:29:25 UTC+2:[..]
On Tuesday, June 6, 2023 at 1:28:21 PM UTC+2, minforth wrote:
A fascinating topic![..]
My 5 cents (with far from being an expert here):
Adaptive filtering is only as good as the quality of the input signal, and the system
has to be nearly ergodic. Non-linear noise like spikes, A/D conversion jitter,
or transient process behaviour invalidates RLS results in the same way as using unfiltered input signals for an FFT. IOW, the coefficients found are often useless despite correct linear algebra routines.
Therefore we often used hybrid concepts:
So much for un-Forthish coding styles ... ;-)
Do you really think so?
On Tuesday, June 6, 2023 at 8:41:39 PM UTC+2, minforth wrote:
Marcel Hendrix wrote on Tuesday, June 6, 2023 at 19:29:25 UTC+2:[..]
On Tuesday, June 6, 2023 at 1:28:21 PM UTC+2, minforth wrote:
A fascinating topic!
My 5 cents (with far from being an expert here):
Adaptive filtering is only as good as the quality of the input signal, and the system
has to be nearly ergodic. Non-linear noise like spikes, A/D conversion jitter,
or transient process behaviour invalidates RLS results in the same way as using unfiltered input signals for an FFT. IOW, the coefficients found are often
useless despite correct linear algebra routines.
Therefore we often used hybrid concepts:[..]
That goes quite a bit farther than my copy of Astrom and Wittenmark, "Adaptive Control" 2nd Ed. ... Do you have a pointer for me?
On Saturday, May 27, 2023 at 1:00:02 PM UTC-4, Andreas Neuner wrote:
Because FORTH Code is far more condensed and efficient, how many lines of FORTH Code would I need to recode a 10000 LOC C application?
Only as a raw estimation. I'm sure there are people around here who have recoded some C apps.
Thank you
Best Wishes
Andreas
This may be of some use to compare C to Forth in a real project.
Here is an exercise to make a LISP interpreter in many languages.
https://github.com/kanaka/mal/tree/master/impls
I was surprised to see a GForth implementation there.
The wild card is that these LISPs are built to a recipe for the course
and so might not fully use idiomatic methods that take advantage
of the implementation language's features. (?)
Also, IMO, Forth lends itself to single, multiple statement lines.
For the past 40 years I have struggled with slow computers and code that is too slow for what I want to do. Given doubled speed every 18 months,
my present computer is 40 12 * 18 / 1+ 2^x . or 134,217,728 times more powerful than the one I had in 1983 (it's probably only 10^5 times faster because software has become 10^5 times less efficient).
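The RPN expression above can be checked directly. A quick sketch (Python's floor division mirroring Forth's integer /):

```python
# Reproduce 40 12 * 18 / 1+ 2^x from the post, with Forth-style integer division.
months = 40 * 12                # 480 months since 1983
doublings = months // 18 + 1    # 480/18 = 26, 1+ gives 27 doublings at 2x per 18 months
print(2 ** doublings)           # 134217728, the figure quoted in the post
```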
Marcel Hendrix <m...@iae.nl> writes:<snip>
For the past 40 years I have struggled with slow computers and code that is too slow for what I want to do. Given doubled speed every 18 months,
How did you measure that? I certainly have seen much less speedup in
the past 20 years for the stuff I care about.
The total speedup over 0.5.0 (32-bit) is a factor 3.2-4.48, the total
speedup over the 486 with the 0.5.0 prerelease is 454-705. Your
formula predicts a factor of 524288.
We are experiencing a surge in artificial intelligence.
It is because of neural nets, not computer languages.
Groetjes Albert
"ccur...@gmail.com" <ccur...@gmail.com> writes:
I also use extra spaces to logically separate words that go together. (y)
Also, IMO, Forth lends itself to single, multiple statement lines.
One of my most recent "aha!" moments with Forth was when I started to
use multiple spaces around sequences of words in order to express
the logical phrasing of the operations.
: myword ( u -- u' ) 1+ 2* dup . ;
Like that.
Andy Valencia
Home page: https://www.vsta.org/andy/
To contact me: https://www.vsta.org/contact/andy.html
On Wednesday, June 7, 2023 at 6:32:30 PM UTC+2, Anton Ertl wrote:
Marcel Hendrix <m...@iae.nl> writes:[..]
For the past 40 years I have struggled with slow computers and code that is too slow for what I want to do. Given doubled speed every 18 months,
How did you measure that? I certainly have seen much less speedup in
the past 20 years for the stuff I care about.
The total speedup over 0.5.0 (32-bit) is a factor 3.2-4.48, the total
speedup over the 486 with the 0.5.0 prerelease is 454-705. Your
formula predicts a factor of 524288.
I found the "2x in 18 months" figure somewhere. Evidently, it's slightly exaggerated. If run time were proportional to clock speed, I could
have calculated a doubling of runspeed every 38 months, say every
3 years. (In 1983 I had a 1 MHz Z80, nowadays a 5.5 GHz AMD 5800X).
With that figure I get a speedup of 64 over 18 years (2020-1992), which
means Gforth is doing better than expected (not surprising).
The way technology unfolds, I don't expect to see 11 GHz (or another brute-force way to get 2x faster) in the coming 3 years, which is rather worrisome.
-marcel--
The way technology unfolds, I don't expect to see 11 GHz (or another brute-force way to get 2x faster) in the coming 3 years, which is rather worrisome.
I'm not worried about 11 GHz. Exciting techniques replace it:
neural nets, photonic chips, quantum computing.
On Thursday, June 8, 2023 at 12:05:39 PM UTC+2, none albert wrote:
[..]
Neural nets, yes. It was one of the first demos I did in Forth, 40 years ago.
The way technology unfolds, I don't expect to see 11 GHz (or another brute-force way to get 2x faster) in the coming 3 years, which is rather worrisome.
I'm not worried about 11 GHz. Exciting techniques replace it:
neural nets, photonic chips, quantum computing.
Let's hope photons and quantums come faster.
With the average IQ continuously decreasing on this planet, I have my doubts.
Sadly, desktop/laptop PCs have not changed in any fundamental way since
their first market introduction, which can explain the observed factor difference.
But other bottlenecks like memory bus throughput, or physical effects
of wavelength limitation at higher clock frequencies, play another role.
Nevertheless, today's PCs lag grossly behind available technology. See:
https://www.offgridweb.com/preparation/infographic-the-growth-of-computer-processing-power/
Marcel Hendrix <m...@iae.nl> writes:[..]
1MHz Z80? Even the very first Z80 in 1976 clocked at 2.5MHz.[..]
OTOH, if you are interested in multiplying FP matrices, you will see
larger speedups between the 486 and the Ryzen 5800X, thanks to stuff
like:
* SIMD instructions (AVX)
* multi-core
and generally, modern hardware has improved more on throughput
(i.e. parallel) jobs than on latency-oriented stuff like the Gforth benchmarks.
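The hardware features listed above pay off mostly for tiled, throughput-oriented inner loops. As a rough illustration of the blocking idea that SIMD/cache-friendly matrix multiplies rely on (pure Python, so it only demonstrates that the tiled loop order computes the same result, not the speed):

```python
# Naive triple loop vs. a blocked (tiled) multiply. Real speedups come from
# SIMD and multi-core operating on such tiles; the tiling itself is shown here.

def matmul_naive(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matmul_blocked(a, b, bs=2):
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):          # iterate over tiles so each tile
        for kk in range(0, n, bs):      # can stay in cache / vector registers
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        aik = a[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            c[i][j] += aik * b[k][j]
    return c

a = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
b = [[float((i + j) % 3) for j in range(4)] for i in range(4)]
print(matmul_blocked(a, b) == matmul_naive(a, b))  # True
```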
On Thursday, June 8, 2023 at 4:59:02 PM UTC+2, Anton Ertl wrote:
[..]
OTOH, if you are interested in multiplying FP matrices, you will see
larger speedups between the 486 and the Ryzen 5800X, thanks to stuff
like:
* SIMD instructions (AVX)
* multi-core
and generally, modern hardware has improved more on throughput
(i.e. parallel) jobs than on latency-oriented stuff like the Gforth
benchmarks.
For NGSPICE, recompiling for AVX hardware makes zero difference.
I did some experiments coding my own SIMD matrix multiply in
assembler, but lots of effort only gave me a factor of ~2 increase
(others may do better), and no guarantee that my architecture-dependent tricks would still work after a few years.
For circuit simulation, the best algorithms/programs I have seen scale
up to about 6 cores (with horrendous effort), then very quickly level off. And of course, there is (or that is) Amdahl's law.
-marcel
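Amdahl's law makes that ~6-core ceiling easy to quantify. A sketch (the 85% parallel fraction is an assumed figure, chosen only to reproduce a ceiling near 6-7x, not a measured circuit-simulator number):

```python
# Amdahl's law: speedup levels off once the serial part of the work dominates.

def amdahl(p, n):
    """Speedup on n cores when fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 6, 44, 1000000):
    print(n, round(amdahl(0.85, n), 2))  # even infinite cores cap at 1/(1-p) ~ 6.7x
```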
For NGSPICE, recompiling for AVX hardware makes zero difference.
I did some experiments coding my own SIMD matrix multiply in
assembler, but lots of effort only gave me a factor of ~2 increase
(others may do better), and no guarantee that my architecture-dependent tricks would still work after a few years.
For circuit simulation, the best algorithms/programs I have seen scale
up to about 6 cores (with horrendous effort), then very quickly level off. And of course, there is (or that is) Amdahl's law.
I'm tempted to buy a double Xeon (refurbished) on AliExpress ($1200):
two times 22 cores / 44 threads, and 256 GByte ECC memory.
On Thursday, June 8, 2023 at 12:38:34 PM UTC+2, minforth wrote:
With the average IQ continuously decreasing on this planet, I have my doubts.
On that, I have no worries: it will always be 100.
On Thursday, June 8, 2023 at 4:29:21 PM UTC+2, Anton Ertl wrote:
Marcel Hendrix <m...@iae.nl> writes:[..]
1MHz Z80? Even the very first Z80 in 1976 clocked at 2.5MHz.[..]
My Sinclair ZX80 had a 3.25 MHz clock. Its 'slow mode' gave 80%
of the CPU time to the display. So my '1 MHz' is already much too generous.
On 2023-06-07, albert@cherry.(none) (albert) <albert@cherry> wrote:
I too saw it coming; I've similar ideas but was nowhere near.
We are experiencing a surge in artificial intelligence.
It is because of neural nets, not computer languages.
Probably since I was a child, I took it for granted that sufficiently
large neural networks will show intelligence: that it's just a number crunching problem requiring the hardware to have lots of space (and
parallel computing power in order to work in anything resembling real
time).
Not only that, but also that I will likely see this, or the start of it,
well within my lifetime. That dawn is here, pretty much.
I also already decided then that I wasn't interested in massive number crunching, regardless of its results.
What is more interesting is to see an intelligence develop,
The intelligence that results from it can at most be about as
interesting as people. We've had people for millennia, and only
a few of them have been interesting, so ...
I like my computations to be nice and tight, using a small amount of resources in order to produce a precise, repeatable result, whose
every aspect can be explained and traced to a piece of code, which
can be traced to a requirement.
Groetjes Albert
You should open your mind more. Firstly, the MAL ("make a lisp") project isn't a good way to learn about Lisp. It has a specific educational
goal and focus: to guide people through implementing Lisp-like
evaluation in any programming language, by following a certain recipe.
Production Lisps are not built according to such a recipe.
All compilers are based on C, because the chip manufacturers
This templating notation is heavily used in writing macros,
and other code-to-code transformation situations.
Sometimes it is used in manipulating data which isn't code, too.
Manipulating syntax is a different approach to metaprogramming than
what is going on in Forth; you're only selling yourself short
if you dismiss that casually without learning anything about it.
(And then dismissing it, if you're still inclined.)
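The point about manipulating syntax can be made concrete even outside Lisp. A rough Python analogue (nested lists standing in for Lisp forms; the `swap_ops` helper is invented purely for illustration):

```python
# Code-as-data: a list-encoded expression tree is rewritten by an ordinary
# function, roughly what Lisp macros do with quasiquoted templates.

def swap_ops(form):
    """Recursively rewrite (+ ...) forms into (* ...) forms."""
    if isinstance(form, list) and form:
        head, *rest = form
        head = '*' if head == '+' else head
        return [head] + [swap_ops(f) for f in rest]
    return form

expr = ['+', 1, ['+', 2, 3]]
print(swap_ops(expr))  # ['*', 1, ['*', 2, 3]]
```

The transformed list is itself a program that could be handed back to an evaluator, which is the essence of code-to-code transformation.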
On Thursday, June 8, 2023 at 9:16:06 PM UTC+2, none albert wrote:
[..]
I'm tempted to buy a double Xeon (refurbished) on AliExpress ($1200):
two times 22 cores / 44 threads, and 256 GByte ECC memory.
I bought a refurbished HP Z840 from Creoserver in Zwolle. A real tank.
You won't get 256 GByte ECC for that price, but their stuff is fully
tested and they deliver it personally at your door.
-marcel--
In article <88f7087a-3875-463e...@googlegroups.com>,
Marcel Hendrix <m...@iae.nl> wrote:
I bought a refurbished HP Z840 from Creoserver in Zwolle. A real tank.
You won't get 256 GByte ECC for that price, but their stuff is fully tested and they deliver it personally at your door.
I have an offer here from Maas: 256 GByte, 2 * Xeon 13 core,
potent graphics card. Euro 1600. Should I do it?
On Wednesday, June 14, 2023 at 5:54:11 PM UTC+2, none albert wrote:
In article <88f7087a-3875-463e...@googlegroups.com>,
Marcel Hendrix <m...@iae.nl> wrote:
I bought a refurbished HP Z840 from Creoserver in Zwolle. A real tank.
You won't get 256 GByte ECC for that price, but their stuff is fully
tested and they deliver it personally at your door.
I have an offer here from Maas: 256 GByte, 2 * Xeon 13 core,
potent graphics card. Euro 1600. Should I do it?
That's a Z640, not a Z840. A 13 core Xeon? Either 12 or 14.
They look like low clockspeed, maybe the RAM won't work with the
faster ones, or the PSU is too small for the maximum layout.
The vendor looks sound enough.
IIRC, the 640 has a lower power PSU that is difficult to replace.
The heatsinks on the CPUs are lower quality than those on the
840. Check out YouTube, it has sterling advice from real experts.
It's just my opinion, that rock probably does all you want it to do.
-marcel--