On Tuesday, March 21, 2023 at 2:40:22 PM UTC-7, Alan Beck wrote:
> I have started to learn Assembler out of an old book.

Not so long after I started learning OS/360 Fortran and PL/I, I found
the compiler option for printing out the generated code in sort-of
assembly language. (Not actually assembleable, though.)

Compilers today don't write out the generated code in the same way,
and there aren't so many libraries around to read.

And, personally, 8086 is my least favorite to write assembly code in.

Learning C, and thinking about pointers and addresses, is a good start
toward assembly programming. C programming works so well that there
are only a few things you can't do in C, and so need assembly programs.
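For a concrete taste of that connection, here is a tiny C example of
my own (any compiler's actual output will differ in detail):

    #include <stdio.h>

    int main(void) {
        int x = 42;
        int *p = &x;   /* "address of x" -- the kind of thing an
                          assembly LEA or address constant computes */
        *p = *p + 1;   /* load through p, add, store back: the same
                          load/modify/store shape you write by hand
                          in assembly */
        printf("%d\n", x);   /* prints 43 */
        return 0;
    }

Compile it with a listing option (or -S on typical Unix compilers)
and compare the C lines with the generated instructions.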
Hello all,
I have started to learn Assembler out of an old book.
It is ancient (2003) but I don't think 8086 programming has changed
much. But the tools have.
I took assembly language in school but dropped out. Now I want another
go at it.
Would someone be my Mentor and answer a ton of questions that would
dwindle out as time went on?
If it's OK, we could do it here. Or netmail.
Books are from a bookstore.
[Please reply directly unless the response is related to compilers. -John]
gah4 <ga...@u.washington.edu> wrote:
> Compilers today don't write out the generated code in the same way,
Quite the opposite.
The standard on UNIXy systems is to write out assembly language to
a file, which is then further processed with the actual assembler.
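For example, with a GCC- or Clang-style driver the two steps are easy
to see (illustrative commands; exact flags vary by toolchain):

    /* hello.c
     *   cc -S hello.c     writes the generated assembly to hello.s
     *   cc -c hello.s     runs the actual assembler, giving hello.o
     */
    #include <stdio.h>

    int main(void) {
        printf("hello\n");
        return 0;
    }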
Well, to be sure that this is about compilers, my favorite complaint
is the lost art of small memory compilers. That is, ones that can
run in kilobytes instead of megabytes.
Not ones written in assembler. But it is possible to download
the source code to many libraries, for example glibc, and then
examine what it is compiled to.
On the Internet, there is a project for almost everything - in this
case Tiny C, which still seems to be under active development. Or
at least there are still commits at https://repo.or.cz/w/tinycc.git .
However, there is a reason why compilers got so big - there is
always a balance to be struck between compilation speed, compiler
size and optimization.
An extreme example: According to "Abstracting Away the Machine", the
very first FORTRAN compiler was so slow that the size of programs
it could compile was limited by the MTBF of the IBM 704 of around
eight hours.
The balance has shifted over time, because of increasing computing
power and available memory that can be applied to compilation,
and because the ratio of people who run programs to people who run
compilers is higher than ever before. So, in today's environment,
there is little incentive for writing small compilers.
Also, languages have become bigger, more expressive, more powerful,
more bloated (take your pick), which also increases the size
of compilers.
OK, the IBM PL/I (F) compiler, for what many consider a bloated
language, is designed to run (maybe not well) in 64K.
At the end of every compilation it tells how much memory was
used, how much available, and how much to keep the symbol table
in memory.
> OK, the IBM PL/I (F) compiler, for what many consider a bloated
> language, is designed to run (maybe not well) in 64K.
> At the end of every compilation it tells how much memory was
> used, how much available, and how much to keep the symbol table
> in memory.
It's... 30-some passes, iirc?
[Well, phases or overlays but yes, IBM was really good at slicing
compilers into pieces they could overlay. -John]
It is what IBM calls, I believe, dynamic overlay. Each module
specifically requests others to be loaded into memory. If there is
enough memory, they can stay, otherwise they are removed.
[Never heard of dynamic overlays on S/360. -John]
Fortran G was not written by IBM, but contracted out. And is not
(mostly) in assembler, but in something called POP. That is, it
is interpreted by the POP interpreter, with POPcode written using
assembler macros. Doing that, for one, allows reusing the code
for other machines, though you still need to rewrite the code
generator. But also, quite likely, it decreases the size of
the compiler. POP instructions are optimized for things that
compilers need to do.
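The general shape of such a compiler-writing interpreter is easy to
sketch in C (modern C for illustration only; the real thing was S/360
assembler macros, and the opcodes below are invented, nothing to do
with the actual POP instruction set):

    #include <stdio.h>

    enum op { PUSH, ADD, PRINT, HALT };

    struct insn { enum op op; int arg; };

    /* The portable part is the table of virtual instructions; only
       this loop (and the code generator) would be machine-specific. */
    static void run(const struct insn *ip) {
        int stack[64], *sp = stack;
        for (;;) {
            switch (ip->op) {
            case PUSH:  *sp++ = ip->arg;        break;
            case ADD:   sp--; sp[-1] += sp[0];  break;
            case PRINT: printf("%d\n", *--sp);  break;
            case HALT:  return;
            }
            ip++;
        }
    }

    int main(void) {
        const struct insn prog[] = {
            { PUSH, 2 }, { PUSH, 3 }, { ADD, 0 },
            { PRINT, 0 }, { HALT, 0 }
        };
        run(prog);   /* prints 5 */
        return 0;
    }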
After a look at "open software" I was astonished by the number of
languages and steps involved in writing portable C code. Also, updates
of popular programs (Firefox...) are delayed by months on some
platforms, IMO due to missing manpower on the target systems for checks
and the adaptation of "configure". Now I understand why many people
prefer interpreted languages (Java, JavaScript, Python, .NET...) to
simplify the building and distribution of their software.

What's the actual ranking of programming languages? A JetBrains study
does not list any compiled language in its first 7 ranks for 2022;
C++ follows at rank 8.

What does that trend mean to a compiler group? Interpreted languages
still need a front-end (parser) and back-end (interpreter), but don't
these tasks differ between languages compiled for hardware and
languages that are interpreted?

DoDi
On Sat, 25 Mar 2023 13:07:57 +0100, Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:
> After a look at "open software" I was astonished by the number of
> languages and steps involved in writing portable C code. Also, updates
> of popular programs (Firefox...) are delayed by months on some
> platforms, IMO due to missing manpower on the target systems for
> checks and the adaptation of "configure". Now I understand why many
> people prefer interpreted languages (Java, JavaScript, Python,
> .NET...) to simplify the building and distribution of their software.
Actually Python is the /only/ one of those that normally is
interpreted. And the interpreter is so slow the language would be
unusable were it not for the fact that all of its standard library
functions and most of its useful extensions are written in C.
On 3/26/23 1:54 AM, George Neuner wrote:
> On Sat, 25 Mar 2023 13:07:57 +0100, Hans-Peter Diettrich
> <DrDiettrich1@netscape.net> wrote:
>> After a look at "open software" I was astonished by the number of
>> languages and steps involved in writing portable C code. Also,
>> updates of popular programs (Firefox...) are delayed by months on
>> some platforms, IMO due to missing manpower on the target systems
>> for checks and the adaptation of "configure". Now I understand why
>> many people prefer interpreted languages (Java, JavaScript, Python,
>> .NET...) to simplify the building and distribution of their software.
> Actually Python is the /only/ one of those that normally is
> interpreted. And the interpreter is so slow the language would be
> unusable were it not for the fact that all of its standard library
> functions and most of its useful extensions are written in C.
My impression of "interpretation" was aimed at the back-end, where
tokenized (virtual machine...) code has to be brought to a physical
machine, with a specific firmware (OS). Then the real back-end has to
reside on the target machine and OS, fully detached from the preceding
compiler stages.
Then, from the compiler writer's viewpoint, it's not sufficient to
define a new language and a compiler for it; instead it must be placed
on top of some popular "firmware" like the Java VM, CLR or C/C++
standard libraries, or else a dedicated back-end and libraries have to
be implemented on each supported platform.
My impression was that the FSF favors C and ./configure for "portable"
code. That's why I understand that any other way is easier for the
implementation of really portable software, which needs no extra
tweaks for each supported target platform, for every single program.
Can somebody shed some light on the current practice of writing
portable C/C++ software, or any other compiled language, that
(hopefully) does not require additional human work before or after
compilation for a specific target platform?

DoDi
Right. When you work on a popular "managed" platform (e.g., JVM or
CLR), then its JIT compiler and CPU-specific libraries gain you any
CPU-specific optimizations that may be available, essentially for
free.
[The usual python implementation interprets bytecodes, but there are
also versions for .NET, the Java VM, and a JIT compiler. -John]
In article <23-03-029@comp.compilers>,
Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:
> My impression was that the FSF favors C and ./configure for
> "portable" code.
Like many things, this is the result of evolution. Autoconf is well
over 20 years old, and when it was created the ISO C and POSIX standards
had not yet spread throughout the Unix/Windows/macOS world. It and the
rest of the autotools solved a real problem.
Today, the C and C++ worlds are easier to program in, but it's still
not perfect and I don't think I'd want to do without the autotools,
particularly for the less POSIX-y systems, like MinGW and OpenVMS.
On Tuesday, March 28, 2023 at 1:14:29 AM UTC-7, Hans-Peter Diettrich wrote:
(snip)
> Then, from the compiler writer's viewpoint, it's not sufficient to
> define a new language and a compiler for it; instead it must be
> placed on top of some popular "firmware" like the Java VM, CLR or
> C/C++ standard libraries, or else a dedicated back-end and libraries
> have to be implemented on each supported platform.
From an announcement here today of an ACM-organized conference:
"We encourage authors to prepare their artifacts for submission
and make them more portable, reusable and customizable using
open-source frameworks including Docker, OCCAM, reprozip,
CodeOcean and CK."
> In article <23-03-029@comp.compilers>,
> Hans-Peter Diettrich <DrDiettrich1@netscape.net> wrote:
>> My impression was that the FSF favors C and ./configure for
>> "portable" code.
> Like many things, this is the result of evolution. Autoconf is well
> over 20 years old, and when it was created the ISO C and POSIX
> standards had not yet spread throughout the Unix/Windows/macOS world.
> It and the rest of the autotools solved a real problem.
For systems like Matlab and Octave, and I believe also for Python
and many of the other higher-math languages, programs should spend
most of the time in the internal compiled library routines.
On 2023-03-28, Aharon Robbins <arnold@freefriends.org> wrote:
> Today, the C and C++ worlds are easier to program in, but it's still
> not perfect and I don't think I'd want to do without the autotools,
> particularly for the less POSIX-y systems, like MinGW and OpenVMS.
Counterpoint: Autotools are a real detriment to GNU project programs.
When a release is cut of a typical GNU program, special steps
are executed to prepare a tarball which has a compiled configure
script.
You cannot just do a "git clone" of a GNU program, and then run
./configure and build. You must run some "make bootstrap" nonsense, and
that requires you to have various Autotools installed, and in specific
versions!
Most Autotools programs will not cleanly cross-compile. Autotools is
the main reason why distro build systems use QEMU to create a virtual
target environment with native tools and libraries, and then build the
"cross-compiled" program as if it were native.
My TXR language project has a hand-written, not generated, ./configure
script. What you get in a txr-285.tar.gz tarball is exactly what you
get if you do a "git clone" and "git checkout txr-285", modulo
the presence of a .git directory and differing timestamps.
You just ./configure and make.
None of my configure-time tests require the execution of a program;
for some situations, I have developed clever tricks to avoid it.
gah4 <ga...@u.washington.edu> wrote:
> For systems like Matlab and Octave, and I believe also for Python
> and many of the other higher-math languages, programs should spend
> most of the time in the internal compiled library routines.
They should, but sometimes they don't.
If you run into things not covered by compiled libraries, but which
are compute-intensive, then Matlab and (interpreted) Python run
as slow as molasses, orders of magnitude slower than compiled code.
At the company I work for, I'm told each Python project will only
use a certain specified version of Python, which will never be changed
for fear of incompatibilities - they treat each version as a new
programming language :-|
To bring this back a bit towards compilers - a language definition
is an integral part of compiler writing. If the language definition
changes with every release, then each release effectively defines a
new language.
Often I had the impression that the author did not want the program
used on Windows machines. A kind of "source open for one specific OS
only" :-(
> When a release is cut of a typical GNU program, special steps
> are executed to prepare a tarball which has a compiled configure
> script.
> You cannot just do a "git clone" of a GNU program, and then run
> ./configure and build. You must run some "make bootstrap" nonsense,
> and that requires you to have various Autotools installed, and in
> specific versions!
> Most Autotools programs will not cleanly cross-compile. Autotools is
> the main reason why distro build systems use QEMU to create a virtual
> target environment with native tools and libraries, and then build
> the "cross-compiled" program as if it were native.
For instance, about a decade and a half ago I helped a company
replace Windriver cruft with an in-house distribution. Windriver's
cross-compiled Bash didn't have job control! Ctrl-Z, fg, bg stuff no
workie. The reason was that it was just cross-compiled straight, on an
x86 build box. It couldn't run the test to detect job control support,
and so it defaulted to off, even though the target machine had
"gnu-linux" in its string. In the in-house distro, my build steps for
bash exported numerous ac_cv_... internal variables to override the bad
defaults.
For some situations, I have developed clever tricks to avoid it. For
instance, if you want to know the size of a data type, you can find
it out without running anything; here is a fragment:
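A minimal sketch of that kind of compile-only size probe (the same
trick autoconf's AC_CHECK_SIZEOF falls back on for cross builds; the
file and macro names here are invented for illustration, not taken
from TXR):

    /* conftest.c -- compile-only probe for sizeof(long).  A driver
       script tries, e.g.:
           cc -c -DGUESS=2 conftest.c
           cc -c -DGUESS=4 conftest.c
           cc -c -DGUESS=8 conftest.c
       Exactly one candidate compiles: a negative array size is a
       compile-time error, so no test program ever has to run on
       either the build machine or the target. */
    int size_is_right[(int)sizeof(long) == GUESS ? 1 : -1];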
For a Unix, there were a few hoops we had to jump through to make
Gforth work: e.g., IRIX 6.5 had a bug in sigaltstack, so we put in a
workaround for that; HP/UX's make dealt with files with the same mtime
differently from other makes, so we put in a workaround for that.
Windows, even with Cygwin, puts up many more hoops to jump through;
Bernd Paysan actually jumped through them for Gforth, but a Windows
build is still quite a bit of work, so he does that only occasionally.
On 4/2/23 12:04 PM, Anton Ertl wrote:
> For a Unix, there were a few hoops we had to jump through to make
> Gforth work: e.g., IRIX 6.5 had a bug in sigaltstack, so we put in a
> workaround for that; HP/UX's make dealt with files with the same
> mtime differently from other makes, so we put in a workaround for
> that. Windows, even with Cygwin, puts up many more hoops to jump
> through; Bernd Paysan actually jumped through them for Gforth, but a
> Windows build is still quite a bit of work, so he does that only
> occasionally.
Too bad that not all existing OSes are POSIX compatible? ;-)
So my impression still is: have a language (plus library) and an
interpreter (VM, browser, compiler...) on each target system. Then
adaptations to a target system have to be made only once, for each
target, not for every single program.
Even for programs with extreme speed requirements, development can be
done from the general implementation, for tests etc., with a version
tweaked for a very specific target system afterwards, instead of a
single-target version in the first place and problematic ports to many
other platforms.
(G)FORTH IMO is a special case because it's (also) a development
system. Building (bootstrapping) a new FORTH system written in FORTH
is quite complicated, in contrast to languages with stand-alone tools
like compiler, linker etc.
Hans-Peter Diettrich <DrDiettrich1@netscape.net> writes:
You mean: Write your program in Java, Python, Gforth, or the like?
Sure, they deal with compatibility problems for you, but you may want
to do things (or have performance) that they do not offer, or only
offer through a C interface (and in the latter case you run into the
C-level compatibility again).
> (G)FORTH IMO is a special case because it's (also) a development
> system. Building (bootstrapping) a new FORTH system written in FORTH
> is quite complicated, in contrast to languages with stand-alone tools
> like compiler, linker etc.
Not really. Most self-respecting languages have their compiler(s)
implemented in the language itself, resulting in having to bootstrap.
AFAIK the problem Gforth has with Windows is not the bootstrapping;
packaging and installation are different than for Unix.
Hans-Peter Diettrich <DrDiettrich1@netscape.net> writes:
> On 4/2/23 12:04 PM, Anton Ertl wrote:
>> For a Unix, there were a few hoops we had to jump through to make
>> Gforth work: e.g., IRIX 6.5 had a bug in sigaltstack, so we put in a
>> workaround for that; HP/UX's make dealt with files with the same
>> mtime differently from other makes, so we put in a workaround for
>> that. Windows, even with Cygwin, puts up many more hoops to jump
>> through; Bernd Paysan actually jumped through them for Gforth, but a
>> Windows build is still quite a bit of work, so he does that only
>> occasionally.
> Too bad that not all existing OSes are POSIX compatible? ;-)
Like many standards, POSIX is a subset of the functionality that
programs use. Windows NT used to have a POSIX subsystem in order to
make WNT comply with FIPS 151-2, which was needed to make WNT eligible
for certain USA government purchases. From what I read, it was useful
for that, but not much else.