The C header <inttypes.h> provides implementation-specific mappings
between the C integer types "signed char", "short", "int", and "long"
and aliases that specify an exact number of bits, namely "int8_t",
"int16_t", "int32_t", and "int64_t". These names include the sign bit,
so they correspond to FIXED BINARY(7, 0), ... FIXED BINARY(63, 0).
However, there is no guarantee that any or all of them exist: on a
processor without 16-bit support like ARMn where n < 4, there is no int16_t.
Fortunately, <inttypes.h> also provides mappings to two other sets of
aliases: "int_least8_t", ... "int_least64_t" and "int_fast8_t", ...
"int_fast64_t". The idea is that "int_least16_t" is the *smallest* type
that can hold a 16-bit signed value, whereas "int_fast16_t" is the
*fastest* type (in terms of simple operations) that can hold a 16-bit
value, and all eight aliases are guaranteed to exist. Thus on ARM3 both
int_least16_t and int_fast16_t are necessarily 32-bit types, and even on
ARM4, where there are 16-bit memory reference instructions but no 16-bit
arithmetic instructions, int_fast16_t is still a 32-bit type. On my
64-bit Windows box, the least and exact types match the native types
1:1, but the fast types are "int8_t" for "int_fast8_t" and "int32_t" for
"int_fast16_t" and "int_fast32_t" ("int_fast64_t" must of course still
be 64 bits).
So what is the best way to map FIXED BINARY types to C types, "least"
(compact storage) or "fast" (faster execution)? This could be a
compiler switch, but all translation units will have to have the switch
set the same way, which is bad if you want to have PL/I libraries;
everything will have to be compiled both ways and stored in both
libraries. Perhaps PL/I array types should use "least" and scalar types
should use "fast"?
I would suggest that PL/I use whatever is convenient and specified internally (8, 16, 32, 64 bits, incl. sign). Calls to C functions can map the data in and out, since calls to C are “special” anyway.
C started out as a nice simple language, but has gotten too fancy for its own good.
On Sunday, October 2, 2022 at 8:22:27 PM UTC-4, bearlyabus...@gmail.com wrote:
> I would suggest that PL/I use whatever is convenient and specified
> internally (8,16,32,64 incl. sign) Calls to C functions can map the data in
> and out, since calls to C are “special” anyway.
If you recall, pli2c is going to compile to C, not to assembly language.
So I have to know what I'm doing here, and in C's terms. Actually it's better: a generic compiler isn't going to know all this stuff about
what's fast on what processors.
In any case, the question "small vs. fast" remains. It
wouldn't have occurred to me that when compiling for x86_64 you should
use 'long' if you want things to be fast, but gcc thinks so.
> C started out as a nice simple language, but has gotten too fancy for its
> own good.
Says the author of a PL/I compiler. But hey, the two languages have basically the same data model, which is what makes all my little plans
even practical for me.
John Cowan <cowan@ccil.org> wrote:
> > C started out as a nice simple language, but has gotten too fancy for its
> > own good.
>
> Says the author of a PL/I compiler. But hey, the two languages have
> basically the same data model, which is what makes all my little plans
> even practical for me.
The difference is that PL/I was almost completely defined from the start. C started out as a glorified assembler and has had stuff tacked on ever
since. At this point the two languages are similar.
On 10/2/22 6:00 PM, John Cowan wrote:
> [...] However, there is no guarantee that any or all of them exist: on a
> processor without 16-bit support like ARMn where n < 4, there is no
> int16_t.
Not true in the Apple world (ARM), where int16_t works just fine.
> [...] The idea is that "int_least16_t" is the *smallest* type
> that can hold a 16-bit signed value, whereas "int_fast16_t" is the
> *fastest* type (in terms of simple operations) that can hold a 16-bit
> value, and all eight aliases are guaranteed to exist. [...]
And “fast” seems to make no difference in the Apple world.
> So what is the best way to map FIXED BINARY types to C types, "least"
> (compact storage) or "fast" (faster execution)? [...] Perhaps PL/I array
> types should use "least" and scalar types should use "fast"?
It depends on how expensive storage and cycles are. In the modern
world—well, Apple’s Swift programming language just says that all
integers should be the signed 64-bit Int unless there is a compatibility
reason to do otherwise. Storage is cheap.
John W Kennedy <john.w.kennedy@gmail.com> wrote:
> It depends on how expensive storage and cycles are. In the modern
> world—well, Apple’s Swift programming language just says that all
> integers should be the signed 64-bit Int unless there is a compatibility
> reason to do otherwise. Storage is cheap.
This attitude is exactly what’s wrong today. “Cheap” is relative, and
developers usually have the biggest and most powerful machines, so what’s
cheap in terms of memory or CPU cycles to them may not be to others. I have
an older machine that ran quite well with 32-bit software. I upgraded to
64-bit, and many common programs, now 64-bit, use a lot more memory and run
worse (possibly due to paging) doing the same tasks. Sure, I could add more
memory, I suppose, or buy a new machine. I did buy a flash drive, intending
to use it for paging, until I was told that was a bad idea.
I had forgotten you’re compiling to C. Doesn’t POSIX guarantee minimum sizes for SHORT, LONG, etc.?
It would seem that, for portability, you
should pick the POSIX representation that best fits the PL/I definition.
Later you could use PL/I defaults to override.
On Monday, October 3, 2022 at 2:55:06 PM UTC-4, bearlyabus...@gmail.com wrote:
> I had forgotten you’re compiling to C. Doesn’t POSIX guarantee minimum
> sizes for SHORT, LONG, etc.?
Yes, it does.
> It would seem that, for portability, you
> should pick the POSIX representation that best fits the PL/I definition.
Well, that certainly *works*; the question is, is it efficient? And the answer is, it depends, and it's possible to do better.
> Later you could use PL/I defaults to override.
No DEFAULT statement in Subset G.