Simple question: What are the differences between Ada's
Universal_Integer and a typical bigint type?
By bigint I mean a signed integer which expands and contracts to be as
wide as needed.
I read a comment that one cannot define an Ada object as being of type Universal_Integer, but I wondered why not.
Wouldn't it make sense to have
Ada programs (or programs in another language, for that matter) treat
integer constants and expressions as of type bigint?
One additional point: In an expression which combines a constant (or
constant expression) with a declared object, the value of the undeclared constant expression would be automatically converted. For example, in
x + 4
the undeclared bigint 4 would be automatically converted to the type of
x as long as it was in range.
Would there be any practical problems with treating integer literals in
that way?
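For what it's worth, Ada already treats literals roughly this way: in `x + 4` the literal 4 has type universal_integer and is implicitly converted to x's type, with a range check. A minimal sketch (the type name Small is invented for illustration):

```ada
procedure Literal_Demo is
   type Small is range 0 .. 100;

   X : Small := 10;
   Y : Small := X + 4;   -- universal_integer literal 4 converted to Small
   --  Z : Small := 200; -- illegal: static value 200 is outside Small's range,
   --                       so the compiler rejects it rather than converting
begin
   null;
end Literal_Demo;
```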
On 2021-10-06 18:00, James Harris wrote:
Simple question: What are the differences between Ada's
Universal_Integer and a typical bigint type?
The incoming standard will have a bigint package in the standard library.
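That package is Ada.Numerics.Big_Numbers.Big_Integers (Ada 2022). A sketch of it computing a value too wide for any machine integer:

```ada
with Ada.Text_IO;
with Ada.Numerics.Big_Numbers.Big_Integers;
use  Ada.Numerics.Big_Numbers.Big_Integers;

procedure Big_Demo is
   F : Big_Integer := To_Big_Integer (1);
begin
   --  30! overflows 64-bit integers; a Big_Integer just grows as needed
   for I in 1 .. 30 loop
      F := F * To_Big_Integer (I);
   end loop;
   Ada.Text_IO.Put_Line (To_String (F));
end Big_Demo;
```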
By bigint I mean a signed integer which expands and contracts to be as
wide as needed.
I read a comment that one cannot define an Ada object as being of type
Universal_Integer but I wondered why not.
The reason was not to burden small targets.
But the main difference is that a universal type is considered a member
of each type hierarchy, e.g. each integer type is a subtype of Universal_Integer. This is why these are all correct:
A : array (1 .. 10) of Boolean;
I : Integer := A'Length;               -- Universal_Integer becoming Integer
J : Interfaces.Integer_64 := A'Length; -- Universal_Integer becoming Integer_64
Wouldn't it make sense to have Ada programs (or programs in another
language, for that matter) treat integer constants and expressions as
of type bigint?
No, because of the above. A bigint is a normal type, so you have to convert explicitly to and from it.
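That explicit boundary looks like this with the Ada 2022 package: unlike a universal_integer literal, a Big_Integer never converts implicitly, so To_Big_Integer and To_Integer must be written out (and To_Integer raises Constraint_Error if the value is out of Integer's range):

```ada
with Ada.Numerics.Big_Numbers.Big_Integers;
use  Ada.Numerics.Big_Numbers.Big_Integers;

procedure Convert_Demo is
   N : Integer     := 42;
   B : Big_Integer := To_Big_Integer (N);  -- explicit conversion in
   M : Integer;
begin
   --  B := N;  -- illegal: no implicit conversion to Big_Integer
   M := To_Integer (B + To_Big_Integer (1));  -- explicit conversion out,
                                              -- range-checked at run time
end Convert_Demo;
```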