• Constants and Ada Universal_Integer

    From James Harris@21:1/5 to All on Wed Oct 6 17:00:58 2021
    Simple question: What are the differences between Ada's
    Universal_Integer and a typical bigint type?

    By bigint I mean a signed integer which expands and contracts to be as
    wide as needed.
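(As a concrete illustration of the kind of type meant here: Python's built-in int is a bigint in exactly this sense, growing and shrinking with the value it holds.)

```python
# Python's int has no declared width: it grows to hold any value
# and shrinks back down when the value shrinks.
small = 7
big = 2 ** 200                                 # far beyond any machine word
print(big.bit_length())                        # 201 bits needed
print((big - 2 ** 200 + small).bit_length())   # back down to 3 bits
```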

    I read a comment that one cannot define an Ada object as being of type
    Universal_Integer but I wondered why not. Wouldn't it make sense to have
    Ada programs (or programs in another language, for that matter) treat
    integer constants and expressions as of type bigint?

    One additional point: In an expression which combines a constant (or
    constant expression) with a declared object the value of the undeclared
    constant expression would be automatically converted. For example, in

    x + 4

    the undeclared bigint 4 would be automatically converted to the type of
    x as long as it was in range.
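(A minimal sketch of that rule, in Python; the 8-bit bounds and the helper name are hypothetical stand-ins for whatever type x was declared with.)

```python
# Sketch: a bigint literal operand is converted to the declared type
# of x only if it fits that type's range; otherwise it is rejected.
INT8_MIN, INT8_MAX = -128, 127   # assumed bounds of x's declared type

def convert_literal(value, lo, hi):
    """Convert a bigint literal to a ranged type, or reject it."""
    if lo <= value <= hi:
        return value
    raise OverflowError(f"literal {value} out of range [{lo}, {hi}]")

x = 100                       # imagine x declared with an 8-bit type
print(x + convert_literal(4, INT8_MIN, INT8_MAX))   # 104
```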

    Would there be any practical problems with treating integer literals in
    that way?


    --
    James Harris

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dmitry A. Kazakov@21:1/5 to James Harris on Wed Oct 6 18:27:36 2021
    On 2021-10-06 18:00, James Harris wrote:
    > Simple question: What are the differences between Ada's
    > Universal_Integer and a typical bigint type?

    The upcoming standard will have a bigint package in the standard library.

    > By bigint I mean a signed integer which expands and contracts to be as
    > wide as needed.

    > I read a comment that one cannot define an Ada object as being of type
    > Universal_Integer but I wondered why not.

    The reason was not to burden small targets.

    But the main difference is that a universal type is considered a member
    of each type hierarchy, e.g. each integer type is a subtype of
    Universal_Integer. This is why these are all correct:

    A : array (1 .. 10) of Boolean;
    I : Integer    := A'Length; -- Universal_Integer becoming Integer
    J : Integer_64 := A'Length; -- Universal_Integer becoming Integer_64

    > Wouldn't it make sense to have
    > Ada programs (or programs in another language, for that matter) treat
    > integer constants and expressions as of type bigint?

    No, because of the above. Bigint is a normal type, so you have to
    explicitly convert to and from it.

    > One additional point: In an expression which combines a constant (or
    > constant expression) with a declared object the value of the undeclared
    > constant expression would be automatically converted. For example, in
    >
    >   x + 4
    >
    > the undeclared bigint 4 would be automatically converted to the type of
    > x as long as it was in range.

    That would require mechanics Ada does not have, namely ad-hoc sub- and
    supertypes. The closest thing to that is C++'s type conversion operators.
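(A loose Python analogue of such user-defined conversions is the __index__ hook, which lets a type convert itself wherever an integer is required; the class and names below are illustrative only.)

```python
class Handle:
    """A user-defined type that converts itself to int on demand,
    loosely analogous to a C++ 'operator int()' conversion."""
    def __init__(self, raw):
        self.raw = raw
    def __index__(self):          # used wherever Python needs an int
        return self.raw

h = Handle(2)
print([10, 20, 30, 40][h])        # 30: h is converted implicitly
print(int(h))                     # 2: explicit conversion also works
```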

    <rant on> IMO, the major problem that prevented C++ from becoming a
    great language was templates. Instead of investing in the type system,
    e.g. user-defined conversions etc., they buried themselves in a mess.
    Ada has a similar problem with generics.
    <rant off>

    > Would there be any practical problems with treating integer literals in
    > that way?

    The Ada community is very resistant towards OO, or rather towards
    advanced type systems. They are like Bart and you on these issues.

    In any case that would be a huge language change with consequences
    extremely difficult to foresee. If there were another language to try
    this stuff, Ada could learn from it. Unfortunately, language designers
    are busy solving imaginary problems and chasing ghosts of last century.
    You know these people... (:-))

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From antispam@math.uni.wroc.pl@21:1/5 to James Harris on Thu Oct 7 21:38:22 2021
    James Harris <james.harris.1@gmail.com> wrote:
    > Simple question: What are the differences between Ada's
    > Universal_Integer and a typical bigint type?
    >
    > By bigint I mean a signed integer which expands and contracts to be as
    > wide as needed.
    >
    > I read a comment that one cannot define an Ada object as being of type
    > Universal_Integer but I wondered why not. Wouldn't it make sense to have
    > Ada programs (or programs in another language, for that matter) treat
    > integer constants and expressions as of type bigint?

    For Ada there are problems, mostly due to the original constraints on
    the design. For other languages there are no essential problems.
    I program every day in Spad, where the type 'Integer' is in fact
    a bigint type. There is also a type 'SingleInteger' which represents
    machine-sized integers (due to particulars of the implementation, the
    range of 'SingleInteger' is slightly smaller than that of pure machine
    integers).

    > One additional point: In an expression which combines a constant (or
    > constant expression) with a declared object the value of the undeclared
    > constant expression would be automatically converted. For example, in
    >
    >   x + 4
    >
    > the undeclared bigint 4 would be automatically converted to the type of
    > x as long as it was in range.

    At the moment Spad needs an explicit conversion to the smaller type.

    > Would there be any practical problems with treating integer literals in
    > that way?

    In general, when using integers of differing sizes there is a correctness
    versus efficiency tradeoff. Large types mean that there is little
    (or no) possibility of overflow. However, when smaller types are
    large enough they are frequently much more efficient. Consider
    the following Spad function:

    f() == max()$SingleInteger + 2

    Currently the Spad compiler generates code based on the result type: if
    the return type of 'f' is declared as 'Integer' (that is, bignum), the
    Spad compiler generates a bignum addition. When the return type is
    declared as 'SingleInteger', then the Spad compiler generates the
    addition in 'SingleInteger', which will overflow. For this
    example the Spad rules in fact work reasonably well, but in general
    it is tricky to decide which type to use (Spad tends to err
    on the side of correctness and use bignums). In Spad it is
    possible to manually choose types, that is, you can write:

    x +$SingleInteger qconvert(2)@SingleInteger

    where '+$SingleInteger' means '+' for the type 'SingleInteger' and
    'qconvert(2)@SingleInteger' effectively changes the type of the constant.
    Note that Spad is used mostly for mathematical computation
    and there is a tendency for users to choose values close to
    the type limits. So, there is a quite nontrivial risk that
    an addition or multiplication in a fixed-width type will overflow.
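(That tradeoff can be sketched in Python, whose int is a bignum, by masking additions down to a fixed width; the 64-bit width here is an assumed stand-in for SingleInteger.)

```python
WIDTH = 64
MASK = (1 << WIDTH) - 1

def add_fixed(a, b):
    """Wrap-around addition in a signed 64-bit type."""
    s = (a + b) & MASK
    return s - (1 << WIDTH) if s >= (1 << (WIDTH - 1)) else s

max_single = (1 << (WIDTH - 1)) - 1   # analogue of max()$SingleInteger
print(add_fixed(max_single, 2))       # overflows: wraps to a negative value
print(max_single + 2)                 # bignum addition: correct result
```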

    --
    Waldek Hebisch

  • From James Harris@21:1/5 to Dmitry A. Kazakov on Sat Feb 12 17:40:20 2022
    On 06/10/2021 17:27, Dmitry A. Kazakov wrote:
    > On 2021-10-06 18:00, James Harris wrote:
    >> Simple question: What are the differences between Ada's
    >> Universal_Integer and a typical bigint type?

    > The upcoming standard will have a bigint package in the standard library.

    >> By bigint I mean a signed integer which expands and contracts to be as
    >> wide as needed.
    >>
    >> I read a comment that one cannot define an Ada object as being of type
    >> Universal_Integer but I wondered why not.

    > The reason was not to burden small targets.

    I don't see the problem. Couldn't expressions involving /only/ literals
    be carried out at compile time?

    Similarly, couldn't compile-time expressions (including literals) which
    are combined with an identifier be implicitly 'converted' at compile
    time into the type of the identifier?

    If all the BigInt stuff can be carried out at compile time I cannot see
    how it would be a burden to small targets.


    > But the main difference is that a universal type is considered a member
    > of each type hierarchy, e.g. each integer type is a subtype of
    > Universal_Integer. This is why these are all correct:
    >
    >    A : array (1 .. 10) of Boolean;
    >    I : Integer    := A'Length; -- Universal_Integer becoming Integer
    >    J : Integer_64 := A'Length; -- Universal_Integer becoming Integer_64

    >> Wouldn't it make sense to have Ada programs (or programs in another
    >> language, for that matter) treat integer constants and expressions as
    >> of type bigint?
    >
    > No, because of the above. Bigint is a normal type, so you have to
    > explicitly convert to and from it.

    I am thinking of specifying (in a future version) that all integer
    literals are of BigInt type but having them combinable with identifiers
    via implicit compile-time conversions. For example,

    int A := 90    ; 90 would be of type BigInt but converted to type int
    I guess you wouldn't much care for that. :-)
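(A minimal sketch of that scheme, with all names and bounds hypothetical: fold literal-only expressions with bigint arithmetic at 'compile time', then range-check the result against the declared type before emitting a fixed-width constant.)

```python
# Hypothetical compile-time step: evaluate a literal-only expression
# with bigint arithmetic, then range-check against the declared type.
INT_MIN, INT_MAX = -2 ** 31, 2 ** 31 - 1   # assumed bounds for 'int'

def fold_literal(expr):
    """Evaluate a literal-only expression at 'compile time'."""
    return eval(expr, {"__builtins__": {}})   # bigint arithmetic

def check_assign(value, lo, hi):
    """Range-check the folded value against the declared type."""
    if not lo <= value <= hi:
        raise ValueError(f"{value} does not fit the declared type")
    return value

# 'int A := 90' would compile to a plain 32-bit constant:
print(check_assign(fold_literal("90"), INT_MIN, INT_MAX))   # 90
# ...while an out-of-range literal would be a compile-time error:
# check_assign(fold_literal("2**40"), INT_MIN, INT_MAX)  -> ValueError
```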


    --
    James Harris
