• Custom Storage Pool questions

    From Jere@21:1/5 to All on Sun Sep 12 17:53:47 2021
    I was learning about making user defined storage pools when
    I came across an article that made me pause and wonder how
    portable storage pools actually can be. In particular, I assumed
    that the Size_In_Storage_Elements parameter in the Allocate
    operation actually indicated the total number of storage elements
    needed.

    procedure Allocate
      (Pool                     : in out Root_Storage_Pool;
       Storage_Address          :    out Address;
       Size_In_Storage_Elements : in     Storage_Elements.Storage_Count;
       Alignment                : in     Storage_Elements.Storage_Count) is abstract;

    But after reading the following AdaCore article, my assumption is now
    called into question:
    https://blog.adacore.com/header-storage-pools

    In particular, the blog there advocates separately accounting for
    things like the First/Last indices of unconstrained arrays or the
    Prev/Next pointers used for controlled objects. Normally I would have
    assumed that the Size_In_Storage_Elements parameter in Allocate would
    account for that, but the blog clearly shows that it doesn't.

    So that seems to mean that to make a storage pool, I have to make it
    compiler specific, or else risk that someone creates a type like an
    array and my allocation size and address values will be off.

    Is it intended that portable storage pools are not possible, or am
    I missing some Ada functionality that helps me out here? I
    scanned through the list of attributes but none seem to give
    any info about where the object's returned address is relative
    to the top of the memory actually allocated for the object. I saw
    the attribute Max_Size_In_Storage_Elements, but it doesn't seem
    to guarantee to include things like the array indices, and it still
    doesn't solve the issue of knowing where the returned address
    needs to be relative to the top of allocated memory.

    I can easily use a generic to ensure that the types I care about
    are portably made by the pool, but I can't prevent someone from
    using my pool to create other objects that I hadn't accounted for.
    Unless there is a way to restrict a pool from allocating objects
    of other types?
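    For illustration, the kind of single-type guard I have in mind might be
    sketched like this (all names invented; it assumes Allocate is called with
    exactly Element_Type'Max_Size_In_Storage_Elements, which is the very
    assumption in question, so this is a sanity check, not a portability
    guarantee):

    ```ada
    --  Hypothetical sketch: a generic pool meant for exactly one type,
    --  which refuses requests whose size doesn't match expectations.
    --  This *assumes* the compiler asks for exactly
    --  Element_Type'Max_Size_In_Storage_Elements.
    with System;                  use System;
    with System.Storage_Pools;
    with System.Storage_Elements; use System.Storage_Elements;

    generic
       type Element_Type is private;
    package Typed_Pools is
       type Typed_Pool is new System.Storage_Pools.Root_Storage_Pool with
          record
             Buffer : Storage_Array (1 .. 4096);
             Next   : Storage_Offset := 1;
          end record;

       overriding procedure Allocate
         (Pool                     : in out Typed_Pool;
          Storage_Address          :    out Address;
          Size_In_Storage_Elements : in     Storage_Count;
          Alignment                : in     Storage_Count);

       overriding procedure Deallocate
         (Pool                     : in out Typed_Pool;
          Storage_Address          : in     Address;
          Size_In_Storage_Elements : in     Storage_Count;
          Alignment                : in     Storage_Count) is null;
       --  region-style pool: individual deallocation is a no-op

       overriding function Storage_Size
         (Pool : Typed_Pool) return Storage_Count is (Pool.Buffer'Length);
    end Typed_Pools;

    package body Typed_Pools is
       Expected : constant Storage_Count :=
         Element_Type'Max_Size_In_Storage_Elements;

       overriding procedure Allocate
         (Pool                     : in out Typed_Pool;
          Storage_Address          :    out Address;
          Size_In_Storage_Elements : in     Storage_Count;
          Alignment                : in     Storage_Count) is
       begin
          if Size_In_Storage_Elements /= Expected then
             raise Storage_Error;  --  an unexpected type was allocated
          end if;
          --  Naive bump allocation; alignment handling is simplified to
          --  rounding the buffer index up to a multiple of Alignment.
          if (Pool.Next - 1) mod Alignment /= 0 then
             Pool.Next :=
               Pool.Next + Alignment - (Pool.Next - 1) mod Alignment;
          end if;
          if Pool.Next + Size_In_Storage_Elements - 1 > Pool.Buffer'Last then
             raise Storage_Error;  --  pool exhausted
          end if;
          Storage_Address := Pool.Buffer (Pool.Next)'Address;
          Pool.Next := Pool.Next + Size_In_Storage_Elements;
       end Allocate;
    end Typed_Pools;
    ```

    Of course, as discussed in this thread, the size check only works if the
    compiler's notion of the object's size matches the attribute's value.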

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Randy Brukardt@21:1/5 to All on Mon Sep 13 00:29:35 2021
    Not sure what you are expecting. There is no requirement that objects are
    allocated contiguously. Indeed, Janus/Ada will call Allocate as many times
    as needed for each object; for instance, unconstrained arrays are in two
    parts (descriptor and data area).

    The only thing that you can assume in a portable library is that you get
    called the same number of times, and with the same sizes/alignments, for
    Allocate and Deallocate; there are no assumptions about size or alignment
    that you can make.

    If you want to build a pool around some specific allocated size, then if it
    needs to be portable, (A) you have to calculate the allocated size, and (B)
    you have to have a mechanism for what to do if some other size is requested.
    (Allocating a whole block for smaller sizes and falling back to the built-in
    heap for too-large requests is what I usually do.)
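    A minimal sketch of that shape (names invented; the fall-back-to-heap
    branch is only indicated, and alignment is assumed to be satisfied by the
    blocks' natural alignment):

    ```ada
    --  Sketch of the block-pool strategy described above; all names are
    --  invented. Requests up to Block_Size get a whole fixed-size block;
    --  larger requests would fall back to another allocator (not shown).
    with System;                  use System;
    with System.Storage_Pools;    use System.Storage_Pools;
    with System.Storage_Elements; use System.Storage_Elements;

    package Block_Pools is
       Block_Size  : constant Storage_Count := 64;
       Block_Count : constant := 32;

       type Block is record
          Data   : Storage_Array (1 .. Block_Size);
          In_Use : Boolean := False;
       end record;

       type Block_Array is array (1 .. Block_Count) of Block;

       type Block_Pool is new Root_Storage_Pool with record
          Blocks : Block_Array;
       end record;

       overriding procedure Allocate
         (Pool                     : in out Block_Pool;
          Storage_Address          :    out Address;
          Size_In_Storage_Elements : in     Storage_Count;
          Alignment                : in     Storage_Count);

       overriding procedure Deallocate
         (Pool                     : in out Block_Pool;
          Storage_Address          : in     Address;
          Size_In_Storage_Elements : in     Storage_Count;
          Alignment                : in     Storage_Count);

       overriding function Storage_Size
         (Pool : Block_Pool) return Storage_Count
         is (Block_Size * Block_Count);
    end Block_Pools;

    package body Block_Pools is
       overriding procedure Allocate
         (Pool                     : in out Block_Pool;
          Storage_Address          :    out Address;
          Size_In_Storage_Elements : in     Storage_Count;
          Alignment                : in     Storage_Count) is
       begin
          if Size_In_Storage_Elements > Block_Size then
             --  "too large": here one would fall back to the built-in
             --  heap; this sketch just refuses
             raise Storage_Error;
          end if;
          for B of Pool.Blocks loop
             if not B.In_Use then
                B.In_Use := True;
                Storage_Address := B.Data'Address;  --  hand out whole block
                return;
             end if;
          end loop;
          raise Storage_Error;  --  pool exhausted
       end Allocate;

       overriding procedure Deallocate
         (Pool                     : in out Block_Pool;
          Storage_Address          : in     Address;
          Size_In_Storage_Elements : in     Storage_Count;
          Alignment                : in     Storage_Count) is
       begin
          for B of Pool.Blocks loop
             if B.Data'Address = Storage_Address then
                B.In_Use := False;
                return;
             end if;
          end loop;
       end Deallocate;
    end Block_Pools;
    ```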

    More likely, you'll build a pool for a particular implementation. Pools are very low level by their nature, and useful ones are even more so (because
    they are using target facilities to allocate memory, or need to assume something about the allocations, or because they are doing icky things like address math, or ...).

    Randy.


  • From J-P. Rosen@21:1/5 to All on Mon Sep 13 13:12:39 2021
    Le 13/09/2021 à 02:53, Jere a écrit :
    In particular, the blog there advocates for separately counting for
    things like unconstrained array First/Last indices or the Prev/Next
    pointers used for Controlled objects. Normally I would have assumed
    that the Size_In_Storage_Elements parameter in Allocate would account
    for that, but the blog clearly shows that it doesn't
    [...]

    That blog shows a special use for Storage_Pools, where you allocate
    /user/ data on top of the requested memory. When called by the compiler,
    it is up to the compiler to compute how much memory is needed, and your
    duty is to just allocate that.

    --
    J-P. Rosen
    Adalog
    2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
    Tel: +33 1 45 29 21 52
    https://www.adalog.fr

  • From Jere@21:1/5 to J-P. Rosen on Mon Sep 13 17:48:15 2021
    On Monday, September 13, 2021 at 7:12:43 AM UTC-4, J-P. Rosen wrote:
    Le 13/09/2021 à 02:53, Jere a écrit :
    In particular, the blog there advocates for separately counting for
    things like unconstrained array First/Last indices or the Prev/Next pointers used for Controlled objects. Normally I would have assumed
    that the Size_In_Storage_Elements parameter in Allocate would account
    for that, but the blog clearly shows that it doesn't
    [...]

    That blog shows a special use for Storage_Pools, where you allocate
    /user/ data on top of the requested memory. When called by the compiler,
    it is up to the compiler to compute how much memory is needed, and your
    duty is to just allocate that.

    Yes, but if you look at that blog, they are allocating space for the /user/
    data and for the Next/Prev for controlled types and the First/Last for
    unconstrained arrays, in addition to the size specified by Allocate.

    I agree that it is up to the compiler to provide the correct size to
    Allocate, but the blog would indicate that GNAT does not (or did not; it's
    an old blog, so who knows?). Does the RM require that an implementation
    pass the full amount of memory needed to Allocate when new is called?

  • From Jere@21:1/5 to Randy Brukardt on Mon Sep 13 18:04:47 2021
    On Monday, September 13, 2021 at 1:29:39 AM UTC-4, Randy Brukardt wrote:
    Not sure what you are expecting. There is no requirement that objects are
    allocated contiguously. Indeed, Janus/Ada will call Allocate as many times
    as needed for each object; for instance, unconstrained arrays are in two
    parts (descriptor and data area).

    No expectations. Just questions. I wasn't concerned with whether the
    allocated memory was contiguous or not, but with whether an implementation
    is required to supply the correct size of memory needed to allocate an
    object, or whether it is allowed to pass a value to Size that is less than
    the amount of memory actually needed. For example, the blog there indicates
    that the maintainer of the custom storage pool needs to account for the
    First/Last indices of an unconstrained array separately, instead of
    assuming that value is included as part of the Size parameter's value.

    If the Size parameter isn't required to include space for First/Last
    for unconstrained arrays or Prev/Next for controlled objects (assuming
    that is even the implementation picked, of course), then I'm not seeing
    a way to write a custom storage pool that is portable, because you need
    to account for each implementation's "hidden" values that are not
    represented in the Size parameter. For example, if Janus calculated Size
    to include both the size of the array and the size of First and Last, but
    GNAT didn't, and my storage pool assumed the Janus method, then if someone
    used my storage pool with GNAT it could potentially, and erroneously,
    access memory from some other location.

    The only thing that you can assume in a portable library is that you get
    called the same number of times, and with the same sizes/alignments, for
    Allocate and Deallocate; there are no assumptions about size or alignment
    that you can make.

    So to be clear, you cannot assume that Size and Alignment are appropriate
    for the actual object being allocated, correct? Size could actually be
    less than the actual amount of memory needed, and the alignment may only
    apply to part of the object being allocated, not the full object?

    Is that correct? I'm asking because that is what the blog suggests with
    the example it gave.


    If you want to build a pool around some specific allocated size, then if it needs to be portable, (A) you have to calculate the allocated size, and (B) you have to have a mechanism for what to do if some other size is requested. (Allocate a whole block for smaller sizes, fall back to built-in heap for
    too large is what I usually do).

    Are there any good tricks to handle this? For example, if I design a
    storage pool around constructing a particular type of object, what is
    normally done to discourage another programmer from using the pool with
    an entirely different type? Maybe raise an exception if the size isn't
    exact? I'm not sure what else, unless maybe there is an aspect/attribute
    that can be set to ensure only a specific type of object can be constructed.



  • From J-P. Rosen@21:1/5 to All on Tue Sep 14 08:08:48 2021
    Le 14/09/2021 à 02:48, Jere a écrit :
    In particular, the blog there advocates for separately counting for
    things like unconstrained array First/Last indices or the Prev/Next
    pointers used for Controlled objects. Normally I would have assumed
    that the Size_In_Storage_Elements parameter in Allocate would account
    for that, but the blog clearly shows that it doesn't
    [...]

    That blog shows a special use for Storage_Pools, where you allocate
    /user/ data on top of the requested memory. When called by the compiler,
    it is up to the compiler to compute how much memory is needed, and your
    duty is to just allocate that.

    Yes, but if you look at that blog, they are allocating space for the /user/ data
    and for the Next/Prev for controlled types and First/Last for unconstrained arrays in addition to the size specified by allocate.

    I agree I feel it is up to the compiler to provide the correct size to Allocate,
    but the blog would indicate that GNAT does not (or did not..old blog..so
    who knows?). Does the RM require that an implementation pass the full
    amount of memory needed to Allocate when new is called?


    The RM says that an allocator allocates storage from the storage pool.
    You could argue that it does not say "allocates all needed storage...",
    but that would be a bit far-fetched.

    Anyway, a blog is not the proper place to get information from for that
    kind of issue. Look at the GNAT documentation.

    --
    J-P. Rosen
    Adalog
    2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
    Tel: +33 1 45 29 21 52
    https://www.adalog.fr

  • From J-P. Rosen@21:1/5 to All on Tue Sep 14 08:42:52 2021
    Le 14/09/2021 à 08:23, Dmitry A. Kazakov a écrit :
    Of course, a proper solution would be fixing Ada by adding another
    address attribute:

       X'Object_Address

    returning the first address of the object as allocated.
    But you cannot assume that the object is allocated as one big chunk.
    Bounds can be allocated at a different place. What would be
    X'Object_Address in that case?

    --
    J-P. Rosen
    Adalog
    2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
    Tel: +33 1 45 29 21 52
    https://www.adalog.fr

  • From Dmitry A. Kazakov@21:1/5 to Jere on Tue Sep 14 08:23:08 2021
    On 2021-09-14 02:48, Jere wrote:

    Yes, but if you look at that blog, they are allocating space for the /user/ data
    and for the Next/Prev for controlled types and First/Last for unconstrained arrays in addition to the size specified by allocate.

    I do not understand your concern. The blog discusses how to add service
    data to the objects allocated in the pool.

    I use such pools extensively in Simple Components. E.g. linked lists are implemented this way. The list links are allocated in front of list
    elements which can be of any type, unconstrained arrays included.

    The problem with unconstrained arrays is not that the bounds are not
    allocated (they are), but the semantics of X'Address when applied to
    arrays.

    A'Address is the address of the first array element, not of the array
    object. For a pool designer it constitutes the problem of getting the
    array object by address. This is what Emmanuel discusses in the blog.

    [ The motivation behind Ada's choice was probably to keep the semantics
    implementation-independent. ]

    Consider for example a list of String elements. When Allocate is called
    for a String, it returns the address of the whole String. But that is not
    the address you would get if you applied 'Address. You have to
    add/subtract some offset in order to get one from the other.

    In Simple Components this offset is determined at run-time for each
    generic instance.

    Of course, a proper solution would be fixing Ada by adding another
    address attribute:

    X'Object_Address

    returning the first address of the object as allocated.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Dmitry A. Kazakov@21:1/5 to J-P. Rosen on Tue Sep 14 09:00:13 2021
    On 2021-09-14 08:42, J-P. Rosen wrote:
    Le 14/09/2021 à 08:23, Dmitry A. Kazakov a écrit :
    Of course, a proper solution would be fixing Ada by adding another
    address attribute:

        X'Object_Address

    returning the first address of the object as allocated.
    But you cannot assume that the object is allocated as one big chunk.
    Bounds can be allocated at a different place. What would be
    X'Object_Address in that case?

    The object address, without bounds, same as X'Address.

    What Allocate returns is not what A'Address tells you. The compiler always
    knows the difference; the programmer has to know it too. Nothing more.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Egil H H@21:1/5 to Jere on Tue Sep 14 03:54:52 2021
    On Tuesday, September 14, 2021 at 2:48:16 AM UTC+2, Jere wrote:

    Yes, but if you look at that blog, they are allocating space for the /user/ data
    and for the Next/Prev for controlled types and First/Last for unconstrained arrays in addition to the size specified by allocate.


    Yes, but if you look at that blog, they explain the default layout of fat
    pointers, and the special value that needs to be set on access types for
    the layout to change. If you use such a GNAT-ism, your storage pool will
    also be bound to GNAT...


    ie:
    "GNAT typically uses a "fat pointer" for this purpose: the access itself is in fact
    a record of two pointers, one of which points to the bounds, the other points to
    the data. This representation is not appropriate in the case of the header storage pool, so we need to change the memory layout here."

    and:
    "we need to ensure that the bounds for unconstrained arrays are stored next to the element, not in a separate memory block, to improve performance. This is done by setting the Size attribute on the type. When we set this size to that of
    a standard pointer, GNAT automatically changes the layout,"
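    For reference, the layout change described in those quotes amounts to a
    size clause on the access type; a sketch with invented type names:

    ```ada
    --  GNAT-ism from the quoted passage (type names invented): forcing
    --  the access type down to the size of a plain pointer makes GNAT
    --  store the array bounds next to the data instead of using a fat
    --  pointer. Doing this ties the pool to GNAT.
    type Element_Array  is array (Positive range <>) of Integer;
    type Element_Access is access Element_Array;
    for Element_Access'Size use Standard'Address_Size;  --  GNAT attribute
    ```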


    --
    ~egilhh

  • From Jere@21:1/5 to ehh on Tue Sep 14 17:11:38 2021
    On Tuesday, September 14, 2021 at 6:54:54 AM UTC-4, ehh wrote:
    On Tuesday, September 14, 2021 at 2:48:16 AM UTC+2, Jere wrote:

    Yes, but if you look at that blog, they are allocating space for the /user/ data
    and for the Next/Prev for controlled types and First/Last for unconstrained arrays in addition to the size specified by allocate.

    Yes, but if you look at that blog, they explain the default layout of fat pointers,
    and the special value that need to be set on access types for the layout to change. If you use such a GNAT-ism, your storage pool will also be bound
    to GNAT...


    ie:
    "GNAT typically uses a "fat pointer" for this purpose: the access itself is in fact
    a record of two pointers, one of which points to the bounds, the other points to
    the data. This representation is not appropriate in the case of the header storage pool, so we need to change the memory layout here."

    and:
    "we need to ensure that the bounds for unconstrained arrays are stored next to
    the element, not in a separate memory block, to improve performance. This is done by setting the Size attribute on the type. When we set this size to that of
    a standard pointer, GNAT automatically changes the layout,"


    What I am seeing in the blog is that if I *do not* use the GNAT-ism in my
    storage pool, and assume that the Size parameter indicates the full size
    needed, then if someone uses my pool with a GNAT compiler it would
    erroneously access memory.

    You can clearly see the calculation in the blog is:

      Aligned_Size : constant Storage_Count :=  --  bytes
        Size + Header_Allocation + Extra_Offset;

    where Size is specified by Allocate, Header_Allocation is the user-supplied
    fields of the storage pool, and ***Extra_Offset*** is a separate size value
    that accounts for First/Last or Previous/Next:

      Finalization_Master_Size : constant Storage_Count :=
        2 * Standard'Address_Size;  --  for Previous and Next
      Extra_Offset : constant Storage_Count :=
        (Element_Type'Descriptor_Size + Finalization_Master_Size)
        / Storage_Unit;  --  convert from bits to bytes

    The blog quotes:
    """The size for the bounds is given by the attribute Descriptor_Size. For most types,
    the value of this attribute is 0. However, in the case of unconstrained array types, it
    is the size of two integers"""

    These values are specified separately from the Size parameter and added to
    it in the Allocate procedure shown in the blog.

    If I were to write my own storage pool and didn't do the Extra_Offset
    calculation, and just assumed the compiler would call Allocate with the
    correct Size value for everything, the logic in that blog would indicate
    that GNAT would assume I did the Extra_Offset calculation anyway, and it
    could access those fields even though I didn't explicitly allocate them
    separately as GNAT does. In that case the fields would be part of some
    other object's memory, or random memory.

    My original question is whether the intended premise in Ada is that storage
    pools aren't expected to be portable, simply by virtue of being low level,
    because compilers can assume you will know to allocate hidden fields in
    addition to what the Size parameter specifies.

    Does that help clarify what I'm confused about? Sorry, I am not great with
    wording.

  • From Jere@21:1/5 to Dmitry A. Kazakov on Tue Sep 14 17:21:04 2021
    On Tuesday, September 14, 2021 at 2:23:15 AM UTC-4, Dmitry A. Kazakov wrote:
    On 2021-09-14 02:48, Jere wrote:

    Yes, but if you look at that blog, they are allocating space for the /user/ data
    and for the Next/Prev for controlled types and First/Last for unconstrained arrays in addition to the size specified by allocate.
    I do not understand your concern. The blog discusses how to add service
    data to the objects allocated in the pool.

    I tried to better articulate my concern in my response to egilhh if you want
    to take a quick look at that and see if it clarifies better.

    I use such pools extensively in Simple Components. E.g. linked lists are implemented this way. The list links are allocated in front of list
    elements which can be of any type, unconstrained arrays included.

    The blog I saw was old, so it is completely possible it no longer is
    true that GNAT does what the blog suggests. I'll take a look at your
    storage pools and see how they handle things like this.

  • From Jere@21:1/5 to J-P. Rosen on Tue Sep 14 17:39:43 2021
    On Tuesday, September 14, 2021 at 2:08:49 AM UTC-4, J-P. Rosen wrote:
    Le 14/09/2021 à 02:48, Jere a écrit :
    In particular, the blog there advocates for separately counting for
    things like unconstrained array First/Last indices or the Prev/Next
    pointers used for Controlled objects. Normally I would have assumed
    that the Size_In_Storage_Elements parameter in Allocate would account
    for that, but the blog clearly shows that it doesn't
    [...]

    That blog shows a special use for Storage_Pools, where you allocate
    /user/ data on top of the requested memory. When called by the compiler,
    it is up to the compiler to compute how much memory is needed, and your
    duty is to just allocate that.

    Yes, but if you look at that blog, they are allocating space for the /user/ data
    and for the Next/Prev for controlled types and First/Last for unconstrained
    arrays in addition to the size specified by allocate.

    I agree I feel it is up to the compiler to provide the correct size to Allocate,
    but the blog would indicate that GNAT does not (or did not..old blog..so who knows?). Does the RM require that an implementation pass the full amount of memory needed to Allocate when new is called?

    The RM says that an allocator allocates storage from the storage pool.
    You could argue that it does not say "allocates all needed storage...",
    but that would be a bit far-fetched.

    I agree, but the blog made me reconsider how far-fetched it was.


    Anyway, a blog is not the proper place to get information from for that
    kind of issue. Look at the GNAT documentation.

    I'll take a look at the GNAT docs (and of course that blog is old,
    so GNAT may not do this anymore anyway), but I was mainly asking in the
    frame of what Ada allows and/or expects. I'd like to be able to allocate
    storage simply, without worrying how the compiler does it under the hood,
    and just assume that any call to Allocate will ask for the full amount of
    memory.

    Am I correct to assume that Ada doesn't provide any language means to
    restrict what types a pool can make objects of? The times that I have
    wanted to make a pool are generally for specific types, and it is often
    simpler to design them if I can assume only those types are being
    generated.

    Thanks for the response. I'm sorry for all the questions. That's how I
    learn and I realize it isn't a popular way to learn in the community, but
    I have always learned very differently than most.

  • From Dmitry A. Kazakov@21:1/5 to Jere on Wed Sep 15 08:54:07 2021
    On 2021-09-15 02:21, Jere wrote:
    On Tuesday, September 14, 2021 at 2:23:15 AM UTC-4, Dmitry A. Kazakov wrote:
    On 2021-09-14 02:48, Jere wrote:

    Yes, but if you look at that blog, they are allocating space for the /user/ data
    and for the Next/Prev for controlled types and First/Last for unconstrained
    arrays in addition to the size specified by allocate.
    I do not understand your concern. The blog discusses how to add service
    data to the objects allocated in the pool.

    I tried to better articulate my concern in my response to egilhh if you want to take a quick look at that and see if it clarifies better.

    Not really. It seems that you are under the impression that Allocate must
    allocate more than its Size parameter asks for. The answer is no,
    unless *you* want to add something to each allocated object.

    I use such pools extensively in Simple Components. E.g. linked lists are
    implemented this way. The list links are allocated in front of list
    elements which can be of any type, unconstrained arrays included.

    The blog I saw was old, so it is completely possible it no longer is
    true that GNAT does what the blog suggests. I'll take a look at your
    storage pools and see how they handle things like this.

    I calculate the offset at run time. I keep a list of recently
    allocated blocks. The first time the address is asked of an element,
    that is, the first time an element allocated in the pool becomes known
    to the pool, its address is searched for through the list of blocks. The
    minimal difference between a block address and the element address
    (X'Address) is the offset.
    This is more portable than GNAT-specific attribute Descriptor_Size.
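    That search might be sketched roughly like this (names invented; Address
    subtraction comes from System.Storage_Elements):

    ```ada
    --  Rough sketch of the run-time offset determination described
    --  above; all names are invented. Given the X'Address of an element
    --  and the addresses recently returned by Allocate, the smallest
    --  non-negative difference is taken as the block-to-element offset.
    with System;                  use System;
    with System.Storage_Elements; use System.Storage_Elements;

    package Offset_Search is
       type Address_List is array (Positive range <>) of Address;

       function Find_Offset
         (Element_Address : Address;       --  X'Address of the element
          Recent_Blocks   : Address_List)  --  addresses Allocate returned
          return Storage_Offset;
    end Offset_Search;

    package body Offset_Search is
       function Find_Offset
         (Element_Address : Address;
          Recent_Blocks   : Address_List) return Storage_Offset
       is
          Best : Storage_Offset := Storage_Offset'Last;
       begin
          for Block_Address of Recent_Blocks loop
             declare
                D : constant Storage_Offset :=
                  Element_Address - Block_Address;
             begin
                --  the element lives at or after the start of its block
                if D >= 0 and then D < Best then
                   Best := D;
                end if;
             end;
          end loop;
          return Best;
       end Find_Offset;
    end Offset_Search;
    ```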

    Again, if an attribute were to be added, it should be the object address
    as allocated. The compiler always knows the proper address, because this
    address is passed to Free, not X'Address!

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Simon Wright@21:1/5 to Jere on Wed Sep 15 08:01:50 2021
    Jere <jhb.chat@gmail.com> writes:

    Thanks for the response. I'm sorry for all the questions. That's how
    I learn and I realize it isn't a popular way to learn in the
    community, but I have always learned very differently than most.

    Seems to me you ask interesting questions which generate enlightening responses!

  • From Simon Wright@21:1/5 to Jere on Wed Sep 15 17:43:57 2021
    Jere <jhb.chat@gmail.com> writes:

    But after reading the following AdaCore article, my assumption is now
    called into question:
    https://blog.adacore.com/header-storage-pools

    In particular, the blog there advocates for separately counting for
    things like unconstrained array First/Last indices or the Prev/Next
    pointers used for Controlled objects. Normally I would have assumed
    that the Size_In_Storage_Elements parameter in Allocate would account
    for that, but the blog clearly shows that it doesn't

    Well, I may well have missed the point somewhere, and maybe things have
    changed since 2015, but as far as I can see, with FSF GCC 11.1.0, the
    technique described in the blog is completely unnecessary.

    To save having to recompile the runtime with debug symbols, I wrote a
    tiny pool which delegates to GNAT's
    System.Pool_Global.Global_Pool_Object (the default pool), e.g.

    overriding procedure Allocate
      (Pool                     : in out My_Pool.Pool;
       Storage_Address          :    out Address;
       Size_In_Storage_Elements : in     Storage_Elements.Storage_Count;
       Alignment                : in     Storage_Elements.Storage_Count)
    is
       pragma Unreferenced (Pool);
    begin
       Global_Pool_Object.Allocate
         (Address      => Storage_Address,
          Storage_Size => Size_In_Storage_Elements,
          Alignment    => Alignment);
    end Allocate;

    and I find with

    Pool : My_Pool.Pool;

    type C is new Ada.Finalization.Controlled with null record;
    type Cs is array (Natural range <>) of C;
    type Csp is access Cs with Storage_Pool => Pool;
    procedure Free is new Ada.Unchecked_Deallocation (Cs, Csp);
    Pcs : Csp;

    begin

    Pcs := new Cs (0 .. 5);
    Free (Pcs);

    that

    * the alignment requested is 8 (was 4 for an array of Boolean);
    * the size requested is 72, which is 24 bytes more than required for the
    6 minimal POs;
    * the value returned by Allocate is 24 bytes more than the address of
    the array object Pcs (which is the same as that of Pcs(0));
    * the value passed to Deallocate is the same as that returned by
    Allocate.

    I think it's more than likely (!) that the extra allocation of 24 bytes
    is made up of 2 pointers at 8 bytes each, used to implement the
    finalization chain, and two integers at 4 bytes each, holding the array
    bounds.

    So I'd say that to create a pool with extra header information, you'd
    need to allocate space for your header + padding to ensure that the
    compiler's object is properly aligned + the compiler-requested size,
    aligned to the max of your header's alignment and the compiler-requested alignment.
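    That sizing rule might be sketched as follows (a sketch only, not the
    compiler's actual computation; for simplicity the whole block is
    assumed to be allocated at the max of the two alignments, and the
    header is padded to the compiler-requested alignment):

    ```ada
    with System.Storage_Elements; use System.Storage_Elements;

    --  Pad the header so the compiler's object starts on the requested
    --  alignment, then add the compiler-requested size.
    function Total_Size
      (Header_Size     : Storage_Count;  --  your header, in storage elements
       Requested_Size  : Storage_Count;  --  Size_In_Storage_Elements
       Requested_Align : Storage_Count)  --  Alignment parameter
       return Storage_Count
    is
       Padded_Header : constant Storage_Count :=
         ((Header_Size + Requested_Align - 1) / Requested_Align)
         * Requested_Align;
    begin
       return Padded_Header + Requested_Size;
    end Total_Size;
    ```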

    Mind, I don't quite see how to actually access the header info for a
    particular allocation ...

  • From Simon Wright@21:1/5 to Simon Wright on Wed Sep 15 18:03:43 2021
    Simon Wright <simon@pushface.org> writes:

    Well, I may well have missed the point somewhere, and maybe things
    have changed since 2015, but as far as I can see, with FSF GCC 11.1.0,
    the technique described in the blog is completely unnecessary.

    Looking at the current implementation in [1], it seems that I was right.
    (I don't quite follow the complication of Typed.Header_Of at line 114;
    it goes to show that if you're writing the compiler you can add
    attributes whenever it suits you, even if IMO they weren't actually
    needed.)

    [1] https://github.com/AdaCore/gnatcoll-core/blob/master/src/gnatcoll-storage_pools-headers.adb

  • From Dmitry A. Kazakov@21:1/5 to Simon Wright on Wed Sep 15 21:07:22 2021
    On 2021-09-15 18:43, Simon Wright wrote:

    To save having to recompile the runtime with debug symbols, I wrote a
    tiny pool which delegates to GNAT's
    System.Pool_Global.Global_Pool_Object (the default pool), e.g.

    overriding procedure Allocate
      (Pool                     : in out My_Pool.Pool;
       Storage_Address          :    out Address;
       Size_In_Storage_Elements : in     Storage_Elements.Storage_Count;
       Alignment                : in     Storage_Elements.Storage_Count)
    is
       pragma Unreferenced (Pool);
    begin
       Global_Pool_Object.Allocate
         (Address      => Storage_Address,
          Storage_Size => Size_In_Storage_Elements,
          Alignment    => Alignment);
    end Allocate;

    and I find with

    Pool : My_Pool.Pool;

    type C is new Ada.Finalization.Controlled with null record;
    type Cs is array (Natural range <>) of C;
    type Csp is access Cs with Storage_Pool => Pool;

    Now define and implement the following function:

    function Get_Allocation_Time (Pointer : Csp) return Time;

    The function returns the time when the pointed object was allocated in
    the pool.

    [...]

    Mind, I don't quite see how to actually access the header info for a particular allocation ...

    By subtracting a fixed offset from Pointer.all'Address. The offset is to
    be determined, because X'Address lies.
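    For the Get_Allocation_Time challenge, that subtraction might look like
    this (a hedged sketch: Header, Offset and Header_Of are illustrative
    names, and Offset must have been determined by some scheme such as the
    run-time learning discussed earlier):

    ```ada
    with System;                  use System;
    with System.Storage_Elements; use System.Storage_Elements;
    with System.Address_To_Access_Conversions;
    with Ada.Calendar;            use Ada.Calendar;

    package Allocation_Times is
       type Header is record
          Allocated_At : Time;  --  stamped by the pool's Allocate
       end record;

       package Conv is new System.Address_To_Access_Conversions (Header);

       --  The fixed offset from the element back to its header; how it
       --  is determined is the whole point of this discussion.
       Offset : Storage_Offset := 0;

       --  Element - Offset is where the header was placed by Allocate.
       function Header_Of (Element : Address) return Conv.Object_Pointer
         is (Conv.To_Pointer (Element - Offset));
    end Allocation_Times;
    ```

    Get_Allocation_Time (Pointer) would then simply return
    Header_Of (Pointer.all'Address).Allocated_At.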

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Simon Wright@21:1/5 to Dmitry A. Kazakov on Wed Sep 15 21:40:38 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Mind, I don't quite see how to actually access the header info for a
    particular allocation ...

    By subtracting a fixed offset from Pointer.all'Address. The offset is
    to be determined, because X'Address lies.

    Obvs.

    But my main point was, the blog which was Jere's original problem is in
    fact (now?) wrong.

    Anyway, the Gnatcoll solution is here ... https://github.com/AdaCore/gnatcoll-core/blob/master/src/gnatcoll-storage_pools-headers.adb#L114

  • From Emmanuel Briot@21:1/5 to All on Thu Sep 16 00:12:58 2021
    I am the original implementor of GNATCOLL.Storage_Pools.Headers, and I
    have been silent in this discussion because I must admit I forgot a lot
    of the details... To be sure, we did not add new attributes just for the
    sake of GNATCOLL; those existed previously, so they had likely already
    found other uses.

    As has been mentioned several times in this discussion, the compiler
    already passes the full size it needs to Allocate, and in general the
    storage pool only needs to allocate that exact amount. This applies to
    the usual kinds of storage pools, which would for instance preallocate
    a pool for objects of fixed sizes, or add stronger alignment
    requirements.

    In the case of the GNATCOLL headers pool, we need to allocate more
    because the user wants to store extra data. For that data, we are left
    on our own to find the number of bytes we need, which is part of the
    computation we do: we of course need the number of bytes for the
    header's object_size, but perhaps also some extra bytes that are not
    returned by that object_size, in particular for controlled types and
    arrays. Note again that those additional bytes are for the header type,
    not for the type the user is allocating (for which, again, the compiler
    already passes the number of bytes it needs).

    The next difficulty is then to convert from the object's 'Address back
    to your extra header data. This is where you need to know the size of
    the prefix added by the compiler (array bounds, tag, ...), skip it,
    take the alignment into account, and finally subtract the size of your
    header. Dmitry's suggested exercise (storing the timestamp of the
    allocation) seems like a useful one indeed. It would be nice if
    GNATCOLL's code were more complicated than necessary, but I am afraid
    this isn't the case: we had used those pools to implement reference
    counting for various complex types, and ended up with that complexity.

    AdaCore (Olivier Hainque) has made a change to the implementation since
    the blog was published
    (https://github.com/AdaCore/gnatcoll-core/commits/master/src/gnatcoll-storage_pools-headers.adb),
    so apparently I got some details wrong in the initial implementation.

    Emmanuel

  • From Dmitry A. Kazakov@21:1/5 to Simon Wright on Thu Sep 16 10:41:41 2021
    On 2021-09-15 22:40, Simon Wright wrote:

    But my main point was, the blog which was Jere's original problem is in
    fact (now?) wrong.

    Right, when nothing is to be added to the allocated object, there is
    nothing to worry about.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Jere@21:1/5 to Simon Wright on Thu Sep 16 16:32:52 2021
    On Wednesday, September 15, 2021 at 3:01:52 AM UTC-4, Simon Wright wrote:
    Jere <> writes:

    Thanks for the response. I'm sorry for all the questions. That's how
    I learn and I realize it isn't a popular way to learn in the
    community, but I have always learned very differently than most.
    Seems to me you ask interesting questions which generate enlightening responses!
    Thanks! Though in this case my question was ill-formed because I missed
    a detail in the blog, so the mistake is on me. I will say I hold back
    some questions, as it is very intimidating to ask on c.l.a. The first
    response led off with "Not sure what you are expecting", so it is hard
    to know how to formulate a good question; I always seem to get some
    harsh responses (which, I am sure, is because I asked the question
    poorly). I'm unfortunately a very visual person, words are not my
    forte, and I feel that when I ask questions about the boundaries of the
    language I manage to put folks on the defensive. I don't dislike Ada at
    all, it is my favorite language, but I find it hard to craft questions
    on some topics without giving the impression that I don't like it, at
    least with my limited ability at word craft.

  • From Jere@21:1/5 to Emmanuel on Thu Sep 16 16:21:58 2021
    On Thursday, September 16, 2021 at 3:13:00 AM UTC-4, Emmanuel wrote:
    In the case of the GNATCOLL headers pool, we need to allocate more because the user wants to store extra data. For that data, we are left on our own to find the number of bytes we need, which is part of the computation we do: we of course need the
    number of bytes for the header's object_size, but also perhaps some extra bytes that are not returned by that object_size in particular for controlled types and arrays.
    Note again that those additional bytes are for the header type, not for the type the user is allocating (for which, again, the compiler already passes the number of bytes it needs).

    <SNIPPED>
    Emmanuel

    Thanks for the response, Emmanuel. That clears it up for me. I think my
    confusion came from the terminology used. In the blog, that extra space
    for First/Last and Prev/Next was described as if it were for the
    element, which I mistook for the user's object being allocated rather
    than the header portion. I didn't catch that it was the generic
    formal's name, so that is my mistake. I guess in my head I would have
    expected the formal to be named Header_Type or similar, so I misread it
    in my haste.

    I appreciate the clarity and apologize if I caused too much of a stir. I was asking the question
    because I didn't understand, so I hope you don't think too poorly of me for it, despite my mistake.

  • From Emmanuel Briot@21:1/5 to All on Fri Sep 17 00:08:46 2021
    Thanks for the response Emmanuel. That clears it up for me. I think the confusion for me
    came from the terminology used then. In the blog, that extra space for First/Last and Prev/Next
    was mentioned as if it were for the element, which I mistook was the user's object being allocated
    and not the header portion. I didn't catch that as the generic formal's name, so that is my mistake.
    I guess in my head, I would have expected the formal name to be Header_Type or similar so I
    misread it in my haste.

    Sure, and maybe the blog was not as readable as it should have been.
    Unfortunately I no longer have any way to amend it, so I guess we'll
    live with it!

    Thanks for the questions

    Emmanuel

  • From Simon Wright@21:1/5 to Jere on Fri Sep 17 08:18:20 2021
    Jere <jhb.chat@gmail.com> writes:

    On Thursday, September 16, 2021 at 3:13:00 AM UTC-4, Emmanuel wrote:
    In the case of the GNATCOLL headers pool, we need to allocate more
    because the user wants to store extra data. For that data, we are
    left on our own to find the number of bytes we need, which is part of
    the computation we do: we of course need the number of bytes for the
    header's object_size, but also perhaps some extra bytes that are not
    returned by that object_size in particular for controlled types and
    arrays.
    Note again that those additional bytes are for the header type, not
    for the type the user is allocating (for which, again, the compiler
    already passes the number of bytes it needs).

    Thanks for the response Emmanuel. That clears it up for me. I think
    the confusion for me came from the terminology used then. In the
    blog, that extra space for First/Last and Prev/Next was mentioned as
    if it were for the element, which I mistook was the user's object
    being allocated and not the header portion. I didn't catch that as
    the generic formal's name, so that is my mistake.

    Given this diagram from the blog, you can hardly be blamed (says a
    person who leapt to the same conclusion):

    +--------+-------+------+----------+------+---------+
    | Header | First | Last | Previous | Next | Element |
    +--------+-------+------+----------+------+---------+

  • From Dmitry A. Kazakov@21:1/5 to Jere on Fri Sep 17 15:56:20 2021
    On 2021-09-17 01:21, Jere wrote:

    I appreciate the clarity and apologize if I caused too much of a stir.

    It is not that we have huge traffic in c.l.a.

    I was asking the question
    because I didn't understand, so I hope you don't think too poorly of me for it, despite my mistake.

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long-standing painful problem that needs to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Simon Wright@21:1/5 to Dmitry A. Kazakov on Fri Sep 17 20:46:26 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long standing painful problem that need to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about:
    Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
    (storage units, I think, introduced in 2017).

    [1] https://docs.adacore.com/live/wave/gnat_rm/html/gnat_rm/gnat_rm/implementation_defined_attributes.html#attribute-descriptor-size
    [2] https://docs.adacore.com/live/wave/gnat_rm/html/gnat_rm/gnat_rm/implementation_defined_attributes.html#attribute-finalization-size
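    A quick way to inspect them (a sketch, not tested code, and
    GNAT-specific rather than portable Ada; per the GNAT RM,
    Descriptor_Size is reported in bits and Finalization_Size in storage
    units):

    ```ada
    with Ada.Text_IO; use Ada.Text_IO;
    with Ada.Finalization;

    procedure Show_Overheads is
       type C is new Ada.Finalization.Controlled with null record;
       type Cs is array (Natural range <>) of C;
       X : Cs (0 .. 5);
    begin
       --  GNAT-specific attributes; not available in other compilers.
       Put_Line ("Descriptor bits :" & Integer'Image (Cs'Descriptor_Size));
       Put_Line ("Finalization SU :" & Integer'Image (X'Finalization_Size));
    end Show_Overheads;
    ```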

  • From Dmitry A. Kazakov@21:1/5 to Simon Wright on Fri Sep 17 22:39:05 2021
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long standing painful problem that need to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about, Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2] (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody really
    cares about.

    What is needed is for the address passed to Deallocate, should the
    object be freed, to equal the address returned by Allocate. Is that too
    much to ask?

    BTW, finalization lists (#2) should have been removed from the language
    long ago. They have absolutely no use, except maybe for debugging, and
    introduce huge overhead. The semantics should have been that only
    Unchecked_Deallocation or compiler-allocated objects/components may
    call Finalize, nothing else.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Niklas Holsti@21:1/5 to Dmitry A. Kazakov on Sat Sep 18 00:17:50 2021
    On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long standing painful problem that need to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about,
    Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
    (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody really
    cares about.

    What is needed is the address passed to Deallocate should the object be
    freed = the address returned by Allocate. Is that too much to ask?


    That is already required by RM 13.11(21.7/3): "The value of the
    Storage_Address parameter for a call to Deallocate is the value returned
    in the Storage_Address parameter of the corresponding successful call to Allocate."

    The "size" parameters are also required to be the same in the calls to Deallocate and to Allocate.


    BTW, finalization lists (#2) should have been removed from the language
    long ago.


    Huh? Where does the RM _require_ finalization lists? I see them
    mentioned here and there as a _possible_ implementation technique, and
    an alternative "PC-map" technique is described in RM 7.6.1 (24.r .. 24.t).

  • From Dmitry A. Kazakov@21:1/5 to Niklas Holsti on Sat Sep 18 09:49:16 2021
    On 2021-09-17 23:17, Niklas Holsti wrote:
    On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long-standing painful problem that needs to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about,
    Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
    (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody really
    cares about.

    What is needed is the address passed to Deallocate should the object
    be freed = the address returned by Allocate. Is that too much to ask?

    That is already required by RM 13.11(21.7/3): "The value of the Storage_Address parameter for a call to Deallocate is the value returned
    in the Storage_Address parameter of the corresponding successful call to Allocate."

    You missed the discussion totally. It is about X'Address attribute.

    The challenge: write a pool with a function returning an object's
    allocation time from its pool-specific access type.

    BTW, finalization lists (#2) should have been removed from the
    language long ago.

    Huh? Where does the RM _require_ finalization lists?

    7.6.1 (11.1/3)

    I see them
    mentioned here and there as a _possible_ implementation technique, and
    an alternative "PC-map" technique is described in RM 7.6.1 (24.r .. 24.t).

    I don't care about techniques to implement meaningless stuff. It should
    be out, at least there must be a representation aspect for turning this
    mess off.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Niklas Holsti@21:1/5 to Dmitry A. Kazakov on Sat Sep 18 12:03:03 2021
    On 2021-09-18 10:49, Dmitry A. Kazakov wrote:
    On 2021-09-17 23:17, Niklas Holsti wrote:
    On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long-standing painful problem that needs to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about,
    Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
    (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody really
    cares about.

    What is needed is the address passed to Deallocate should the object
    be freed = the address returned by Allocate. Is that too much to ask?

    That is already required by RM 13.11(21.7/3): "The value of the
    Storage_Address parameter for a call to Deallocate is the value
    returned in the Storage_Address parameter of the corresponding
    successful call to Allocate."

    You missed the discussion totally. It is about X'Address attribute.


    Sure, I understand that the address returned by Allocate, and passed to Deallocate, for an object X, is not always X'Address, and that you would
    like some means to get the Allocate/Deallocate address from (an access
    to) X. But what you stated as not "too much to ask" is specifically
    required in the RM paragraph I quoted. Perhaps you meant to state
    something else, about X'Address or some other attribute, but that was
    not what you wrote.

    Given that an object can be allocated in multiple independent pieces, it
    seems unlikely that what you want will be provided. You would need some
    way of iterating over all the pieces allocated for a given object, or
    some way of defining a "main" or "prime" piece and a means to get the Allocate/Deallocate address for that piece.


    BTW, finalization lists (#2) should have been removed from the
    language long ago.

    Huh? Where does the RM _require_ finalization lists?

    7.6.1 (11.1/3)


    RM (2012) 7.6.1 (11.1/3) says only that objects must be finalized in
    reverse order of their creation. There is no mention of "list".


    I see them mentioned here and there as a _possible_ implementation
    technique, and an alternative "PC-map" technique is described in RM
    7.6.1 (24.r .. 24.t).

    I don't care about techniques to implement meaningless stuff. It should
    be out, at least there must be a representation aspect for turning this
    mess off.


    Then your complaint seems to be about something specified for the order
    of finalization, but you haven't said clearly what that something is.

  • From Dmitry A. Kazakov@21:1/5 to Niklas Holsti on Sat Sep 18 12:22:45 2021
    On 2021-09-18 11:03, Niklas Holsti wrote:
    On 2021-09-18 10:49, Dmitry A. Kazakov wrote:
    On 2021-09-17 23:17, Niklas Holsti wrote:
    On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long-standing painful problem that needs to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about,
    Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
    (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody
    really cares about.

    What is needed is the address passed to Deallocate should the object
    be freed = the address returned by Allocate. Is that too much to ask?

    That is already required by RM 13.11(21.7/3): "The value of the
    Storage_Address parameter for a call to Deallocate is the value
    returned in the Storage_Address parameter of the corresponding
    successful call to Allocate."

    You missed the discussion totally. It is about X'Address attribute.


    Sure, I understand that the address returned by Allocate, and passed to Deallocate, for an object X, is not always X'Address, and that you would
    like some means to get the Allocate/Deallocate address from (an access
    to) X. But what you stated as not "too much to ask" is specifically
    required in the RM paragraph I quoted. Perhaps you meant to state
    something else, about X'Address or some other attribute, but that was
    not what you wrote.

    I wrote about attributes, specifically GNAT-specific ones used in the
    blog to calculate the correct address. "Too much to ask" was about an
    attribute that would return the object address directly.

    Given that an object can be allocated in multiple independent pieces, it seems unlikely that what you want will be provided.

    Such implementations would automatically disqualify the compiler. Compiler-generated piecewise allocation is OK for the stack, not for
    user storage pools.

    And anyway, all this equally applies to X'Address.

    BTW, finalization lists (#2) should have been removed from the
    language long ago.

    Huh? Where does the RM _require_ finalization lists?

    7.6.1 (11 1/3)


    RM (2012) 7.6.1 (11.1/3) says only that objects must be finalized in
    reverse order of their creation. There is no mention of "list".

    It talks about "collection."

    Then your complaint seems to be about something specified for the order
    of finalization, but you haven't said clearly what that something is.

    No, it is about the overhead of maintaining "collections" associated
    with an access type in order to call Finalization for all members of the collection.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Simon Wright@21:1/5 to Randy Brukardt on Sat Sep 18 12:32:52 2021
    "Randy Brukardt" <randy@rrsoftware.com> writes:

    Not sure what you are expecting. There is no requirement that objects
    are allocated contiguously. Indeed, Janus/Ada will call Allocate as
    many times as needed for each object; for instance, unconstrained
    arrays are in two parts (descriptor and data area).

    The referenced blog[1] says

    "As we mentioned before, we need to ensure that the bounds for
    unconstrained arrays are stored next to the element, not in a
    separate memory block, to improve performance. This is done by
    setting the Size attribute on the type. When we set this size to
    that of a standard pointer, GNAT automatically changes the layout,
    so that we now have:

    +-------+------+---------+
    | First | Last | Element |
    +-------+------+---------+

    I _think_ I was aware of this before, in fact I remember using it, but
    not where! Is it documented anywhere?

    [1] https://blog.adacore.com/header-storage-pools
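    In source, the trick the blog describes is just a Size clause on the
    access type (a sketch reusing the names from the earlier example;
    Standard'Address_Size is itself GNAT-specific, and whether the bounds
    then end up inline is GNAT behaviour, not a language guarantee):

    ```ada
    type Cs  is array (Natural range <>) of C;
    type Csp is access Cs with Storage_Pool => Pool;

    --  Force a thin (single machine word) pointer: GNAT then places
    --  First/Last in front of the element data instead of keeping them
    --  in a separately allocated bounds record.
    for Csp'Size use Standard'Address_Size;
    ```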

  • From Niklas Holsti@21:1/5 to Dmitry A. Kazakov on Sat Sep 18 18:59:27 2021
    On 2021-09-18 13:22, Dmitry A. Kazakov wrote:
    On 2021-09-18 11:03, Niklas Holsti wrote:
    On 2021-09-18 10:49, Dmitry A. Kazakov wrote:
    On 2021-09-17 23:17, Niklas Holsti wrote:
    On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Nope, especially because the issue with X'Address being unusable for
    memory pool developers is a long-standing painful problem that needs to
    be resolved. That will never happen until a measurable group of people
    start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about,
    Descriptor_Size[1] (bits, introduced in 2011) and
    Finalization_Size[2]
    (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody
    really cares about.

    What is needed is the address passed to Deallocate should the
    object be freed = the address returned by Allocate. Is that too
    much to ask?

    That is already required by RM 13.11(21.7/3): "The value of the
    Storage_Address parameter for a call to Deallocate is the value
    returned in the Storage_Address parameter of the corresponding
    successful call to Allocate."

    You missed the discussion totally. It is about X'Address attribute.


    Sure, I understand that the address returned by Allocate, and passed
    to Deallocate, for an object X, is not always X'Address, and that you
    would like some means to get the Allocate/Deallocate address from (an
    access to) X. But what you stated as not "too much to ask" is
    specifically required in the RM paragraph I quoted. Perhaps you meant
    to state something else, about X'Address or some other attribute, but
    that was not what you wrote.

    I wrote about attributes, specifically GNAT-specific ones used in the
    blog to calculate the correct address.


    You wrote about the attributes in an earlier paragraph, not the one that
    said "too much to ask" -- see the quotes above. But ok, enough said.


    "Too much to ask" was about an
    attribute that would return the object address directly.

    Given that an object can be allocated in multiple independent pieces,
    it seems unlikely that what you want will be provided.

    Such implementations would automatically disqualify the compiler. Compiler-generated piecewise allocation is OK for the stack, not for
    user storage pools.


    That is your opinion (or need), to which you are entitled, of course,
    but it is not an RM requirement on compilers -- see Randy's posts about
    what Janus/Ada does.


    BTW, finalization lists (#2) should have been removed from the
    language long ago.

    Huh? Where does the RM _require_ finalization lists?

    7.6.1 (11 1/3)


    RM (2012) 7.6.1 (11.1/3) says only that objects must be finalized in
    reverse order of their creation.


    Oops, I quoted the above from 7.6.1 (11/3), which specifies the order of finalization in another case (finalization of a master). RM 7.6.1
    (11.1/3) leaves the order arbitrary for the finalization of a collection.


    There is no mention of "list".

    It talks about "collection."

    Then your complaint seems to be about something specified for the
    order of finalization, but you haven't said clearly what that
    something is.

    No, it is about the overhead of maintaining "collections" associated
    with an access type in order to call Finalization for all members of the collection.


    So you want a way to specify that for a given access type, although the accessed object type has a Finalize operation or needs finalization, the objects left over in the (at least conceptually) associated collection
    should _not_ be finalized when the scope of the access type is left?
    Have you proposed this to the ARG?

To me it seems a risky thing to do, subverting the normal semantics of initialization and finalization. Perhaps it could be motivated for library-level collections in programs that are known to never terminate
(that is, to not need any clean-up when they do stop or are stopped).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dmitry A. Kazakov@21:1/5 to Niklas Holsti on Sat Sep 18 18:19:23 2021
    On 2021-09-18 17:59, Niklas Holsti wrote:

    So you want a way to specify that for a given access type, although the accessed object type has a Finalize operation or needs finalization, the objects left over in the (at least conceptually) associated collection
    should _not_ be finalized when the scope of the access type is left?

    Exactly, especially because these objects are not deallocated, as you
    say they are left over. If they wanted GC they should do that. If they
    do not, then they should keep their hands off the objects maintained by
    the programmer.

To me it seems a risky thing to do, subverting the normal semantics of initialization and finalization.

    Quite the opposite, it is the collection rule that subverts semantics
    because objects are not freed, yet mangled.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Niklas Holsti@21:1/5 to Dmitry A. Kazakov on Sun Sep 19 13:36:11 2021
    On 2021-09-18 19:19, Dmitry A. Kazakov wrote:
    On 2021-09-18 17:59, Niklas Holsti wrote:

    So you want a way to specify that for a given access type, although
    the accessed object type has a Finalize operation or needs
    finalization, the objects left over in the (at least conceptually)
    associated collection should _not_ be finalized when the scope of the
    access type is left?

    Exactly, especially because these objects are not deallocated, as you
    say they are left over. If they wanted GC they should do that. If they
    do not, then they should keep their hands off the objects maintained by
    the programmer.

To me it seems a risky thing to do, subverting the normal semantics of
    initialization and finalization.

    Quite the opposite, it is the collection rule that subverts semantics
    because objects are not freed, yet mangled.


    Local variables declared in a subprogram are also not explicitly freed (deallocated), yet they are automatically finalized when the subprogram returns.

    My understanding of Ada semantic principles is that any object that is initialized should also be finalized. Since the objects left in a
    collection associated with a local access type become inaccessible when
    the scope of the access type is left, finalizing them automatically,
    even without an explicit free, makes sense to me. If you disagree,
    suggest a change to the ARG and see if you can find supporters.

    Has this feature of Ada caused you real problems in real applications,
    or is it only a point of principle for you?

  • From Dmitry A. Kazakov@21:1/5 to Niklas Holsti on Sun Sep 19 13:41:00 2021
    On 2021-09-19 12:36, Niklas Holsti wrote:
    On 2021-09-18 19:19, Dmitry A. Kazakov wrote:
    On 2021-09-18 17:59, Niklas Holsti wrote:

    So you want a way to specify that for a given access type, although
    the accessed object type has a Finalize operation or needs
    finalization, the objects left over in the (at least conceptually)
    associated collection should _not_ be finalized when the scope of the
    access type is left?

    Exactly, especially because these objects are not deallocated, as you
    say they are left over. If they wanted GC they should do that. If they
    do not, then they should keep their hands off the objects maintained
    by the programmer.

To me it seems a risky thing to do, subverting the normal semantics
    of initialization and finalization.

    Quite the opposite, it is the collection rule that subverts semantics
    because objects are not freed, yet mangled.

    Local variables declared in a subprogram are also not explicitly freed (deallocated), yet they are automatically finalized when the subprogram returns.

    Local objects are certainly freed. Explicit or not, aggregated or not,
    is irrelevant.

    My understanding of Ada semantic principles is that any object that is initialized should also be finalized.

    IFF deallocated.

An application that runs continuously will never deallocate, and HENCE
will never finalize, certain objects.

    Has this feature of Ada caused you real problems in real applications,
    or is it only a point of principle for you?

    1. It is a massive overhead in both memory and performance terms with no purpose whatsoever. I fail to see where that sort of thing might be even marginally useful. Specialized pools, e.g. mark-and-release will deploy
    their own bookkeeping, not rely on this.

2. What is worse is that a collection is not bound to the pool. It is to an
    access type, which may have a narrower scope. So the user could declare
    an unfortunate access type, which would corrupt objects in the pool and
    the pool designer has no means to prevent that.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Jere@21:1/5 to Randy Brukardt on Sun Sep 19 17:31:26 2021
Follow-up question, because Randy's statement (below) got me thinking:
    If a compiler is allowed to break up an allocation into multiple
    calls to Allocate (and of course Deallocate), how does one go about
    enforcing that the user's header is only created once? In the example
    Randy gave (unconstrained arrays), in Janus there is an allocation for the descriptor and a separate allocation for the data. If I am making a storage pool that is intending to create a hidden header for my objects, this means
    in Janus Ada (and potentially other compilers) I would instead create two headers, one for the descriptor and one for the data, when I might intend
    to have one header for the entire object.

    On Monday, September 13, 2021 at 1:29:39 AM UTC-4, Randy Brukardt wrote:
Not sure what you are expecting. There is no requirement that objects are allocated contiguously. Indeed, Janus/Ada will call Allocate as many times
    as needed for each object; for instance, unconstrained arrays are in two parts (descriptor and data area).

    <SNIPPED>

    Randy.


    "Jere" <> wrote in message
    news:e3c5c553-4a7f-408a...@googlegroups.com...
    I was learning about making user defined storage pools when
    I came across an article that made me pause and wonder how
    portable storage pools actually can be. In particular, I assumed
    that the Size_In_Storage_Elements parameter in the Allocate
    operation actually indicated the total number of storage elements
    needed.

    <SNIPPED>


  • From Emmanuel Briot@21:1/5 to All on Sun Sep 19 23:48:20 2021
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?

I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.

    I think the only solution would be for this compiler to have another attribute similar to 'Storage_Pool, but that would define the pool for the descriptor:

    for X'Storage_Pool use Pool;
    for X'Descriptor_Storage_Pool use Other_Pool;

    That way the user can decide when to add (or not) extra headers.

  • From Niklas Holsti@21:1/5 to Jere on Mon Sep 20 09:34:47 2021
    On 2021-09-20 3:31, Jere wrote:
    Followup question cause Randy's statement (below) got me thinking:
    If a compiler is allowed to break up an allocation into multiple
    calls to Allocate (and of course Deallocate), how does one go about
    enforcing that the user's header is only created once?


    I think one cannot enforce that, because the calls to Allocate do not
    indicate (with parameters) which set of calls concern the same object allocation.

    This is probably why Dmitry said that such compiler behaviour would
    "disqualify the compiler" for his uses.

  • From Niklas Holsti@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 10:05:14 2021
    On 2021-09-19 14:41, Dmitry A. Kazakov wrote:
    On 2021-09-19 12:36, Niklas Holsti wrote:
    On 2021-09-18 19:19, Dmitry A. Kazakov wrote:
    On 2021-09-18 17:59, Niklas Holsti wrote:

    So you want a way to specify that for a given access type, although
    the accessed object type has a Finalize operation or needs
    finalization, the objects left over in the (at least conceptually)
    associated collection should _not_ be finalized when the scope of
    the access type is left?

    Exactly, especially because these objects are not deallocated, as you
    say they are left over. If they wanted GC they should do that. If
    they do not, then they should keep their hands off the objects
    maintained by the programmer.

To me it seems a risky thing to do, subverting the normal semantics
    of initialization and finalization.

    Quite the opposite, it is the collection rule that subverts semantics
    because objects are not freed, yet mangled.

    Local variables declared in a subprogram are also not explicitly freed
    (deallocated), yet they are automatically finalized when the
    subprogram returns.

    Local objects are certainly freed. Explicit or not, aggregated or not,
    is irrelevant.


    Objects left over in a local collection may certainly be freed
    automatically, if the implementation has created a local pool for them.
    See ARM 13.11 (2.a): "Alternatively, [the implementation] might choose
    to create a new pool at each accessibility level, which might mean that
    storage is reclaimed for an access type when leaving the appropriate scope."


    My understanding of Ada semantic principles is that any object that is
    initialized should also be finalized.

    IFF deallocated.

An application that runs continuously will never deallocate, and HENCE
will never finalize, certain objects.


    And I agreed that in this case it could be nice to let the programmer
    specify that keeping collections is not needed.


    Has this feature of Ada caused you real problems in real applications,
    or is it only a point of principle for you?

    1. It is a massive overhead in both memory and performance terms with no purpose whatsoever. [...]


    Have you actually measured or observed that overhead in some application?


2. What is worse is that a collection is not bound to the pool. It is to an access type, which may have a narrower scope. So the user could declare
    an unfortunate access type, which would corrupt objects in the pool and
    the pool designer has no means to prevent that.


    So there is a possibility of programmer mistake, leading to unintended finalization of those (now inaccessible) objects.

    However, your semantic argument (as opposed to the overhead argument)
    seems to be based on an assumption that the objects "left over" in a
    local collection, and which thus are inaccessible, will still, somehow, participate in the later execution of the program, which is why you say
    that finalizing those objects would "corrupt" them.

    It seems to me that such continued participation is possible only if the objects contain tasks or are accessed through some kind of unchecked programming. Do you agree?

  • From Niklas Holsti@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 11:08:52 2021
    On 2021-09-20 10:35, Dmitry A. Kazakov wrote:
    On 2021-09-20 09:05, Niklas Holsti wrote:


    [snipping context]


    However, your semantic argument (as opposed to the overhead argument)
    seems to be based on an assumption that the objects "left over" in a
    local collection, and which thus are inaccessible, will still,
    somehow, participate in the later execution of the program, which is
    why you say that finalizing those objects would "corrupt" them.

    It seems to me that such continued participation is possible only if
    the objects contain tasks or are accessed through some kind of
    unchecked programming. Do you agree?

    No. You can have them accessible over other access types with wider scopes:

       Collection_Pointer := new X;
       Global_Pointer := Collection_Pointer.all'Unchecked_Access;



    So, unchecked programming, as I said.

  • From Dmitry A. Kazakov@21:1/5 to Niklas Holsti on Mon Sep 20 09:35:35 2021
    On 2021-09-20 09:05, Niklas Holsti wrote:
    On 2021-09-19 14:41, Dmitry A. Kazakov wrote:
    On 2021-09-19 12:36, Niklas Holsti wrote:
    On 2021-09-18 19:19, Dmitry A. Kazakov wrote:
    On 2021-09-18 17:59, Niklas Holsti wrote:

    So you want a way to specify that for a given access type, although
    the accessed object type has a Finalize operation or needs
    finalization, the objects left over in the (at least conceptually)
    associated collection should _not_ be finalized when the scope of
    the access type is left?

    Exactly, especially because these objects are not deallocated, as
    you say they are left over. If they wanted GC they should do that.
    If they do not, then they should keep their hands off the objects
    maintained by the programmer.

To me it seems a risky thing to do, subverting the normal semantics
    of initialization and finalization.

    Quite the opposite, it is the collection rule that subverts
    semantics because objects are not freed, yet mangled.

    Local variables declared in a subprogram are also not explicitly
    freed (deallocated), yet they are automatically finalized when the
    subprogram returns.

    Local objects are certainly freed. Explicit or not, aggregated or not,
    is irrelevant.

    Objects left over in a local collection may certainly be freed
    automatically, if the implementation has created a local pool for them.
    See ARM 13.11 (2.a): "Alternatively, [the implementation] might choose
    to create a new pool at each accessibility level, which might mean that storage is reclaimed for an access type when leaving the appropriate
    scope."

May or may not. A feature whose correctness depends on scopes and lots
of further assumptions has no place in a language like Ada.

    Has this feature of Ada caused you real problems in real
    applications, or is it only a point of principle for you?

    1. It is a massive overhead in both memory and performance terms with
    no purpose whatsoever. [...]

    Have you actually measured or observed that overhead in some application?

    How?

However, you could estimate it from the most likely implementation as a doubly-linked list. You add a new element for each allocation and remove
    one for each deallocation. The elements are allocated in the same pool
    or in some other pool. Finalization is not time bounded, BTW. Nice?

2. What is worse is that a collection is not bound to the pool. It is to
    an access type, which may have a narrower scope. So the user could
    declare an unfortunate access type, which would corrupt objects in the
    pool and the pool designer has no means to prevent that.

    So there is a possibility of programmer mistake, leading to unintended finalization of those (now inaccessible) objects.

    However, your semantic argument (as opposed to the overhead argument)
    seems to be based on an assumption that the objects "left over" in a
    local collection, and which thus are inaccessible, will still, somehow, participate in the later execution of the program, which is why you say
    that finalizing those objects would "corrupt" them.

    It seems to me that such continued participation is possible only if the objects contain tasks or are accessed through some kind of unchecked programming. Do you agree?

    No. You can have them accessible over other access types with wider scopes:

    Collection_Pointer := new X;
    Global_Pointer := Collection_Pointer.all'Unchecked_Access;

    access discriminants etc. As I said, hands off!

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Dmitry A. Kazakov@21:1/5 to Emmanuel Briot on Mon Sep 20 09:35:21 2021
    On 2021-09-20 08:48, Emmanuel Briot wrote:
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?

I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.

    I think the only solution would be for this compiler to have another attribute similar to 'Storage_Pool, but that would define the pool for the descriptor:

    for X'Storage_Pool use Pool;
    for X'Descriptor_Storage_Pool use Other_Pool;

    That way the user can decide when to add (or not) extra headers.

    This will not work with arenas and stack pools. And it is error-prone
    because the attribute is associated with the access type. Furthermore,
    it is the descriptor you wanted to tag with extra data.

    One could add another primitive operation to Root_Storage_Pool:

procedure Allocate_Secondary
  (Pool                     : in out Root_Storage_Pool;
   Storage_Address          : out Address;
   Size_In_Storage_Elements : in Storage_Elements.Storage_Count;
   Alignment                : in Storage_Elements.Storage_Count;
   Segment_No               : in Positive);  -- Re-dispatches to Allocate

    The object allocation protocol would be:

    Allocate_Secondary (Pool, ..., 1);
    Allocate_Secondary (Pool, ..., 2);
    ...
    Allocate_Secondary (Pool, ..., N);
    Allocate (Pool, ...); -- Header goes here
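To make the proposed protocol concrete, here is a hedged sketch (nothing in it is standard Ada or any existing compiler's behaviour; Allocate_Secondary, Segment_No, and the package names are assumptions for illustration). The default body just re-dispatches to the concrete pool's Allocate, so a header-adding pool that only overrides Allocate would stamp exactly one header per object, with the secondary pieces passing through untouched:

```ada
--  Hedged sketch of the hypothetical Allocate_Secondary protocol.
--  Nothing here exists in any Ada standard or compiler.
with System;                  use System;
with System.Storage_Pools;    use System.Storage_Pools;
with System.Storage_Elements; use System.Storage_Elements;

package Segmented_Pools is

   --  Abstract, so only the proposed addition needs declaring here.
   type Segmented_Pool is abstract new Root_Storage_Pool with null record;

   procedure Allocate_Secondary
     (Pool                     : in out Segmented_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count;
      Segment_No               : in Positive);
   --  Called once per secondary piece (array descriptors, etc.);
   --  the final, ordinary Allocate call carries the user header.

end Segmented_Pools;

package body Segmented_Pools is

   procedure Allocate_Secondary
     (Pool                     : in out Segmented_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count;
      Segment_No               : in Positive)
   is
   begin
      --  Default: re-dispatch to the concrete pool's Allocate via a
      --  class-wide view conversion, so existing pools keep working.
      Allocate (Segmented_Pool'Class (Pool),
                Storage_Address, Size_In_Storage_Elements, Alignment);
   end Allocate_Secondary;

end Segmented_Pools;
```

In practice the spec and body would be separate compilation units; they are shown together only for compactness.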

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Dmitry A. Kazakov@21:1/5 to Niklas Holsti on Mon Sep 20 10:28:28 2021
    On 2021-09-20 10:08, Niklas Holsti wrote:
    On 2021-09-20 10:35, Dmitry A. Kazakov wrote:

    No. You can have them accessible over other access types with wider
    scopes:

        Collection_Pointer := new X;
        Global_Pointer := Collection_Pointer.all'Unchecked_Access;

    So, unchecked programming, as I said.

    Right, working with pools is all that thing. Maybe "new" should be named "unchecked_new" (:-))

Finalize and Initialize certainly should have been Unchecked_Finalize
and Unchecked_Initialize, as they are not enforced. You can override the
parent's Initialize and never call it. It is a plain primitive
operation anybody can call any time, any place. You can even call it
before the object is fully initialized!

    So, why bother with objects the user manually allocates (and forgets to
    free)?

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Shark8@21:1/5 to briot.e on Mon Sep 20 09:59:11 2021
    On Monday, September 20, 2021 at 12:48:21 AM UTC-6, briot.e wrote:
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?

I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.

I think the only solution would be for this compiler to have another
attribute similar to 'Storage_Pool, but that would define the pool for
the descriptor:

    for X'Storage_Pool use Pool;
    for X'Descriptor_Storage_Pool use Other_Pool;

    That way the user can decide when to add (or not) extra headers.
    Hmmm, smells like a place to use generics and subpools; perhaps something like:

Generic
   Type Element(<>) is limited private;
   Type Descriptor(<>) is limited private;
   with function Create( Item : Element ) return Descriptor;
Package Descriptor_Subpool is
   Type Pool_Type is new System.Storage_Pools.Subpools.Root_Storage_Pool_With_Subpools with private;
Private
   -- Element-subpool & descriptor-subpool defined here.
   -- Allocation of element also allocates Descriptor.
End Descriptor_Subpool;

    Just top-of-the-head musings though.

  • From Randy Brukardt@21:1/5 to All on Mon Sep 20 18:51:15 2021
    Sorry about that, I didn't understand what you were asking. And I get
    defensive about people who think that a pool should get some specific Size
    (and only that size), so I leapt to a conclusion and answered accordingly.

The compiler requests all of the memory IT needs, but if the pool needs some
additional memory for its own purposes (pretty common), it will need to add
that space itself. It's hard to imagine how it could be otherwise; I guess I
would have thought that goes without saying. (And that rather proves that
there is nothing that goes without saying.)

    Randy.

    "Jere" <jhb.chat@gmail.com> wrote in message news:96e7199f-c354-402f-a6c6-2a0e042b6747n@googlegroups.com...
    On Wednesday, September 15, 2021 at 3:01:52 AM UTC-4, Simon Wright wrote:
    Jere <> writes:

    Thanks for the response. I'm sorry for all the questions. That's how
    I learn and I realize it isn't a popular way to learn in the
    community, but I have always learned very differently than most.
    Seems to me you ask interesting questions which generate enlightening
    responses!
Thanks! Though in this case my question was ill formed after I missed a
detail in the blog, so the mistake is on me. I will say I hold back some
questions as it is very intimidating to ask on C.L.A. I mean, the first
response led off with "Not sure what you are expecting", so it is hard to
know how to formulate a good question, as I always seem to get some harsh
responses (which I am sure is because I asked the question poorly). I'm
unfortunately a very visual person and words are not my forte, and I feel
like when I ask questions about the boundaries of the language I manage to
put folks on the defensive. I don't dislike Ada at all, it is my favorite
language, but I think it is hard to craft questions on some topics without
putting forth the impression that I don't like it, at least with my limited
ability to word craft.

  • From Randy Brukardt@21:1/5 to J-P. Rosen on Mon Sep 20 18:58:34 2021
    "J-P. Rosen" <rosen@adalog.fr> wrote in message news:shpg9b$t23$1@dont-email.me...
On 2021-09-14 08:23, Dmitry A. Kazakov wrote:
    Of course, a proper solution would be fixing Ada by adding another
    address attribute:

    X'Object_Address

    returning the first address of the object as allocated.
    But you cannot assume that the object is allocated as one big chunk.
    Bounds can be allocated at a different place. What would be
    X'Object_Address in that case?

    The address of the real object, which is the bounds. (I'm using "object" in
    the Janus/Ada compiler sense and not in the Ada sense.) The only way I could make sense of the implementation requirements for Janus/Ada was to have
    every object be statically sized. If the Ada object is *not* statically
    sized, then the Janus/Ada object is a descriptor that provides access to
    that Ada object data.

    Generally, one wants access to the statically sized object, as that provides access to everything else (there may be multiple levels if discriminant-dependent arrays are involved). That's not what 'Address is supposed to provide, so the address used internally to the compiler is the wrong answer in Ada terms, but it is the right answer for most uses (storage pools being an obvious example).

    When one specifies 'Address in Janus/Ada, you are specifying the address of
    the statically allocated data. The rest of the object lives in some storage pool and it makes absolutely no sense to try to force that somewhere.

    There's no sensible reason to expect 'Address to be
    implementation-independent; specifying the address of unconstrained arrays
    is nonsense unless you know that the same Ada compiler is creating the
    object you are accessing -- hardly a common case.

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 18:48:02 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:shpf4d$1a6s$1@gioia.aioe.org...
    On 2021-09-14 02:48, Jere wrote:
    ...
    The problem with unconstrained arrays is not that the bounds are not allocated, they are, but the semantics of X'Address when applied to
    arrays.

    A'Address is the address of the first array element, not of the array
    object. For a pool designer it constitutes a problem of getting the array object by address. This is what Emmanuel discusses in the blog.

    Right, this is why Janus/Ada never "fixed" 'Address to follow the Ada requirement. (Our Ada 83 compiler treats the "object" as whatever the
    top-level piece is, for an unconstrained array, that's the bounds -- the
    data lies elsewhere and is separately allocated -- something that follows
    from slice semantics.)

    I suppose your suggestion of implementing yet-another-attribute is probably
    the right way to go, and then finding every use of 'Address in existing RRS Janus/Ada code and changing it to use the new attribute that works "right".

    Randy.



[ The motivation behind the Ada choice was probably to keep the semantics implementation-independent. ]

Consider for example a list of String elements. When Allocate is called
for a String, it returns the address of the whole String object. But that is not the address you would get if you applied 'Address. You have to add/subtract
some offset in order to get one from the other.

    In Simple Components this offset is determined at run-time for each
    generic instance.
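One way to determine such an offset at run time is to allocate a single probe object from a pool that records the address it returned, and subtract. The following is a sketch of that general idea only, not Simple Components' actual code; Probe_Pool, Last_Allocated, Element, and Element_Ptr are assumed names:

```ada
--  Sketch: Probe_Pool is an assumed user pool whose Allocate stores
--  the address it returned in Probe_Pool.Last_Allocated, and
--  Element_Ptr is an access type with
--  "for Element_Ptr'Storage_Pool use Probe_Pool;".
declare
   use System.Storage_Elements;
   P      : constant Element_Ptr := new Element;  --  allocated in Probe_Pool
   Offset : constant Storage_Offset :=
     P.all'Address - Probe_Pool.Last_Allocated;
begin
   --  Offset now converts between X'Address and the address Allocate
   --  returned, for this specific type on this specific compiler.
   null;
end;
```

Caveat: this is only meaningful when the compiler allocates the object in a single piece; with piecewise allocation (as in Janus/Ada), Last_Allocated would hold the address of the last piece only.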

    Of course, a proper solution would be fixing Ada by adding another address attribute:

    X'Object_Address

    returning the first address of the object as allocated.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to All on Mon Sep 20 19:06:58 2021
    "Jere" <jhb.chat@gmail.com> wrote in message news:036086ba-ea40-44cb-beb7-cded0f501cfbn@googlegroups.com...
    On Monday, September 13, 2021 at 1:29:39 AM UTC-4, Randy Brukardt wrote:
    Not sure what you are expecting. There is no requirement that objects are
allocated contiguously. Indeed, Janus/Ada will call Allocate as many
    times
    as needed for each object; for instance, unconstrained arrays are in two
    parts (descriptor and data area).

No expectations. Just questions. I wasn't concerned with whether the
allocated memory was contiguous or not, but whether an implementation is
required to supply the correct size of memory needed to allocate an object,
or if it is allowed to pass a value to Size that is less than the amount of
memory actually needed. For example, the blog there indicates the maintainer
of the custom storage pool needs to account for the First/Last indexes of an
unconstrained array separately, instead of assuming that value is included
as part of the Size parameter's value.

If the Size parameter isn't required to include space for First/Last for
unconstrained arrays or Prev/Next for controlled objects (assuming that is
even the implementation picked, of course), then I'm not seeing a way to
write a custom storage pool that is portable, because you need to account
for each implementation's "hidden" values that are not represented in the
Size parameter.

    No, that would be wrong. The implementation has to calculate the Size of
    memory that it needs, no less.

For example, if Janus calculated Size to include both the size of the array
and the size of First and Last, but GNAT didn't, and my storage pool assumed
the Janus method, then if someone used my storage pool with GNAT it could
erroneously access memory from some other location.

    No. What you cannot assume is that all of the memory is allocated at once. There can be multiple parts. But the compiler has to figure out the right
    size for each part, it can't tell you it needs 8 bytes and use 10. That
    would be a broken compiler.
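A cheap way to see what a particular compiler actually requests is an instrumented pool that logs every Allocate call. The following is a sketch (the bump allocator is deliberately minimal, Deallocate is a no-op, and the package name is made up); counting log lines per `new` shows how many pieces the compiler asks for:

```ada
--  Sketch of a tracing pool for observing a compiler's allocation
--  behaviour.  Not production code: no thread safety, no reclamation,
--  and it assumes the buffer never fills exactly to the last element.
with Ada.Text_IO;             use Ada.Text_IO;
with System;                  use System;
with System.Storage_Pools;    use System.Storage_Pools;
with System.Storage_Elements; use System.Storage_Elements;

package Tracing_Pools is

   type Tracing_Pool (Size : Storage_Count) is
     new Root_Storage_Pool with record
      Buffer : Storage_Array (1 .. Size);
      Top    : Storage_Count := 1;
   end record;

   overriding procedure Allocate
     (Pool                     : in out Tracing_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count);

   overriding procedure Deallocate
     (Pool                     : in out Tracing_Pool;
      Storage_Address          : in Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count) is null;

   overriding function Storage_Size
     (Pool : Tracing_Pool) return Storage_Count is (Pool.Size);

end Tracing_Pools;

package body Tracing_Pools is

   overriding procedure Allocate
     (Pool                     : in out Tracing_Pool;
      Storage_Address          : out Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count)
   is
      Base : constant Address := Pool.Buffer (Pool.Top)'Address;
      --  Padding needed to honour the requested alignment:
      Skip : constant Storage_Offset :=
        (Alignment - Base mod Alignment) mod Alignment;
   begin
      Put_Line ("Allocate: size"
                & Storage_Count'Image (Size_In_Storage_Elements)
                & ", alignment" & Storage_Count'Image (Alignment));
      if Pool.Top + Skip + Size_In_Storage_Elements - 1 > Pool.Size then
         raise Storage_Error;
      end if;
      Storage_Address := Base + Skip;
      Pool.Top := Pool.Top + Skip + Size_In_Storage_Elements;
   end Allocate;

end Tracing_Pools;
```

Attaching it with `for T'Storage_Pool use My_Pool;` and evaluating a few allocators then shows, per compiler, whether each `new` produces one Allocate call or several.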

The only thing that you can assume in a portable library is that you get
called the same number of times and with the same sizes/alignments for
Allocate and Deallocate; there are no assumptions about size or alignment
that you can make.
    So to be clear, you cannot assume that Size and Alignment are appropriate
    for the actual object being allocated correct? Size could actually be
    less than the actual amount of memory needed and the alignment may only
    apply to part of the object being allocated, not the full object?

    Yes and no. You can't assume anything about the Size and Alignment passed.
    But whatever is passed has to be what the compiler actually needs.

    Is that correct? I'm asking because that is what the blog suggests with
    the example it gave.

    The blog sounds like nonsense for most uses. It sounds like someone is
    trying to do something very GNAT-specific -- and that's fine (I have lots of pools that assume the size of array descriptors in Janus/Ada to separate
    those from the array data allocations). But it's irrelevant for normal use.


If you want to build a pool around some specific allocated size, then if it
needs to be portable, (A) you have to calculate the allocated size, and (B)
you have to have a mechanism for what to do if some other size is requested.
(Allocate a whole block for smaller sizes, fall back to built-in heap for
too large is what I usually do.)

    Are there any good tricks to handle this? For example, if I design a
    storage pool around constructing a particular type of object, what is normally done to discourage another programmer from using the pool with
    an entirely different type? Maybe raise an exception if the size isn't exact?
    I'm not sure what else, unless maybe there is an Aspect/Attribute that
    can be set to ensure only a specific type of object can be constructed.

    I either raise Program_Error (if I'm lazy), or simply hand off "wrong-sized" allocations/deallocations to the standard Storage_Pool.
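That strategy can be sketched roughly as follows. Fixed_Pool, Slot_Size, Take_Free_Slot, and Fallback are all assumed names for illustration; Fallback stands for any general-purpose pool object (GNAT users could point it at System.Pool_Global.Global_Pool_Object, which is GNAT-specific):

```ada
--  Sketch of the "wrong size" fallback strategy; all names other than
--  the Allocate profile are assumptions, not a real library's API.
overriding procedure Allocate
  (Pool                     : in out Fixed_Pool;
   Storage_Address          : out Address;
   Size_In_Storage_Elements : in Storage_Count;
   Alignment                : in Storage_Count) is
begin
   if Size_In_Storage_Elements /= Pool.Slot_Size then
      --  Lazy option:  raise Program_Error;
      --  Friendlier option: hand the odd-sized request to a
      --  general-purpose pool.  A real pool must also remember this
      --  choice so Deallocate can route the address back the same way.
      Allocate (Fallback, Storage_Address,
                Size_In_Storage_Elements, Alignment);
   else
      Storage_Address := Take_Free_Slot (Pool);
   end if;
end Allocate;
```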

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 19:18:31 2021
    Janus/Ada uses chains for everything; there is no attempt to do anything
    else. They make dealing with partially initialized objects (when some initialization fails) much easier, and are strongly related to the lists of allocated memory that Janus/Ada uses anyway.

It's easy to deal with a stand-alone controlled object, but when you have dynamic components of dynamically sized objects and have to
deal with failures and allocated objects, the headaches just aren't worth it (IMHO). The cost is insignificant unless you actually have controlled
objects, so you only pay for what you use (to steal a line from a commercial that runs far too often in the US).

    Randy.


    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si2ud7$fv2$1@gioia.aioe.org...
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

Nope, especially because the issue with X'Address being unusable for
memory pool developers is a long-standing painful problem that needs to
be resolved. That will never happen until a measurable group of people
start asking questions. So you are doubly welcome.

    There are two attributes that we should all have known about,
    Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
    (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody really
    cares about.

What is needed is that the address passed to Deallocate, should the object
be freed, equal the address returned by Allocate. Is that too much to ask?

BTW, finalization lists (#2) should have been removed from the language
long ago. They have absolutely no use, except maybe for debugging, and introduce huge overhead. The semantics should have been that either Unchecked_Deallocation or compiler-allocated objects/components may call Finalize, nothing else.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 19:19:38 2021
    There is: Restriction No_Controlled_Types. - Randy
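
    As a sketch of how that restriction is applied: it is a configuration
    pragma, placed for example in a `gnat.adc` file under GNAT (the file name
    is compiler-specific).

    ```ada
    --  Partition-wide configuration pragma: rejects any use of
    --  controlled types, which also lets the compiler drop the
    --  finalization bookkeeping discussed in this thread.
    pragma Restrictions (No_Controlled_Types);
    ```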

    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si45ls$1goa$1@gioia.aioe.org...
    On 2021-09-17 23:17, Niklas Holsti wrote:
    On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
    On 2021-09-17 21:46, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

Nope, especially because the issue with X'Address being unusable for memory pool developers is a long standing painful problem that need to be resolved. That will never happen until a measurable group of people start asking questions. So you are doubly welcome.

There are two attributes that we should all have known about,
Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2] (storage units, I think, introduced in 2017)

    They are non-standard and have murky semantics I doubt anybody really
    cares about.

    What is needed is the address passed to Deallocate should the object be
    freed = the address returned by Allocate. Is that too much to ask?

    That is already required by RM 13.11(21.7/3): "The value of the
    Storage_Address parameter for a call to Deallocate is the value returned
    in the Storage_Address parameter of the corresponding successful call to
    Allocate."

You missed the discussion totally. It is about the X'Address attribute.

The challenge: write a pool with a function returning an object's allocation
time given its pool-specific access value.
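
    That challenge can be made concrete with a sketch (all names and structure
    below are the editor's assumptions, not code from the thread): a pool that
    stamps each allocation with the clock, keyed by the address Allocate
    returned. The catch is exactly the X'Address problem being discussed: for
    objects with bounds or finalization headers, `Obj'Address` need not equal
    the address the pool recorded, so the lookup can miss.

    ```ada
    with Ada.Calendar;            use Ada.Calendar;
    with Ada.Containers.Ordered_Maps;
    with System;                  use System;
    with System.Storage_Elements; use System.Storage_Elements;
    with System.Storage_Pools;    use System.Storage_Pools;

    package Timed_Pools is

       package Stamp_Maps is new Ada.Containers.Ordered_Maps
         (Key_Type => Address, Element_Type => Time, "<" => System."<");

       type Timed_Pool is new Root_Storage_Pool with record
          Stamps : Stamp_Maps.Map;   --  one Clock entry per Allocate
       end record;

       overriding procedure Allocate
         (Pool                     : in out Timed_Pool;
          Storage_Address          : out Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count);

       overriding procedure Deallocate
         (Pool                     : in out Timed_Pool;
          Storage_Address          : Address;
          Size_In_Storage_Elements : Storage_Count;
          Alignment                : Storage_Count);

       overriding function Storage_Size
         (Pool : Timed_Pool) return Storage_Count;

       --  The portability trap: callers will naturally pass Obj'Address,
       --  but Stamps is keyed by what Allocate returned, and the two can
       --  differ by a descriptor or finalization header.
       function Allocation_Time
         (Pool : Timed_Pool; At_Address : Address) return Time
         is (Pool.Stamps.Element (At_Address));

    end Timed_Pools;
    ```

    Bodies are omitted: Allocate would record Clock under the returned address
    and Deallocate would delete the entry. Nothing portable guarantees that
    `Obj'Address` is a key in that map, which is the point of the thread.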

    BTW, finalization lists (#2) should have been removed from the language
    long ago.

    Huh? Where does the RM _require_ finalization lists?

7.6.1 (11.1/3)

    I see them mentioned here and there as a _possible_ implementation
    technique, and an alternative "PC-map" technique is described in RM 7.6.1
    (24.r .. 24.t).

    I don't care about techniques to implement meaningless stuff. It should be out, at least there must be a representation aspect for turning this mess off.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 19:30:41 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si77kd$rka$1@gioia.aioe.org...
    On 2021-09-19 12:36, Niklas Holsti wrote:
    ...
    Local variables declared in a subprogram are also not explicitly freed
    (deallocated), yet they are automatically finalized when the subprogram
    returns.

    Local objects are certainly freed. Explicit or not, aggregated or not, is irrelevant.

    OK...

    My understanding of Ada semantic principles is that any object that is
    initialized should also be finalized.

    IFF deallocated.

    ...as you note above for stack objects, all objects are conceptually deallocated. Whether the memory is actually returned to a storage pool is irrelevant.

    The original Ada model was that Unchecked_Deallocation is something to be avoided if at all possible (thus the name), one would never want to tie finalization to such a thing.

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 19:26:19 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si4ell$1b25$1@gioia.aioe.org...
    ...
    Given that an object can be allocated in multiple independent pieces, it
    seems unlikely that what you want will be provided.

    Such implementations would automatically disqualify the compiler. Compiler-generated piecewise allocation is OK for the stack, not for user storage pools.

If someone wants to require contiguous allocation of objects, there should
be a representation attribute to specify it. And there should not be any nonsense restrictions on records with defaulted discriminants unless you specify that you require contiguous allocation. There is no good reason to
need that for 99% of objects, so why insist on a very expensive implementation
of slices/unconstrained arrays unless it's required?

    ...
    No, it is about the overhead of maintaining "collections" associated with
    an access type in order to call Finalization for all members of the collection.

    How else would you ensure that Finalize is always called on an allocated object? Unchecked_Deallocation need not be called on an allocated object.
    The Ada model is that Finalize will ALWAYS be called on every controlled
    object before the program ends; there are no "leaks" of finalization. Otherwise, one cannot depend on finalization for anything important; you
    would often leak resources (especially for simple kernels that don't try to free unreleased resources themselves).

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 19:37:56 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si77kd$rka$1@gioia.aioe.org...
    ...
    1. It is a massive overhead in both memory and performance terms with no purpose whatsoever. I fail to see where that sort of thing might be even marginally useful.

The classic example of Finalization is file management on a simple kernel (I use CP/M as the example in my head). CP/M did not try to recover any
resources on program exit; it was the program's responsibility to recover
them all (or reboot after every run). If you had holes in finalization, you would easily leak files, and since you could only open a limited number of
them at a time, you could easily make a system non-responsive.

    You get similar things on some embedded kernels.

If you only write programs that live in ROM and never, ever terminate, then
you probably have different requirements. Most likely, you shouldn't be
using Finalization at all (or at least not putting such objects in allocated things).
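
    The file-handle example can be made concrete with a small sketch (the
    package and type names are the editor's, not from the post): a controlled
    wrapper whose Finalize closes the file, so the OS-level handle is released
    however the enclosing scope is left. On a kernel that does not reclaim
    handles at program exit (the CP/M case above), watertight finalization is
    what prevents the leak.

    ```ada
    with Ada.Finalization;
    with Ada.Text_IO;

    package File_Guards is

       --  Limited_Controlled because File_Type is itself limited.
       type File_Guard is new Ada.Finalization.Limited_Controlled with record
          File : Ada.Text_IO.File_Type;
       end record;

       overriding procedure Finalize (Guard : in out File_Guard);

    end File_Guards;

    package body File_Guards is

       overriding procedure Finalize (Guard : in out File_Guard) is
          use Ada.Text_IO;
       begin
          --  Runs on normal scope exit, on exception propagation, and
          --  on collection teardown alike.
          if Is_Open (Guard.File) then
             Close (Guard.File);
          end if;
       end Finalize;

    end File_Guards;
    ```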

    ...
2. What is worse is that a collection is not bound to the pool. It is tied to an access type, which may have a narrower scope. So the user could declare an unfortunate access type, which would corrupt objects in the pool, and the
pool designer has no means to prevent that.

    Pools are extremely low-level things that cannot be safe in any sense of the word. A badly designed pool will corrupt everything. Using a pool with the "wrong" access type generally has to be programmed for (as I answered
    earlier, if I assume anything about allocations, I check for violations and
    do something else.) And a pool can be used with many access types; many
    useful ones are.

    Some of what you want is provided by the subpool mechanism, but it is even
    more complex at runtime, so it probably doesn't help you.

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 20 19:45:28 2021
User calls on Initialize and Finalize have no special meaning; they're
ignored for the purposes of language-defined finalization. The fact that they're normal routines and can be called by someone else means that some defensive programming is needed. That all happened because of the "scope reduction" of Ada 9X; the dedicated creation/finalization mechanism got
dumped. Finalization was too important to lose completely, so Tucker cooked
up the current much simpler mechanism in order to avoid the objections. It's not ideal for that reason -- but Finalize would still have been a normal subprogram that anyone could call (what else could it have been -- at most, the mechanism of stream attributes could have been used instead). I don't think there is a way that one could have prevented user-defined calls to these routines (even if they had a special name, you still could have renamed an existing subprogram to the special name).

    Randy.


    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si9gnb$926$1@gioia.aioe.org...
    On 2021-09-20 10:08, Niklas Holsti wrote:
    On 2021-09-20 10:35, Dmitry A. Kazakov wrote:

    No. You can have them accessible over other access types with wider
    scopes:

    Collection_Pointer := new X;
    Global_Pointer := Collection_Pointer.all'Unchecked_Access;

    So, unchecked programming, as I said.

    Right, working with pools is all that thing. Maybe "new" should be named "unchecked_new" (:-))

Finalize and Initialize certainly should have been Unchecked_Finalize and Unchecked_Initialize, as they are not enforced. You can override the
parent's Initialize and never call it. They are plain primitive operations anybody can call any time, any place. You can even call them before the
object is fully initialized!

    So, why bother with objects the user manually allocates (and forgets to free)?

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to All on Mon Sep 20 19:50:38 2021
    A better solution would be to know the size of those bounds objects and
    treat them differently (I've done that). And the next allocation is going to
    be the data, so I don't do anything special for them. Probably would be nice
    to have an attribute for that. But no one has ever asked for any such thing,
    so I haven't defined anything.

Such pools are highly implementation specific, so I haven't worried about
this much.
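
    What "treating bounds objects differently" might look like, as an
    implementation-specific heuristic (all names and the layout assumption
    below are the editor's, tied to a hypothetical compiler that allocates one
    First/Last pair of Integer bounds per dimension; this is explicitly not
    portable Ada):

    ```ada
    with System;
    with System.Storage_Elements; use System.Storage_Elements;

    package Descriptor_Heuristics is

       --  Assumed layout: one First/Last pair of Integer bounds per
       --  array dimension, allocated separately from the array data.
       Bounds_Pair_Size : constant Storage_Count :=
         Storage_Count (2 * (Integer'Size / System.Storage_Unit));

       --  True when a request is exactly the size of the assumed bounds
       --  descriptor, suggesting (on such a compiler) that the very next
       --  Allocate call will carry the array data.
       function Looks_Like_Descriptor
         (Size       : Storage_Count;
          Dimensions : Positive) return Boolean
       is (Size = Storage_Count (Dimensions) * Bounds_Pair_Size);

    end Descriptor_Heuristics;
    ```

    A pool's Allocate could divert matching requests to a side area, as Randy
    describes doing for Janus/Ada array descriptors; whether the heuristic
    fires correctly depends entirely on the compiler's actual layout.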

    Randy.

    "Emmanuel Briot" <briot.emmanuel@gmail.com> wrote in message news:44be7c73-f69e-45da-9916-b14a43a05ea3n@googlegroups.com...
    If a compiler is allowed to break up an allocation into multiple
    calls to Allocate (and of course Deallocate), how does one go about
    enforcing that the user's header is only created once?
    I think one cannot enforce that, because the calls to Allocate do not
    indicate (with parameters) which set of calls concern the same object
    allocation.

    I think the only solution would be for this compiler to have another attribute similar to 'Storage_Pool, but that would define the pool for the descriptor:

    for X'Storage_Pool use Pool;
    for X'Descriptor_Storage_Pool use Other_Pool;

    That way the user can decide when to add (or not) extra headers.

  • From Randy Brukardt@21:1/5 to Niklas Holsti on Mon Sep 20 19:40:18 2021
    "Niklas Holsti" <niklas.holsti@tidorum.invalid> wrote in message news:iqqtskF3losU1@mid.individual.net...
    On 2021-09-20 10:35, Dmitry A. Kazakov wrote:
    On 2021-09-20 09:05, Niklas Holsti wrote:


    [snipping context]


    However, your semantic argument (as opposed to the overhead argument)
    seems to be based on an assumption that the objects "left over" in a
    local collection, and which thus are inaccessible, will still, somehow,
    participate in the later execution of the program, which is why you say
    that finalizing those objects would "corrupt" them.

It seems to me that such continued participation is possible only if the
objects contain tasks or are accessed through some kind of unchecked
programming. Do you agree?

    No. You can have them accessible over other access types with wider
    scopes:

    Collection_Pointer := new X;
    Global_Pointer := Collection_Pointer.all'Unchecked_Access;



    So, unchecked programming, as I said.

    Yup, and when you do stuff like that, you deserve for the compiler to shoot
    you in the head.

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Tue Sep 21 08:28:13 2021
    On 2021-09-21 02:37, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si77kd$rka$1@gioia.aioe.org...
    ...
    1. It is a massive overhead in both memory and performance terms with no
    purpose whatsoever. I fail to see where that sort of thing might be even
    marginally useful.

    The classic example of Finalization is file management on a simple kernel (I use CP/M as the example in my head). CP/M did not try to recover any resources on program exit, it was the programs responsibility to recover
    them all (or reboot after every run). If you had holes in finalization, you would easily leak files and since you could only open a limited number of them at a time, you could easily make a system non-responsive.

    This is why system resources are handled by the OS rather than by the application. But I do not see how this justifies "collections."

2. What is worse is that a collection is not bound to the pool. It is tied to an
access type, which may have a narrower scope. So the user could declare an unfortunate access type, which would corrupt objects in the pool, and the
pool designer has no means to prevent that.

    Pools are extremely low-level things that cannot be safe in any sense of the word. A badly designed pool will corrupt everything. Using a pool with the "wrong" access type generally has to be programmed for (as I answered earlier, if I assume anything about allocations, I check for violations and do something else.) And a pool can be used with many access types; many useful ones are.

    This is also true, but again unrelated to the point that tying
    finalization *without* deallocation to a pointer type is just wrong, semantically on any abstraction level.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Tue Sep 21 08:51:08 2021
    On 2021-09-21 02:26, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si4ell$1b25$1@gioia.aioe.org...
    ...
Given that an object can be allocated in multiple independent pieces, it seems unlikely that what you want will be provided.

    Such implementations would automatically disqualify the compiler.
    Compiler-generated piecewise allocation is OK for the stack, not for user
    storage pools.

If someone wants to require contiguous allocation of objects, there should
be a representation attribute to specify it.

It would be difficult, because the types are declared prior to pools, and
that is when the object layout does change.

If the layout does not change, then you need no attribute.

    You can always run a mock allocation to compute overall size and offsets
    to the pieces and then do one true allocation. And with stream
    attributes you need to implement introspection anyway. So this might
    have been an issue for Ada 83, but now one can simply require contiguous allocation in pools.

And there should not be any nonsense restrictions on records with defaulted discriminants unless you specify that you require contiguous allocation.

You can keep the object layout. It is only a question of "trolling"
the pool, not how objects are represented there.

    No, it is about the overhead of maintaining "collections" associated with
    an access type in order to call Finalization for all members of the
    collection.

    How else would you ensure that Finalize is always called on an allocated object?

    I would not, because it is plain wrong. Finalize must be called for each *deallocated* object.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Jere@21:1/5 to Randy Brukardt on Tue Sep 21 16:08:56 2021
I think the only thing that misses is scenarios where the compiler vendor
isn't allocating a descriptor/bounds but is still using multiple allocations for the
object. I don't know if that is a practical use, but is it one the RM allows? If
so, it is probably more useful to know whether a specific Allocate call is somehow
unique relative to the others (the first call, the last call, etc.), so that the
developer could earmark that one as the one to add the custom header
to.

We can't change the Allocate specification since it is what it is, but is there any consideration to adding functionality to the root storage pool type?
Maybe a class-wide function that lets the compiler developer set an internal flag for that unique allocation, and a class-wide function for the storage
pool developer to see if that flag was set for the allocation. Or some other mechanism. It seems like this would need to be some sort of runtime
mechanism if the multiple allocations can occur in the absence of needing
a descriptor or bounds.

Or maybe a generic version of the Storage_Pools package that allows a
header type to be specified: that gives the compiler vendor an
interface that easily facilitates allocating the header alongside any object at the time and place the vendor finds convenient, and gives the
custom storage pool implementer a means of knowing when that happens
so they can initialize the header in the Allocate call that creates it.

I'm obviously not a compiler developer, so I don't know the practicality
of any of that. But I think one root problem for a custom storage pool developer is "when is it safe to make a custom header for my object?".



    On Monday, September 20, 2021 at 8:50:41 PM UTC-4, Randy Brukardt wrote:
    A better solution would be to know the size of those bounds objects and
    treat them differently (I've done that). And the next allocation is going to be the data, so I don't do anything special for them. Probably would be nice to have an attribute for that. But no one has ever asked for any such thing, so I haven't defined anything.

    Such pools are highly implementation specific, so I haven't worried about this much..

    Randy.

    "Emmanuel Briot" <> wrote in message news:44be7c73-f69e-45da...@googlegroups.com...
    If a compiler is allowed to break up an allocation into multiple
    calls to Allocate (and of course Deallocate), how does one go about
    enforcing that the user's header is only created once?
    I think one cannot enforce that, because the calls to Allocate do not
    indicate (with parameters) which set of calls concern the same object
    allocation.

    I think the only solution would be for this compiler to have another attribute similar to 'Storage_Pool, but that would define the pool for the descriptor:

    for X'Storage_Pool use Pool;
    for X'Descriptor_Storage_Pool use Other_Pool;

    That way the user can decide when to add (or not) extra headers.

  • From Jere@21:1/5 to Randy Brukardt on Tue Sep 21 15:40:16 2021
    It's ok! I realize I am not great at wording questions, so I just assume
    I asked it poorly. If it helps, your response did get me thinking more
    about the specifics of contiguous allocation vs not and how it would
    affect my design, which didn't even cross my mind before.

    On Monday, September 20, 2021 at 7:51:17 PM UTC-4, Randy Brukardt wrote:
    Sorry about that, I didn't understand what you were asking. And I get defensive about people who think that a pool should get some specific Size (and only that size), so I leapt to a conclusion and answered accordingly.

The compiler requests all of the memory IT needs, but if the pool needs some additional memory for its own purposes (pretty common), it will need to add
that space itself. It's hard to imagine how it could be otherwise; I guess I would have thought that goes without saying. (And that rather proves that there is nothing that goes without saying.)

    Randy.

    "Jere" <> wrote in message
    news:96e7199f-c354-402f...@googlegroups.com...
    On Wednesday, September 15, 2021 at 3:01:52 AM UTC-4, Simon Wright wrote:
    Jere <> writes:

    Thanks for the response. I'm sorry for all the questions. That's how
    I learn and I realize it isn't a popular way to learn in the
    community, but I have always learned very differently than most.
    Seems to me you ask interesting questions which generate enlightening
    responses!
Thanks! Though in this case, my question was ill-formed after I missed a detail
in the blog, so the mistake is on me. I will say I hold back some questions,
as it is very intimidating to ask on C.L.A. I mean, the first response led off
with "Not sure what you are expecting", so it is hard to know how to formulate
a good question, as I always seem to get some harsh responses (which I am sure is because I asked the question poorly). I'm unfortunately a very visual
person and words are not my forte, and I feel like when I ask questions about
the boundaries of the language I manage to put folks on the defensive. I don't dislike Ada at all, it is my favorite language, but I think it is hard to
craft questions on some topics without putting forth the impression that
I don't like it, at least with my limited ability to word craft.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 27 23:31:34 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sibvcr$1ico$1@gioia.aioe.org...
    On 2021-09-21 02:26, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:si4ell$1b25$1@gioia.aioe.org...
    ...
Given that an object can be allocated in multiple independent pieces, it
seems unlikely that what you want will be provided.

Such implementations would automatically disqualify the compiler.
Compiler-generated piecewise allocation is OK for the stack, not for user
storage pools.

If someone wants to require contiguous allocation of objects, there should
be a representation attribute to specify it.

It would be difficult, because the types are declared prior to pools, and that
is when the object layout does change.

If the layout does not change, then you need no attribute.

    You can always run a mock allocation to compute overall size and offsets
    to the pieces and then do one true allocation. And with stream attributes
    you need to implement introspection anyway. So this might have been an
    issue for Ada 83, but now one can simply require contiguous allocation in pools.

And there should not be any nonsense restrictions on records with defaulted discriminants unless you
specify that you require contiguous allocation.

You can keep the object layout. It is only a question of "trolling" the pool, not how objects are represented there.

No, it is about the overhead of maintaining "collections" associated with
an access type in order to call Finalization for all members of the
collection.

    How else would you ensure that Finalize is always called on an allocated
    object?

    I would not, because it is plain wrong. Finalize must be called for each *deallocated* object.

    Deallocation is irrelevant. Finalization is called when objects are about to
    be destroyed, by any method. Otherwise, you do not have watertight finalization, and it is near impossible to use it for anything important.

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Mon Sep 27 23:38:30 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sibu1t$12ds$1@gioia.aioe.org...
    On 2021-09-21 02:37, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:si77kd$rka$1@gioia.aioe.org...
    ...
1. It is a massive overhead in both memory and performance terms with no purpose whatsoever. I fail to see where that sort of thing might be even marginally useful.

The classic example of Finalization is file management on a simple kernel (I
use CP/M as the example in my head). CP/M did not try to recover any
resources on program exit; it was the program's responsibility to recover
them all (or reboot after every run). If you had holes in finalization, you
would easily leak files, and since you could only open a limited number of
them at a time, you could easily make a system non-responsive.

    This is why system resources are handled by the OS rather than by the application. But I do not see how this justifies "collections."

Ada programs need no OS; OSes are really only useful for (abstracted) I/O,
and for everything else they're mainly in the way.

2. What is worse is that a collection is not bound to the pool. It is tied to an
access type, which may have a narrower scope. So the user could declare an
unfortunate access type, which would corrupt objects in the pool, and the
pool designer has no means to prevent that.

Pools are extremely low-level things that cannot be safe in any sense of the
word. A badly designed pool will corrupt everything. Using a pool with the
"wrong" access type generally has to be programmed for (as I answered
earlier, if I assume anything about allocations, I check for violations and
do something else.) And a pool can be used with many access types; many
useful ones are.

    This is also true, but again unrelated to the point that tying
    finalization *without* deallocation to a pointer type is just wrong, semantically on any abstraction level.

    If you didn't finalize everything, then a system like Claw would not work, since there would be objects that would have gotten destroyed (when the
    access type goes out of scope) and would still be on the various active
    object chains. (The whole reason that these things are controlled is so that they can be added to and removed from object chains as needed.)

    Now, you could say that no one should be declaring access types locally in subprograms -- I'd agree with that, but it isn't Ada.

    Even if your semantics only happened for library-level access types, then
    you'd still have problems when library-level objects go away. I suppose you could say those never go away either -- but again, that only makes sense for programs that never terminate. On Windows, you would have a mess.

    Randy.

  • From Randy Brukardt@21:1/5 to All on Mon Sep 27 23:42:12 2021
    "Jere" <jhb.chat@gmail.com> wrote in message news:6a073ced-4c3b-4e87-8063-555a93a5c3f6n@googlegroups.com...
    ...
    We can't change the Allocate specification since it is what it is, but is there
    any consideration to adding functionality to the root storage pool type,

    We tried that as a solution for the user-defined dereference problem, and it ended up going nowhere. Your problem is different but the issues of changing the Storage_Pool spec remain. Not sure it could be made to work (one does
    not want to force everyone to change their existing storage pools).

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Tue Sep 28 08:56:41 2021
    On 2021-09-28 06:31, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sibvcr$1ico$1@gioia.aioe.org...
    On 2021-09-21 02:26, Randy Brukardt wrote:

How else would you ensure that Finalize is always called on an allocated object?

    I would not, because it is plain wrong. Finalize must be called for each
    *deallocated* object.

    Deallocation is irrelevant. Finalization is called when objects are about to be destroyed, by any method.

    And no object may be destroyed unless deallocated.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Tue Sep 28 09:00:01 2021
    On 2021-09-28 06:38, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sibu1t$12ds$1@gioia.aioe.org...

    This is also true, but again unrelated to the point that tying
    finalization *without* deallocation to a pointer type is just wrong,
    semantically on any abstraction level.

    If you didn't finalize everything, then a system like Claw would not work, since there would be objects that would have gotten destroyed (when the access type goes out of scope) and would still be on the various active object chains. (The whole reason that these things are controlled is so that they can be added to and removed from object chains as needed.)

I did not say that the pointer type going out of scope should free
objects allocated through it. I said that it should not finalize
anything. It must simply silently die, not touching anything.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Simon Wright@21:1/5 to Dmitry A. Kazakov on Tue Sep 28 08:52:31 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    On 2021-09-28 06:31, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:sibvcr$1ico$1@gioia.aioe.org...
    On 2021-09-21 02:26, Randy Brukardt wrote:

How else would you ensure that Finalize is always called on an allocated object?

I would not, because it is plain wrong. Finalize must be called for each *deallocated* object.

Deallocation is irrelevant. Finalization is called when objects are about to
be destroyed, by any method.

    And no object may be destroyed unless deallocated.

    Well, if it's important that an allocated object not be destroyed, don't allocate it from a storage pool that can go out of scope!

  • From Dmitry A. Kazakov@21:1/5 to Simon Wright on Tue Sep 28 10:07:52 2021
    On 2021-09-28 09:52, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    And no object may be destroyed unless deallocated.

    Well, if it's important that an allocated object not be destroyed, don't allocate it from a storage pool that can go out of scope!

    That was never the case.

    The case is that an object allocated in a pool gets finalized because
    the access type (not the pool!) used to allocate the object goes out of
    the scope.

    This makes no sense whatsoever.

    Again, finalization must be tied with [logical] deallocation. Just like initialization is with allocation.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Tue Sep 28 17:04:05 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:siuigp$bqs$1@gioia.aioe.org...
    On 2021-09-28 09:52, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    And no object may be destroyed unless deallocated.

    Well, if it's important that an allocated object not be destroyed, don't
    allocate it from a storage pool that can go out of scope!

    That was never the case.

    The case is that an object allocated in a pool gets finalized because the access type (not the pool!) used to allocate the object goes out of the scope.

    This makes no sense whatsoever.

    Again, finalization must be tied with [logical] deallocation. Just like initialization is with allocation.

    But it is. All of the objects allocated from an access type are logically deallocated when the access type goes out of scope (and the memory can be recovered). Remember that Ada was designed so that one never needs to use Unchecked_Deallocation.
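    Randy's point, that objects allocated via a local access type are finalized when that access type's scope ends even without an explicit Unchecked_Deallocation, can be sketched as follows. (A hypothetical demo, not from the thread; names are illustrative, and a non-library-level controlled type requires Ada 2005 or later.)

    ```ada
    with Ada.Finalization;
    with Ada.Text_IO;

    procedure Scope_Finalization_Demo is
       package Demo is
          type Trace is new Ada.Finalization.Controlled with null record;
          overriding procedure Finalize (Object : in out Trace);
       end Demo;

       package body Demo is
          overriding procedure Finalize (Object : in out Trace) is
          begin
             Ada.Text_IO.Put_Line ("Finalize called");
          end Finalize;
       end Demo;
    begin
       declare
          type Trace_Access is access Demo.Trace;  --  local access type
          P : Trace_Access := new Demo.Trace;
       begin
          null;  --  P is never explicitly freed
       end;
       --  Trace_Access's collection is finalized at the end of the block,
       --  so "Finalize called" is printed before the line below.
       Ada.Text_IO.Put_Line ("After block");
    end Scope_Finalization_Demo;
    ```

    This is exactly the RM 7.6.1 behavior under dispute: finalization is tied to the access type's scope, not to a Deallocate call.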

    I could see an unsafe language (like C) doing the sort of thing you suggest, but not Ada. Every object in Ada has a specific declaration point, initialization point, finalization point, and destruction point. There are
    no exceptions.

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Wed Sep 29 09:57:32 2021
    On 2021-09-29 00:04, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:siuigp$bqs$1@gioia.aioe.org...
    On 2021-09-28 09:52, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    And no object may be destroyed unless deallocated.

    Well, if it's important that an allocated object not be destroyed, don't allocate it from a storage pool that can go out of scope!

    That was never the case.

    The case is that an object allocated in a pool gets finalized because the
    access type (not the pool!) used to allocate the object goes out of the
    scope.

    This makes no sense whatsoever.

    Again, finalization must be tied with [logical] deallocation. Just like
    initialization is with allocation.

    But it is. All of the objects allocated from an access type are logically deallocated when the access type goes out of scope (and the memory can be recovered).

    Really? And where is the call to the pool's Deallocate in that case? You
    cannot have it both ways.

    Remember that Ada was designed so that one never needs to use Unchecked_Deallocation.

    Come on. There has never been an Ada compiler with GC. And nobody could even implement GC with the meaningless semantics of "collections" in the way, killing objects at random. With or without GC, there must be no such thing as "collections."

    I could see an unsafe language (like C) doing the sort of thing you suggest, but not Ada.

    How is randomly finalizing user-allocated and user-freed objects safe?

    And I suggest doing exactly nothing as opposed to *unsafe*, costly and meaningless behavior mandated by the standard now.

    Every object in Ada has a specific declaration point,
    initialization point, finalization point, and destruction point. There are
    no exceptions.

    Yes, and how is that related to the issue?

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Shark8@21:1/5 to Dmitry A. Kazakov on Wed Sep 29 07:41:51 2021
    On Wednesday, September 29, 2021 at 1:57:35 AM UTC-6, Dmitry A. Kazakov wrote:
    Come on. There never existed Ada compiler with GC.
    Untrue; GNAT for JVM, and GNAT for DOTNET.

    And nobody could even
    implement GC with the meaningless semantics of "collections" in the way, killing objects at random. Either with GC or without it, there must be
    no such thing as "collections."
    How does this follow?
    The 'element' type cannot go out of scope before the collection, and the collection going out of scope triggers its finalization/deallocation.

    I could see an unsafe language (like C) doing the sort of thing you suggest,
    but not Ada.
    How is randomly finalizing user-allocated and user-freed objects safe?
    Finalization *isn't* random, it happens at well-defined places.
    (And, IIRC, is idempotent; meaning that multiple calls have the same effect as a singular call.)
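    The idempotency convention Shark8 alludes to is a guideline on user-written Finalize (RM 7.6.1 advises writing it so a second call is harmless). A common pattern, sketched here with illustrative names not taken from the thread, is to guard the cleanup so a repeated call does nothing:

    ```ada
    with Ada.Finalization;
    with Ada.Unchecked_Deallocation;

    package Idempotent_Demo is
       type Buffer is access String;
       type Holder is new Ada.Finalization.Controlled with record
          Data : Buffer;
       end record;
       overriding procedure Finalize (Object : in out Holder);
    end Idempotent_Demo;

    package body Idempotent_Demo is
       procedure Free is new Ada.Unchecked_Deallocation (String, Buffer);

       overriding procedure Finalize (Object : in out Holder) is
       begin
          --  Guard makes Finalize idempotent: Free sets Object.Data to
          --  null, so a second call falls through harmlessly.
          if Object.Data /= null then
             Free (Object.Data);
          end if;
       end Finalize;
    end Idempotent_Demo;
    ```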

    And I suggest doing exactly nothing as opposed to *unsafe*, costly and meaningless behavior mandated by the standard now.
    Every object in Ada has a specific declaration point,
    initialization point, finalization point, and destruction point. There are no exceptions.
    Yes, and how is that related to the issue?
    Because these are the places that finalization (and deallocation/destruction) are defined to happen.

  • From Dmitry A. Kazakov@21:1/5 to All on Wed Sep 29 17:16:25 2021
    On 2021-09-29 16:41, Shark8 wrote:
    On Wednesday, September 29, 2021 at 1:57:35 AM UTC-6, Dmitry A. Kazakov wrote:
    Come on. There never existed Ada compiler with GC.
    Untrue; GNAT for JVM, and GNAT for DOTNET.

    Neither is full Ada, AFAIK.

    And nobody could even
    implement GC with the meaningless semantics of "collections" in the way,
    killing objects at random. Either with GC or without it, there must be
    no such thing as "collections."
    How does this follow?

    Because the rule disregards any object use. No collector, manual or
    automatic can deal with that mess.

    Finalization *isn't* random, it happens at well-defined places.

    Random = unrelated to the object's lifetime.

    (And, IIRC, is idempotent; meaning that multiple calls have the same effect as a singular call.)

    Which it obviously is not.

    And I suggest doing exactly nothing as opposed to *unsafe*, costly and
    meaningless behavior mandated by the standard now.
    Every object in Ada has a specific declaration point,
    initialization point, finalization point, and destruction point. There are no exceptions.
    Yes, and how it that related to the issue?
    Because these are the places that finalization (and deallocation/destruction) are defined to happen.

    So? How exactly does any of this imply that finalization can happen at a place other than the place of deallocation?

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Wed Sep 29 19:16:03 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj2008$1cmo$1@gioia.aioe.org...
    ...
    Random = unrelated to the object's life time.

    All objects have to disappear before their type disappears, so the object *cannot* live longer than the access type it is allocated from.
    Any use of the object after that point is erroneous, so Finalization has to happen before it as well.

    It's probably a lousy idea to share pool objects (as opposed to pool types) amongst access types. The pool object should have the same lifetime as the access type (we require that for subpools specifically because finalization doesn't make sense any other way). A similar rule should have been enforced
    for all pools, but it would be incompatible (alas).

    If you do have a longer lived pool and a shorter lived access type, you will end up with a bunch of zombie objects in the pool that cannot be used in any way (as any access is erroneous). All that can happen is a memory leak.
    Don't do that.

    ...
    And I suggest doing exactly nothing as opposed to *unsafe*, costly and
    meaningless behavior mandated by the standard now.
    Every object in Ada has a specific declaration point, initialization point, finalization point, and destruction point. There are no exceptions.
    Yes, and how is that related to the issue?
    Because these are the places that finalization (and deallocation/destruction) are defined to happen.

    So? How exactly does any of this imply that finalization can happen at a place other than the place of deallocation?

    Deallocation is at most a convenience in Ada; it isn't even required to do anything. One can never assume anything is actually recovered, so it is not a meaningful concept semantically.

    OTOH, object destruction happens before the type goes away, and finalization happens before that. That is the point here.

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Thu Sep 30 10:08:02 2021
    On 2021-09-30 02:16, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj2008$1cmo$1@gioia.aioe.org...
    ...
    Random = unrelated to the object's life time.

    All objects have to disappear before their type disappears, so the object *cannot* live longer than the access type for which it is allocated from.

    The type of the access type /= the type of object. Only access objects
    must disappear and they do.

    It's probably a lousy idea to share pool objects (as opposed to pool types) amongst access types.

    You need these for access discriminants.

    If you do have a longer lived pool and a shorter lived access type, you will end up with a bunch of zombie objects in the pool that cannot be used in any way (as any access is erroneous). All that can happen is a memory leak.
    Don't do that.

    Nope, this is exactly how it works with most specialized pools, like
    arenas, stacks, reference counting pools etc.

    And I suggest doing exactly nothing as opposed to *unsafe*, costly and meaningless behavior mandated by the standard now.
    Every object in Ada has a specific declaration point, initialization point, finalization point, and destruction point. There are no exceptions.
    Yes, and how is that related to the issue?
    Because these are the places that finalization (and deallocation/destruction) are defined to happen.

    So? How exactly does any of this imply that finalization can happen at a place other than the place of deallocation?

    Deallocation is at most a convenience in Ada; it isn't even required to do anything.

    So is Finalize.

    OTOH, object destruction happens before the type goes away, and finalization happens before that. That is the point here.

    See above, these are different objects of different types. The actual
    object type is alive and well (unless killed by some collection).

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Thu Sep 30 19:04:20 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj3r92$pla$3@gioia.aioe.org...
    On 2021-09-30 02:16, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:sj2008$1cmo$1@gioia.aioe.org...
    ...
    Random = unrelated to the object's life time.

    All objects have to disappear before their type disappears, so the object
    *cannot* live longer than the access type for which it is allocated from.

    The type of the access type /= the type of object. Only access objects
    must disappear and they do.

    ?? There is nothing in a pool except unorganized memory. "Objects" only
    exist outside of the pool for some access type. There has to be some
    organizing type, else you would never know where/when things are finalized.

    It's probably a lousy idea to share pool objects (as opposed to pool types) amongst access types.

    You need these for access discriminants.

    Those (coextensions) are one of Ada's worst ideas; they have tremendous overhead without any value. Almost everything has to take them into account. Yuck. Access discriminants of existing objects are OK but really don't add anything over a component of an access type.

    If you do have a longer lived pool and a shorter lived access type, you will end up with a bunch of zombie objects in the pool that cannot be used in any way (as any access is erroneous). All that can happen is a memory leak. Don't do that.

    Nope, this is exactly how it works with most specialized pools, like
    arenas, stacks, reference counting pools etc.

    These things don't work as pools in Ada. You need to use the subpool
    mechanism to make them safe, because otherwise the objects go away before
    the type (given these sorts of mechanisms generally have some sort of block deallocation). Again, the only thing in a pool is a chunk of raw memory; the object lives elsewhere. Subpools take care of these lifetime issues (for controlled types, no one wanted to try to make that work for tasks).

    OTOH, object destruction happens before the type goes away, and
    finalization
    happens before that. That is the point here.

    See above, these are different objects of different types. The actual
    object type is alive and well (unless killed by some collection).

    And completely irrelevant. Allocated objects can only be deallocated via the same type as they were allocated from. So they're zombies after the type goes away. Only use global general access types for allocation, never, ever anything nested.

    Indeed, I now believe that any nested access type is evil and mainly is
    useful to cause nasty cases for compilers. I'd ban them in an Ada-like
    language (that would also simplify accessibility greatly).

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Fri Oct 1 10:25:52 2021
    On 2021-10-01 02:04, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj3r92$pla$3@gioia.aioe.org...
    On 2021-09-30 02:16, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:sj2008$1cmo$1@gioia.aioe.org...
    ...
    Random = unrelated to the object's life time.

    All objects have to disappear before their type disappears, so the object *cannot* live longer than the access type it is allocated from.
    The type of the access type /= the type of object. Only access objects
    must disappear and they do.

    ??

    type T is range 1..2;
    type P is access T;

    T /= P

    There is nothing in a pool except unorganized memory. "Objects" only
    exist outside of the pool for some access type.

    No, the objects exist in the pool and are *accessible* via one or more access types, some of which could even be anonymous, BTW.

    There has to be some
    organizing type, else you would never know where/when things are finalized.

    Yes, and that is up to the programmer.

    It's probably a lousy idea to share pool objects (as opposed to pool types) amongst access types.

    You need these for access discriminants.

    Those (coextensions) are one of Ada's worst ideas; they have tremendous overhead without any value. Almost everything has to take them into account. Yuck. Access discriminants of existing objects are OK but really don't add anything over a component of an access type.

    Discriminants add safety when creating objects because the language requires them to be initialized. For components one has to use an aggregate, which becomes extremely difficult, or even impossible, in practical cases.

    If you do have a longer lived pool and a shorter lived access type, you will end up with a bunch of zombie objects in the pool that cannot be used in any way (as any access is erroneous). All that can happen is a memory leak. Don't do that.

    Nope, this is exactly how it works with most specialized pools, like
    arenas, stacks, reference counting pools etc.

    These things don't work as pools in Ada.

    Yes, they normally have Deallocate as a void operation or raise an
    exception.

    You need to use the subpool
    mechanism to make them safe,

    I do not see how that could change anything without destroying the whole purpose of such pools, namely nearly zero-cost allocation and deallocation.

    because otherwise the objects go away before
    the type (given these sorts of mechanisms generally have some sort of block deallocation).

    If controlled types need to be used, which rarely happens, bookkeeping is added to finalize them. Instead of the IMO useless subpools, one could add some allocation bookkeeping support, etc.

    Allocated objects can only be deallocated from
    the same type as they were allocated.

    You can perfectly well deallocate any object by erasing its pool, and you can finalize the object before doing that, too. Furthermore, you can use a local access type and instantiate Unchecked_Deallocation with it. There are many ways to skin the cat.
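    The local-access-type technique mentioned above can be sketched like this (illustrative names, not from the thread; the object type here is a plain record, but the same shape works for controlled types, where Free would also trigger Finalize):

    ```ada
    with Ada.Unchecked_Deallocation;

    procedure Free_Via_Local_Access is
       type T is record
          Value : Integer := 0;
       end record;

       --  A locally declared access type...
       type T_Access is access T;

       --  ...and a local instantiation of Unchecked_Deallocation for it.
       procedure Free is new Ada.Unchecked_Deallocation (T, T_Access);

       P : T_Access := new T;
    begin
       Free (P);  --  finalizes (for controlled T) and calls Deallocate
       pragma Assert (P = null);  --  Free sets the access value to null
    end Free_Via_Local_Access;
    ```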

    The language must make no baseless assumptions about the programmer's intentions.

    Indeed, I now believe that any nested access type is evil and mainly is useful to cause nasty cases for compilers. I'd ban them in an Ada-like language (that would also simplify accessibility greatly).

    See where collections have led you! (:-))

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Sat Oct 2 04:06:24 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj6gmg$1n1n$1@gioia.aioe.org...
    On 2021-10-01 02:04, Randy Brukardt wrote:
    ...
    Nope, this is exactly how it works with most specialized pools, like
    arenas, stacks, reference counting pools etc.

    These things don't work as pools in Ada.

    Yes, they normally have Deallocate as a void operation or raise an
    exception.

    No, they don't, because they don't work with controlled types, tasks, etc.
    And there is no good way to enforce that the things you allocate into them don't have controlled or task components. So they are unsafe unless you use
    the subpool mechanism.

    You need to use the subpool
    mechanism to make them safe,

    I do not see how that could change anything without destroying the whole purpose of such pools, namely nearly zero-cost allocation and
    deallocation.

    It ties any finalization to the subpool, so all of the contained objects get finalized when the subpool is freed. And it lets the compiler know what's happening, so it doesn't finalize the objects twice. Of course, if no finalization is involved, it doesn't do much of anything, but that's OK: you're prepared if any later maintenance adds finalization somewhere.

    ...
    because otherwise the objects go away before
    the type (given these sorts of mechanisms generally have some sort of
    block
    deallocation).

    If controlled types need to be used, which rarely happens, a bookkeeping
    is added to finalize them. Instead of IMO useless subpools, one could add some allocation bookkeeping support etc.

    That's again not safe in any sense. You shouldn't need to worry about
    whether some abstraction that you use uses finalization, especially as you can't know if someone adds it later.

    ...
    Indeed, I now believe that any nested access type is evil and mainly is
    useful to cause nasty cases for compilers. I'd ban them in an Ada-like
    language (that would also simplify accessibility greatly).

    See where collections have led you! (:-))

    No, that's mostly because of accessibility. I'd be happy if one banned doing any allocations with general access types (mixing global/stack allocated objects and allocated objects is pure evil IMHO), but that would be rather
    hard to enforce.

    Note that nested tagged types also cause many implementation problems,
    adding a lot of unnecessary overhead. I'd probably go as far as banning all nested types (as opposed to subtypes), as types are supposed to live the
    entire life of the program (possibly anonymously) and that is weird when applied to things in nested scopes whose definition could depend on dynamic stuff.

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Sat Oct 2 12:18:19 2021
    On 2021-10-02 11:06, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj6gmg$1n1n$1@gioia.aioe.org...
    On 2021-10-01 02:04, Randy Brukardt wrote:
    ...
    Nope, this is exactly how it works with most specialized pools, like
    arenas, stacks, reference counting pools etc.

    These things don't work as pools in Ada.

    Yes, they normally have Deallocate as a void operation or raise an
    exception.

    No, they don't, because they don't work with controlled types, tasks, etc.

    Of course they do, e.g. with a void Deallocate. When an instance of Unchecked_Deallocation is called, the object gets properly finalized, while the memory stays occupied in the pool until the whole arena is cleared.

    In the case of reference-counted objects, finalization is done using an instance of Unchecked_Deallocation. Subpools are totally useless there.

    If necessary, I use a fake pool to run Unchecked_Deallocation on it
    without reclaiming any memory from the original pool.

    You need to use the subpool
    mechanism to make them safe,

    I do not see how that could change anything without destroying the whole
    purpose of such pools, namely nearly zero-cost allocation and
    deallocation.

    It ties any finalization to the subpool, so all of the contained objects get finalized when the subpool is freed.

    Yes, and there is no need to do that in most practical scenarios. Note also that subpool allocations need to be handled in the user code. That is highly undesirable and error-prone. So, whatever safety you might get from the subpool's bookkeeping, you lose at that point.

    ...
    because otherwise the objects go away before
    the type (given these sorts of mechanisms generally have some sort of
    block
    deallocation).

    If controlled types need to be used, which rarely happens, a bookkeeping
    is added to finalize them. Instead of IMO useless subpools, one could add
    some allocation bookkeeping support etc.

    That's again not safe in any sense. You shouldn't need to worry about
    whether some abstraction that you use uses finalization, especially as you can't know if someone adds it later.

    Why is compiler-assisted bookkeeping safe for subpools, but unsafe as a stand-alone mechanism?

    ...
    Indeed, I now believe that any nested access type is evil and mainly is
    useful to cause nasty cases for compilers. I'd ban them in an Ada-like
    language (that would also simplify accessibility greatly).

    See where collections have led you! (:-))

    No, that's mostly because of accessibility. I'd be happy if one banned doing any allocations with general access types (mixing global/stack allocated objects and allocated objects is pure evil IMHO), but that would be rather hard to enforce.

    Yes, but there is an alternative option of fixing Unchecked_Deallocation through general access types.

    Note that nested tagged types also cause many implementation problems,
    adding a lot of unnecessary overhead.

    Sure, but again, there is a paramount use case that requires dynamic elaboration of tagged types, i.e. relocatable libraries. You cannot ban them, and you cannot forbid tagged extensions declared in a relocatable library. So getting rid of nested tagged types will ease nothing.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Jere@21:1/5 to Randy Brukardt on Sat Oct 2 16:19:53 2021
    I was thinking more along the lines of adding a classwide operation on the
    root storage pool type. Ideally, that shouldn't change anyone's
    implementation. Something like:

    -- Parameter is mode "in out" to allow for it to clear itself if the implementation
    -- so desires
    function Is_First_Allocation(Self : in out Root_Storage_Pool'Class) return Boolean;

    Added to System.Storage_Pools. Allow the implementation to implement it however they like under the hood. They could, for example, add a boolean
    to the private part of the root storage pool and add a child function/package that
    sets it when the compiler implementation calls for the first allocation. It can be
    implemented with a count. I'm sure there are a plethora of ways.

    Since the operation is classwide and optional, it wouldn't affect anyone's existing storage pools. It would basically just be there to give custom storage pool designers a hook to know when it is portably safe to add a custom header, regardless of the number of allocations an implementation chooses to do.

    It does place the burden on the compiler implementors to call it for the first allocation, but I can't imagine that is a huge burden with today's IDE tools?
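    To make the intent of the proposal concrete, here is a purely hypothetical usage sketch. Is_First_Allocation is NOT part of standard Ada (it is only the operation proposed above), and Header_Pool, Allocate_With_Header and Allocate_Plain are invented helper names for illustration:

    ```ada
    --  Hypothetical: assumes the proposed Is_First_Allocation hook existed.
    overriding procedure Allocate
      (Pool                     : in out Header_Pool;
       Storage_Address          : out Address;
       Size_In_Storage_Elements : Storage_Count;
       Alignment                : Storage_Count)
    is
    begin
       if Is_First_Allocation (Pool) then
          --  First call for this object: reserve room for the custom
          --  header in front of it (offset arithmetic is pool-specific).
          Allocate_With_Header
            (Pool, Storage_Address, Size_In_Storage_Elements, Alignment);
       else
          --  Auxiliary allocation (e.g. array bounds, finalization
          --  links): no header needed.
          Allocate_Plain
            (Pool, Storage_Address, Size_In_Storage_Elements, Alignment);
       end if;
    end Allocate;
    ```

    The point of the hook is precisely that the pool no longer needs to guess how many Allocate calls an implementation issues per `new`.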

    On Tuesday, September 28, 2021 at 12:42:16 AM UTC-4, Randy Brukardt wrote:
    "Jere" <> wrote in message
    news:6a073ced-4c3b-4e87...@googlegroups.com...
    ...
    We can't change the Allocate specification since it is what it is, but is there
    any consideration to adding functionality to the root storage pool type,
    We tried that as a solution for the user-defined dereference problem, and it ended up going nowhere. Your problem is different but the issues of changing the Storage_Pool spec remain. Not sure it could be made to work (one does
    not want to force everyone to change their existing storage pools).

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Sat Oct 2 23:33:13 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj9blb$1srp$1@gioia.aioe.org...
    On 2021-10-02 11:06, Randy Brukardt wrote:
    ...
    That's again not safe in any sense. You shouldn't need to worry about whether some abstraction that you use uses finalization, especially as you can't know if someone adds it later.

    Why is compiler-assisted bookkeeping safe for subpools, but unsafe as a stand-alone mechanism?

    There is no such stand-alone mechanism, and there cannot be one -- such bookkeeping requires an object to store the bookkeeping into, and there is
    none in the normal case. The only place to put such a thing is with the
    access type, thus the collection mechanism. Pools are 100% user-defined, and that can't be changed at this late date. (And if you did try to change it, you'd end up with something almost the same as subpools anyway.)

    ...
    Sure, but again, there is a paramount use case that requires dynamic elaboration of tagged types, i.e. relocatable libraries. You cannot ban them

    I suppose, but you certainly don't have to use them. That sort of thing is nonsense that simply makes programs more fragile than they have to be. I
    just had a problem with Debian where some older programs compiled with GNAT refused to run because an update had invalidated some library. Had to dig
    out the source code and recompile.

    ...
    you cannot forbid tagged extensions declared in a relocatable library.

    Of course you can. The only thing you need to be compatible with is a C interface, which is the only thing you need to interface to existing
    libraries that you can't avoid.

    So getting rid of nested tagged types will ease nothing.

    The problem is tagged types not declared at the library level. Relocatable libraries are still library level (they have their own global address space). So there wouldn't be the same problems as with a tagged type declared in a subprogram (which requires carrying around a static link or display, and multi-part tags).

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Jere on Sun Oct 3 10:52:35 2021
    On 2021-10-03 01:19, Jere wrote:
    I was thinking more along the lines of adding a classwide operation on the root storage pool type.

    You don't need that. As I proposed in another response, one adds a new
    primitive operation:

    procedure Allocate_Segment
      ( Pool            : in out Root_Storage_Pool;
        Storage_Address : out Address;
        Size            : Storage_Count;
        Alignment       : Storage_Count;
        Head            : Address;   -- The first block address
        Sequence_No     : Positive
      );

    Note, it is not abstract. The implementation dispatches to Allocate:

    procedure Allocate_Segment
      ( Pool            : in out Root_Storage_Pool;
        Storage_Address : out Address;
        Size            : Storage_Count;
        Alignment       : Storage_Count;
        Head            : Address;   -- The first block address
        Sequence_No     : Positive
      ) is
    begin
       Root_Storage_Pool'Class (Pool).Allocate
         ( Storage_Address,
           Size,
           Alignment
         );
    end Allocate_Segment;

    Deallocate_Segment is declared similarly.

    The object allocation protocol:

    Take mutex
    Allocate (...); -- The head, passed down in further calls
    Allocate_Segment (..., 1); -- First auxiliary block
    ...
    Allocate_Segment (..., N); -- Last auxiliary block
    Release mutex

    Deallocation protocol:

    Take mutex
    Deallocate_Segment (..., N); -- Last auxiliary block
    ...
    Deallocate_Segment (..., 1); -- First auxiliary block
    Deallocate (...); -- The head
    Release mutex

    That is 100% backward compatible.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Sun Oct 3 10:40:05 2021
    On 2021-10-03 06:33, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj9blb$1srp$1@gioia.aioe.org...
    On 2021-10-02 11:06, Randy Brukardt wrote:
    ...
    That's again not safe in any sense. You shouldn't need to worry
    about whether some abstraction that you use uses finalization,
    especially as you can't know if someone adds it later.

    Why compiler assisted bookkeeping is safe for subpools, but unsafe
    as a stand-alone mechanism?

    There is no such stand-alone mechanism, and there cannot be one

    Huh, collections are stand-alone and exist already. Just make them user-accessible and maintainable. The same goes for the allocators. What is the problem with adding a generic package

    generic
       type User_Type (<>) is private;
       type User_Pool is abstract new Root_Storage_Pool with private;
    package Generic_Parametrized_Pool is
       procedure User_Allocate
         (Pool            : in out User_Pool;
          Storage_Address : out Address;
          Size            : Storage_Count;
          Alignment       : Storage_Count;
          Data            : in out User_Type) is abstract;
    ...

    To handle "subpool" kludges:

    P := new (Parameter) T;

    Not that Generic_Parametrized_Pool would be more usable than subpools.
    The problem with these is lack of Unchecked_Deallocation.

    Sure, but again, there is a paramount use case that requires
    dynamic elaboration of tagged types, i.e. relocatable
    libraries. You cannot ban them.

    I suppose, but you certainly don't have to use them.

    I must. It is impossible to maintain production grade software without
    its components linked as relocatable libraries.

    That sort of
    thing is nonsense that simply makes programs more fragile than they
    have to be. I just had a problem with Debian where some older
    programs compiled with GNAT refused to run because an update had
    invalidated some library. Had to dig out the source code and
    recompile.

    Yes, but static monolithic linking is even more fragile. Typically a
    customer orders software off the shelf. It means that he says: I need,
    e.g., an HTTP client, a ModBus master, CANopen, etc. It is simply impossible to re-link everything for each customer and run integration tests. So the
    software is organized as a set of plug-in relocatable libraries, each of
    them maintained, versioned and tested separately. You cannot turn the
    clock back 20 years.

    ...
    you cannot forbid tagged extensions declared in a relocatable
    library.

    Of course you can.

    How? Ada does not determine the way you link an executable. If I put a
    package in a library it is there. If the package derives from a tagged type

    The only thing you need to be compatible with is a C interface, which
    is the only thing you need to interface to existing libraries that
    you can't avoid.

    That would kill most of Ada libraries.

    So getting rid of nesting tagged types will ease nothing.

    The problem is tagged types not declared at the library level.
    Relocatable libraries are still library level (they have their own
    global address space).

    When the library is loaded dynamically, there is no way to know in the executable the tag of the extension or prepare an extension of the
    dispatching table. I think it is far worse than a nested declaration,
    where you at least have some information.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Wed Oct 13 20:26:09 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sjbr0j$am3$1@gioia.aioe.org...
    On 2021-10-03 01:19, Jere wrote:
    ...
    That is 100% backward compatible.

    Not quite, as it would have problems if someone had declared a pool with similar Allocate_Segment/Deallocate_Segment routines. Admittedly, a fairly unlikely occurrence.

    A secondary problem is that the mutex currently lives inside the pool; you would have to expose some interface for that as well. (A set-up where the
    mutex for allocations is global over the entire system is not going to fly.)

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Wed Oct 13 20:21:42 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sjbq96$3cl$1@gioia.aioe.org...
    On 2021-10-03 06:33, Randy Brukardt wrote:
    ...
    Yes, but static monolithic linking is even more fragile. Typically a
    customer orders software off the shelf. It means that he says I need, e.g. HTTP client, ModBus master, CANOpen etc. It is simply impossible to
    re-link everything for each customer and run integration tests.

    ??? When you are dynamically loading stuff, you simply are assuming
    everything is OK. (Which is usually nonsense, but for the sake of argument, assume that it is OK to do.) When you statically link, you surely can make
    the same assumption. It doesn't make sense to say you have to run
    integration tests when you statically link and not do the same when you dynamically load stuff.

    Of course, when you statically link Ada code, you can include that code in
    your static analysis (including, of course, the various guarantees that the
    Ada language and compiler bring). When you dynamically load, you can have
    none of that.

    So the software is organized as a set of plug-in relocatable libraries,
    each of them maintained, versioned and tested separately. You cannot turn clock 20 years back.

    And you still have to do integration testing when using them together -- or
    you could have done the same at the Ada source code level (that is, version, maintain, and test separately) and still have the advantages of Ada
    checking.

    ...
    Of course you can.

    How? Ada does not determine the way you link an executable. If I put a package in a library it is there. If the package derives from a tagged
    type

    This I don't understand at all. A dynamically loaded library necessarily has
    a C interface (if it is generally useful, if not, it might as well be maintained as Ada source code, there's no advantage to dynamic linking in
    that case and lots of disadvantages), and that can't export a tagged type.

    In any case, a tagged type extension is a compile-time thing -- the compiler has to know all of the details of the type.

    The only thing you need to be compatible with is a C interface, which
    is the only thing you need to interface to existing libraries that
    you can't avoid.

    That would kill most of Ada libraries.

    There's no use to an Ada dynamic library -- if it's only for your organization's use, static linking is way better. And if it is for
    everyone's use, it has to have a C interface, thus no tagged types.

    So getting rid of nesting tagged types will ease nothing.

    The problem is tagged types not declared at the library level.
    Relocatable libraries are still library level (they have their own global
    address space).

    When the library is loaded dynamically, there is no way to know in the executable the tag of the extension or prepare an extension of the dispatching table. I think it is far worse than a nested declaration, where
    you at least have some information.

    Ignoring the fact that this is a useless construct, it is not at all hard to
    do, because you have to know that the tag and subprograms are declared in
    the dynamically loaded thing. Thus, one has to use a wrapper to call them indirectly, but that's easy to do when everything is library level. It's essentially the same as shared generics, which Janus/Ada has been doing for decades -- including tagged type derivation.

    The problem comes about when you have things whose lifetime is limited and
    need to have a static link or display to access them. Managing that is a nightmare, no matter how you try to do it.

    Randy.

  • From philip.munts@gmail.com@21:1/5 to Randy Brukardt on Wed Oct 13 20:12:16 2021
    On Wednesday, October 13, 2021 at 6:21:45 PM UTC-7, Randy Brukardt wrote:

    There's no use to an Ada dynamic library -- if it's only for your organization's use, static linking is way better. And if it is for everyone's use, it has to have a C interface, thus no tagged types.

    A few months ago a customer requested Python for Windows support for a piece of hardware I sold him. The shortest path, which proved to be surprisingly elegant and very easy to implement, was to create a Windows .dll for him with a GNAT library project.
    I just wrote a few new Ada subprograms to encapsulate my existing (and substantial) Ada support code. Those wrapper subprograms do indeed present a C interface using "PRAGMA Export(Convention => C..."
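    A minimal version of such a wrapper might look like this; Device_Shim, Device_Open and device_open are invented names for illustration, and the body would call into the real, existing Ada support code:

```ada
--  Hypothetical thin wrapper of the kind described: an Ada package,
--  built into a DLL with a GNAT library project, exporting a
--  C-convention entry point usable from Python (e.g. via ctypes).
with Interfaces.C;

package Device_Shim is

   --  Reports success/failure in C style: 0 on success, -1 on error.
   function Device_Open
     (Channel : Interfaces.C.int) return Interfaces.C.int;
   pragma Export (Convention    => C,
                  Entity        => Device_Open,
                  External_Name => "device_open");

end Device_Shim;

package body Device_Shim is

   function Device_Open
     (Channel : Interfaces.C.int) return Interfaces.C.int is
   begin
      --  The real wrapper would call the existing Ada support code here.
      return (if Channel >= 0 then 0 else -1);
   end Device_Open;

end Device_Shim;
```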

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Thu Oct 14 09:31:11 2021
    On 2021-10-14 03:21, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sjbq96$3cl$1@gioia.aioe.org...
    On 2021-10-03 06:33, Randy Brukardt wrote:
    ...
    Yes, but static monolithic linking is even more fragile. Typically a
    customer orders software off the shelf. It means that he says I need, e.g.
    HTTP client, ModBus master, CANopen etc. It is simply impossible to
    re-link everything for each customer and run integration tests.

    ??? When you are dynamically loading stuff, you simply are assuming everything is OK.

    A relocatable DLL is tested with a test application.

    (Which is usually nonsense, but for the sake of argument,
    assume that it is OK to do.) When you statically link, you surely can make the same assumption.

    A static library is tested the same way, true, but integration of a static
    library is different and testing that is not possible without developing
    some massive tool-chain, like Linux distributions had in early days.

    Of course, when you statically link Ada code, you can include that code in your static analysis (including, of course, the various guarantees that the Ada language and compiler bring). When you dynamically load, you can have none of that.

    Yes, but maintainability trumps everything.

    So the software is organized as a set of plug-in relocatable libraries,
    each of them maintained, versioned and tested separately. You cannot turn
    clock 20 years back.

    And you still have to do integration testing when using them together -- or you could have done the same at the Ada source code level (that is, version, maintain, and test separately) and still have the advantages of Ada
    checking.

    Theoretically yes, in practice it is a combinatorial explosion. Dynamic libraries flatten that. Yes, this requires normalization of plug-in
    interfaces etc.

    ...
    Of course you can.

    How? Ada does not determine the way you link an executable. If I put a
    package in a library it is there. If the package derives from a tagged
    type

    This I don't understand at all. A dynamically loaded library necessarily has a C interface (if it is generally useful, if not, it might as well be maintained as Ada source code, there's no advantage to dynamic linking in that case and lots of disadvantages), and that can't export a tagged type.

    No C interfaces. Apart from maintenance, other issues are licensing and security. A typical case is a component that is licensed in a different
    way or may not be shipped to other customers at all. And you can have alternative or mutually incompatible components.

    In any case, a tagged type extension is a compile-time thing -- the compiler has to know all of the details of the type.

    The only thing you need to be compatible with is a C interface, which
    is the only thing you need to interface to existing libraries that
    you can't avoid.

    That would kill most of Ada libraries.

    There's no use to an Ada dynamic library -- if it's only for your organization's use, static linking is way better.

    You compare static vs. import library. The case I am talking about is
    static vs. late dynamic loading, i.e. dlopen/dlsym stuff. And, yes, we
    do dlsym on entries with Ada calling conventions. No C stuff.

    And if it is for
    everyone's use, it has to have a C interface, thus no tagged types.

    We have close to a hundred dynamically linked Ada libraries. Only one
    of them has a C interface, not surprisingly, with the sole functionality
    of providing a C API. But even that one library has tagged extensions
    inside it. The standard Ada library is full of tagged types. Your C-interfaced library is free to derive from any of them. You cannot
    prevent that.

    So getting rid of nesting tagged types will ease nothing.

    The problem is tagged types not declared at the library level.
    Relocatable libraries are still library level (they have their own global
    address space).

    When the library is loaded dynamically, there is no way to know in the
    executable the tag of the extension or prepare an extension of the
    dispatching table. I think it is far worse than a nested declaration, where you at least have some information.

    Ignoring the fact that this is a useless construct, it is not at all hard to do, because you have to know that the tag and subprograms are declared in
    the dynamically loaded thing.

    I do not see how this helps with, say, Ada.Tags.Expanded_Name getting a
    tag from the library as an argument.

    Thus, one has to use a wrapper to call them
    indirectly, but that's easy to do when everything is library level. It's essentially the same as shared generics, which Janus/Ada has been doing for decades -- including tagged type derivation.

    That is OK, but you still have to expand dispatching tables upon loading
    the library and shrink them upon unloading (though the latter is not
    supported, I guess).

    The problem comes about when you have things whose lifetime is limited and need to have a static link or display to access them. Managing that is a nightmare, no matter how you try to do it.

    The lifetime of library objects in a dynamically loaded library is
    limited by loading/unloading of the library.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Thu Oct 14 19:36:42 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sk8mbv$15ca$1@gioia.aioe.org...
    On 2021-10-14 03:21, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:sjbq96$3cl$1@gioia.aioe.org...
    On 2021-10-03 06:33, Randy Brukardt wrote:
    ...
    Yes, but static monolithic linking is even more fragile. Typically a
    customer orders software off the shelf. It means that he says I need,
    e.g.
    HTTP client, ModBus master, CANOpen etc. It is simply impossible to
    re-link everything for each customer and run integration tests.

    ??? When you are dynamically loading stuff, you simply are assuming
    everything is OK.

    A relocatable DLL is tested with a test application.

    Testing cannot ensure that a contract hasn't been violated, especially the implicit ones that get created by the runtime behavior of a library. At
    best, you can test a few percent of the ways a library can be used (and
    people are good at finding unanticipated ways to use a library).

    (Which is usually nonsense, but for the sake of argument,
    assume that it is OK to do.) When you statically link, you surely can
    make
    the same assumption.

    A static library is tested same way, true, but integration of a static library is different and testing that is not possible without developing
    some massive tool-chain, like Linux distributions had in early days.

    ??? Your "test application" is just a way of running unit tests against a library. You can surely do exactly the same testing with a statically-linked library; it's hard to imagine how the way a library is packaged would make any difference.

    The problem with dynamically loaded libraries is exactly that they change
    out of sync with the rest of the application, and thus tend to break that application when some behavior of the original library is changed. It's
    quite possible that the behavior in question should never have been depended upon, but contracts aren't strong enough to really describe dynamic behavior (especially use cases and timing). At least with a statically linked
    library, you know it won't change without at least rerunning your acceptance tests.

    Of course, when you statically link Ada code, you can include that code
    in
    your static analysis (including, of course, the various guarantees that
    the
    Ada language and compiler bring). When you dynamically load, you can have
    none of that.

    Yes, but maintainability trumps everything.

    I agree with the sentiment, but the only way to get any sort of
    maintainability is with strong contracts and lots of static analysis.
    Otherwise, subtle changes in a library will break the users and there will
    be no way to find where the dependency is. Nothing I've ever worked on has
    ever been close to maintainable because there is so much that Ada cannot describe (even though Ada itself is certainly a help in this area). You just have to re-test to make sure that no major problems have been introduced (there's a reason that compiler writers rerun a huge test suite every day).

    So the software is organized as a set of plug-in relocatable libraries,
    each of them maintained, versioned and tested separately. You cannot
    turn
    clock 20 years back.

    And you still have to do integration testing when using them together --
    or
    you could have done the same at the Ada source code level (that is,
    version,
    maintain, and test separately) and still have the advantages of Ada
    checking.

    Theoretically yes, in practice it is a combinatorial explosion.

    Only if you don't use unit tests. But then how you can test a dynamic
    library escapes me. (I've never used unit tests with Janus/Ada because it is too hard to set up the initial conditions for a meaningful test. The easiest way to do that is to compile something, but of course you no longer can do
    unit tests as you have the entire rest of the system dragged along.)

    Dynamic libraries flatten that. Yes, this requires normalization of
    plug-in interfaces etc.

    As noted above, I don't see how. If testing a dynamic library is possible, surely running the same tests against a static library would give the same results (and assurances).

    ...
    There's no use to an Ada dynamic library -- if it's only for your
    organization's use, static linking is way better.

    You compare static vs. import library. The case I am talking about is
    static vs. late dynamic loading, i.e. dlopen/dlsym stuff. And, yes, we do dlsym on entries with Ada calling conventions. No C stuff.

    That sort of stuff is just plain evil. :-)

    I don't see any way that such loading could work with Ada semantics; there
    is an assumption that all of your ancestors exist before you can do
    anything. The elaboration checks were intended to check that.

    ...

    ...
    Ignoring the fact that this is a useless construct, it is not at all hard to
    do, because you have to know that the tag and subprograms are declared in
    the dynamically loaded thing.

    I do not see how this helps with, say, Ada.Tags.Expanded_Name getting a
    tag from the library as an argument.

    Ada.Tags can be implemented dynamically; it's a lot easier to do so with
    nested tagged types whose tags go away. Essentially, one registers tags when they are declared, and deregisters them when they go away. That fits very
    well with dynamically loaded libraries.

    Thus, one has to use a wrapper to call them
    indirectly, but that's easy to do when everything is library level. It's
    essentially the same as shared generics, which Janus/Ada has been doing
    for
    decades -- including tagged type derivation.

    That is OK, but you still have to expand dispatching tables upon loading
    the library and shrink them upon unloading (though the latter is not supported, I guess).

    ??? The dispatching tables are defined statically by the compiler, and never change. What I'd do for dynamically loaded libraries is use a wrapper that indirectly calls the dynamically loaded libraries' subprograms. So loading
    the library (actually, declaring the extension) simply has to set up an
    array of pointers to the dynamically loaded subprograms. (You can't call
    them statically because you don't know where they'll be.) The dispatch
    tables never change.
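    A sketch of that scheme, with all names invented: the extension is compiled statically, so its dispatch table is fixed, and its primitive merely jumps through a pointer the loader fills in after the plug-in library is loaded.

```ada
--  Sketch only.  Parent, Ext, Foo and Loaded_Foo are illustrative;
--  the point is that the dispatch table for Ext is static, while the
--  implementation behind Foo is resolved at load time.
with System;

package Plugin_Wrapper is

   type Parent is tagged null record;
   procedure Foo (X : in out Parent) is null;  -- default behaviour

   --  Statically known extension whose primitive forwards into the
   --  dynamically loaded library.
   type Ext is new Parent with null record;
   overriding procedure Foo (X : in out Ext);

   type Impl_Proc is access procedure (X : System.Address)
     with Convention => C;

   --  Set by the loader (dlsym/GetProcAddress); one slot per
   --  dynamically provided primitive.
   Loaded_Foo : Impl_Proc := null;

end Plugin_Wrapper;

package body Plugin_Wrapper is

   overriding procedure Foo (X : in out Ext) is
   begin
      if Loaded_Foo /= null then
         Loaded_Foo (X'Address);  -- indirect call into the library
      end if;
   end Foo;

end Plugin_Wrapper;
```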

    The problem comes about when you have things whose lifetime is limited
    and
    need to have a static link or display to access them. Managing that is a
    nightmare, no matter how you try to do it.

    The lifetime of library objects in a dynamically loaded library is limited
    by loading/unloading of the library.

    They're still treated as library-level, and they can only be loaded before anything that is going to use them. Otherwise, the Ada elaboration model is hosed, and that is fundamental to compiling Ada code.

    You could of course write an implementation that doesn't care about safe
    code, and let people do whatever that they like even though it is nonsense
    from an Ada perspective. Such code is simply user-beware, and has no
    guarantee of working in the future (after a compiler change).

    I won't do that, there are some principles that I won't compromise to get customers.

    Randy.

  • From Stephen Leake@21:1/5 to Randy Brukardt on Fri Oct 15 01:08:54 2021
    "Randy Brukardt" <randy@rrsoftware.com> writes:


    That is OK, but you still have to expand dispatching tables upon loading
    the library and shrink them upon unloading (though the latter is not
    supported, I guess).

    ??? The dispatching tables are defined statically by the compiler, and never change.

    It would be nice if different variants of a dynamically loaded library
    could introduce different derived types; that would support a "plugin"
    model nicely.

    For example, suppose an editor defines a library interface for computing
    indent for various languages. Then one variant could provide Ada,
    another Pascal, etc. Each could be a derived type.
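    In Ada terms the plug-in interface described here might be sketched as follows; Indent_Engine, Indenter and Ada_Indenter are invented names, and the trivial body only illustrates the shape:

```ada
--  The editor is compiled against this abstract type only; each
--  loadable library variant derives its own type and registers an
--  instance.
package Indent_Engine is

   type Indenter is abstract tagged limited null record;

   --  Returns the indentation, in columns, for the given source line.
   function Indent
     (Self : Indenter;
      Line : String) return Natural is abstract;

end Indent_Engine;

--  Inside the Ada-language variant of the library:
with Indent_Engine;

package Ada_Plugin is
   type Ada_Indenter is new Indent_Engine.Indenter with null record;
   overriding function Indent
     (Self : Ada_Indenter;
      Line : String) return Natural;
end Ada_Plugin;

package body Ada_Plugin is
   overriding function Indent
     (Self : Ada_Indenter;
      Line : String) return Natural
   is
      pragma Unreferenced (Self, Line);
   begin
      return 3;  -- trivial placeholder rule
   end Indent;
end Ada_Plugin;
```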

    I think you are saying this is simply not possible with Ada tagged types.

    --
    -- Stephe

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Fri Oct 15 10:15:30 2021
    On 2021-10-15 02:36, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sk8mbv$15ca$1@gioia.aioe.org...
    On 2021-10-14 03:21, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:sjbq96$3cl$1@gioia.aioe.org...
    On 2021-10-03 06:33, Randy Brukardt wrote:
    ...
    Yes, but static monolithic linking is even more fragile. Typically a
    customer orders software off the shelf. It means that he says I need,
    e.g.
    HTTP client, ModBus master, CANOpen etc. It is simply impossible to
    re-link everything for each customer and run integration tests.

    ??? When you are dynamically loading stuff, you simply are assuming
    everything is OK.

    A relocatable DLL is tested with a test application.

    Testing cannot ensure that a contract hasn't been violated, especially the implicit ones that get created by the runtime behavior of a library. At
    best, you can test a few percent of the ways a library can be used (and
    people are good at finding unanticipated ways to use a library).

    Yes, a high-integrity system would likely have a monolithic design, but
    this too is changing because the size of systems keeps on growing.

    (Which is usually nonsense, but for the sake of argument,
    assume that it is OK to do.) When you statically link, you surely can
    make
    the same assumption.

    A static library is tested same way, true, but integration of a static
    library is different and testing that is not possible without developing
    some massive tool-chain, like Linux distributions had in early days.

    ??? Your "test application" is just a way of running unit tests against a library. You can surely do exactly the same testing with a statically-linked library; it's hard to imagine how the way a library is packaged would make any difference.

    It is linking static stuff in alternative combinations that needs to be
    tested. If you use a subset of statically linked components you need to
    change the code that uses them correspondingly, unless you build an
    equivalent of dynamically loaded components without any of their
    advantages.

    As an example, consider switching between GNUTLS and OpenSSL for
    encryption of say MQTT connections.

    Yes, but maintainability trumps everything.

    I agree with the sentiment, but the only way to get any sort of maintenability is with strong contracts and lots of static analysis.

    Yes, contracts are a weak part of dynamically loaded stuff. In our case a component registers itself after its library is loaded by providing an
    instance of a tagged object.

    Otherwise, subtle changes in a library will break the users and there will
    be no way to find where the dependency is. Nothing I've ever worked on has ever been close to maintainable because there is so much that Ada cannot describe (even though Ada itself is certainly a help in this area). You just have to re-test to make sure that no major problems have been introduced (there's a reason that compiler writer's rerun a huge test suite every day).

    Yes, but it is not economically viable anymore. Nobody would pay for that.

    Only if you don't use unit tests. But then how you can test a dynamic
    library escapes me. (I've never used unit tests with Janus/Ada because it is too hard to set up the initial conditions for a meaningful test. The easiest way to do that is to compile something, but of course you no longer can do unit tests as you have the entire rest of the system dragged along.)

    We test Ada packages statically linked and we have
    semi-unit/semi-integration tests that load the library first. It is not
    a big deal.

    Dynamic libraries flatten that. Yes, this requires normalization of
    plug-in interfaces etc.

    As noted above, I don't see how. If testing a dynamic library is possible, surely running the same tests against a static library would give the same results (and assurances).

    Only if you create some equivalent of a "static" plug-in with all the
    disadvantages of a proper plug-in and none of the advantages.

    There's no use to an Ada dynamic library -- if it's only for your
    organization's use, static linking is way better.

    You compare static vs. import library. The case I am talking about is
    static vs. late dynamic loading, i.e. dlopen/dlsym stuff. And, yes, we do
    dlsym on entries with Ada calling conventions. No C stuff.

    That sort of stuff is just plain evil. :-)

    Yes! (:-))

    I don't see any way that such loading could work with Ada semantics; there
    is an assumption that all of your ancestors exist before you can do
    anything. The elaboration checks were intended to check that.

    We have core libraries which are import libraries for the plug-ins.
    When a plug-in is loaded, the core libraries are elaborated unless
    already loaded; the plug-in library itself is not elaborated, because
    automatic elaboration would deadlock under Windows. Then a dedicated
    entry point is called in the plug-in library. The first thing it does is
    call the plug-in elaboration code. GNAT generates an
    <library-name>init entry for that. After this the plug-in registers
    itself by providing a tagged object, whose primitive operations are
    basically the library's true interface.

    I know it sounds horrific, but it works pretty well.
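    On Linux, that loading sequence might be sketched like this; the entry names "plugininit" and "register_plugin" are illustrative (GNAT derives the real elaboration symbol from the library name), and error handling is omitted:

```ada
--  Sketch only: dlopen the plug-in, run its elaboration entry, then
--  call the dedicated registration entry.  Both entries are assumed
--  to be parameterless C-convention procedures.
with System;
with Interfaces.C.Strings;
with Ada.Unchecked_Conversion;

procedure Load_Plugin (Path : String) is
   use Interfaces.C;

   RTLD_NOW : constant int := 2;

   function dlopen
     (File : Strings.chars_ptr; Mode : int) return System.Address
     with Import, Convention => C, External_Name => "dlopen";

   function dlsym
     (Handle : System.Address;
      Name   : Strings.chars_ptr) return System.Address
     with Import, Convention => C, External_Name => "dlsym";

   type Entry_Proc is access procedure with Convention => C;
   function To_Proc is
     new Ada.Unchecked_Conversion (System.Address, Entry_Proc);

   C_Path : Strings.chars_ptr := Strings.New_String (Path);
   Handle : constant System.Address := dlopen (C_Path, RTLD_NOW);

   Name : Strings.chars_ptr := Strings.New_String ("plugininit");
begin
   --  1. Run the plug-in's elaboration code (illustrative symbol name).
   To_Proc (dlsym (Handle, Name)).all;
   Strings.Free (Name);

   --  2. Call the dedicated entry point; it registers a tagged object
   --     whose primitive operations form the library's real interface.
   Name := Strings.New_String ("register_plugin");
   To_Proc (dlsym (Handle, Name)).all;
   Strings.Free (Name);
   Strings.Free (C_Path);
end Load_Plugin;
```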

    ??? The dispatching tables are defined statically by the compiler, and never change. What I'd do for dynamically loaded libraries is use a wrapper that indirectly calls the dynamically loaded libraries' subprograms. So loading the library (actually, declaring the extension) simply has to set up an
    array of pointers to the dynamically loaded subprograms. (You can't call
    them statically because you don't know where they'll be.) The dispatch
    tables never change.

    And how do you dispatch? Consider the case:

    The core library:

    package A is
       type T is tagged ...;
       procedure Foo (X : in out T);

       procedure Trill_Me (X : in out T'Class);
    end A;

    package body A is
       procedure Trill_Me (X : in out T'Class) is
       begin
          X.Foo; -- Dispatches to Foo overridden in a loadable library
       end Trill_Me;
    end A;

    Inside the loadable library:

    type S is new T with ...;
    overriding procedure Foo (X : in out S);
    ...
    X : S;
    ...
    Trill_Me (X);

    Do you keep a pointer to the dispatching table inside the object, like
    C++ does? Because I had a more general model in mind, where dispatching
    tables are attached to the primitive operations rather than to objects.

    The problem comes about when you have things whose lifetime is limited
    and
    need to have a static link or display to access them. Managing that is a >>> nightmare, no matter how you try to do it.

    The lifetime of library objects in a dynamically loaded library is limited >> by loading/unloading of the library.

    They're still treated as library-level,

    Right, and this is the problem, because semantically anything inside a dynamically loaded library is not just nested; worse, it is more like new/Unchecked_Deallocation, but with things like types etc.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Dmitry A. Kazakov@21:1/5 to Stephen Leake on Fri Oct 15 10:18:30 2021
    On 2021-10-15 10:08, Stephen Leake wrote:
    "Randy Brukardt" <randy@rrsoftware.com> writes:

    That is OK, but you still have to expand dispatching tables upon loading
    the library and shrink them upon unloading (though the latter is not
    supported, I guess).

    ??? The dispatching tables are defined statically by the compiler, and never
    change.

    It would be nice if different variants of a dynamically loaded library
    could introduce different derived types; that would support a "plugin"
    model nicely.

    For example, suppose an editor defines a library interface for computing indent for various languages. Then one variant could provide Ada,
    another Pascal, etc. Each could be a derived type.

    I think you are saying this is simply not possible with Ada tagged types.

    This is exactly what we do. At least with GNAT it works just fine.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Randy Brukardt@21:1/5 to Stephen Leake on Fri Oct 15 17:22:43 2021
    "Stephen Leake" <stephen_leake@stephe-leake.org> wrote in message news:86v91ylnft.fsf@stephe-leake.org...
    "Randy Brukardt" <randy@rrsoftware.com> writes:


    That is OK, but you still have to expand dispatching tables upon loading >>> the library and shrink them upon unloading (though the latter is not
    supported, I guess).

    ??? The dispatching tables are defined statically by the compiler, and
    never
    change.

    It would be nice if different variants of a dynamically loaded library
    could introduce different derived types; that would support a "plugin"
    model nicely.

    For example, suppose an editor defines a library interface for computing indent for various languages. Then one variant could provide Ada,
    another Pascal, etc. Each could be a derived type.

    I think you are saying this is simply not possible with Ada tagged types.

    The case Dmitry was talking about (or at least that I thought he was talking about) is deriving a new type from a dynamically loaded type. That can be implemented, but you have to know the type you are deriving from in the Ada model (classwide parents are illegal in Ada). So it doesn't buy a huge
    amount.

    You are talking about exposing different implementations of the same Ada
    type in a dynamically loaded library, and that of course works fine. You
    can't have truly different types in the Ada model, since you are sharing the
    specification (essentially, only the body is dynamically loaded; even if the
    code for the spec is included in the dynamically loaded unit, the static
    properties have to be the same).

    Randy.

  • From Randy Brukardt@21:1/5 to Dmitry A. Kazakov on Fri Oct 15 17:44:44 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:skbdb5$u50$1@gioia.aioe.org...
    On 2021-10-15 02:36, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:sk8mbv$15ca$1@gioia.aioe.org...
    On 2021-10-14 03:21, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
    news:sjbq96$3cl$1@gioia.aioe.org...
    On 2021-10-03 06:33, Randy Brukardt wrote:
    ...
    A static library is tested same way, true, but integration of a static
    library is different and testing that is not possible without developing
    some massive tool-chain, like Linux distributions had in early days.

    ??? Your "test application" is just a way of running unit tests against a
    library. You can surely do exactly the same testing with a statically-linked
    library; it's hard to imagine how the way a library is packaged would make
    any difference.

    It is linking static stuff in alternative configurations that needs to be
    tested. If you use a subset of statically linked components you need to
    change the code that uses them correspondingly, unless you implement an
    equivalent of dynamically loaded components without any of their advantages.

    As an example, consider switching between GNUTLS and OpenSSL for
    encryption of say MQTT connections.

    I guess I don't follow. The connections between units is static (in Ada, and any dynamic loading in Ada has to follow the same model), so any unit
    testing on a unit tests all of its dependencies as well. If you don't use a unit at all, it isn't included in the closure, and whether or not it passes
    any tests is irrelevant.

    In the case you describe, you'd have a binding that abstracts the two
    underlying libraries, and you'd unit test that. Assuming it passes with both
    implementations, it shouldn't matter which is used in a particular program.
    How the foreign language code is implemented (static or dynamic binding,
    programming language, etc.) is irrelevant to the Ada program. So again I
    don't see the problem.
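    The testing strategy described here can be sketched in C: one abstract
    interface, two interchangeable backends, and a single unit test that runs
    against both. All names are hypothetical, and the backends are toy
    stand-ins rather than real GNUTLS/OpenSSL calls:

```c
#include <assert.h>
#include <string.h>

/* The abstract interface: a table of operations the application codes
 * against, independent of which backend is actually linked in. */
typedef struct {
    const char *name;
    int (*encrypt)(const unsigned char *in, unsigned char *out, int len);
} tls_backend;

/* Backend 1: stand-in for, say, GNUTLS (toy XOR "cipher"). */
static int backend_a_encrypt(const unsigned char *in, unsigned char *out, int len) {
    for (int i = 0; i < len; i++) out[i] = in[i] ^ 0x5A;
    return len;
}

/* Backend 2: stand-in for, say, OpenSSL; different code, same contract. */
static int backend_b_encrypt(const unsigned char *in, unsigned char *out, int len) {
    for (int i = len - 1; i >= 0; i--) out[i] = in[i] ^ 0x5A;
    return len;
}

static const tls_backend backends[] = {
    { "gnutls-stub",  backend_a_encrypt },
    { "openssl-stub", backend_b_encrypt },
};

/* The unit test exercises the *interface*; running it over every backend is
 * what makes switching implementations safe regardless of which is linked. */
int test_backend(const tls_backend *b) {
    const unsigned char msg[4] = { 'a', 'b', 'c', 'd' };
    unsigned char enc[4], dec[4];
    if (b->encrypt(msg, enc, 4) != 4) return 0;
    if (b->encrypt(enc, dec, 4) != 4) return 0;  /* XOR is its own inverse */
    return memcmp(msg, dec, 4) == 0;
}
```

    The point of the design is that the same test body is applied to every
    entry in `backends`, so which implementation a particular build links
    cannot affect the verified behavior.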

    ....
    Otherwise, subtle changes in a library will break the users and there will
    be no way to find where the dependency is. Nothing I've ever worked on has
    ever been close to maintainable because there is so much that Ada cannot
    describe (even though Ada itself is certainly a help in this area). You just
    have to re-test to make sure that no major problems have been introduced
    (there's a reason that compiler writers rerun a huge test suite every day).

    Yes, but it is not economically viable anymore. Nobody would pay for that.

    Really? They have no choice -- otherwise, your product will fail
    periodically without any way to find out why. Has software gotten so bad
    that no one cares about that? I certainly would never do that to our
    customers (and I hate support anyway, I want to reduce it as much as
    possible).

    Only if you don't use unit tests. But then how you can test a dynamic
    library escapes me. (I've never used unit tests with Janus/Ada because it is
    too hard to set up the initial conditions for a meaningful test. The easiest
    way to do that is to compile something, but of course you no longer can do
    unit tests as you have the entire rest of the system dragged along.)

    We test Ada packages statically linked and we have
    semi-unit/semi-integration tests that load the library first. It is not a
    big deal.

    Which sounds like there is no reason to use dynamic linking except to make
    your applications far more fragile. I suppose it doesn't matter as much if
    the libraries are all under your control, but that is rarely the case in the
    real world. (Your example of connection encryption is a good one; changes to
    the underlying stuff tend to break existing programs. Not much can be done
    about it, of course, but those updates don't really fix anything by
    themselves; the using programs tend to need to be repaired.)

    Dynamic libraries flatten that. Yes, this requires normalization of
    plug-in interfaces etc.

    As noted above, I don't see how. If testing a dynamic library is possible,
    surely running the same tests against a static library would give the same
    results (and assurances).

    Only if you create some equivalent of a "static" plug-in with all the
    disadvantages of a proper plug-in and none of the advantages.

    There's no plug-ins in a statically linked system. What would be the point?
    I'm assuming that we're talking about Ada interfaced libraries here (C is a different kettle of fish), so you're talking about switching implementations
    of a single Ada spec. We've done that going back to the beginning of Ada
    time; it's managed by decent build tools and has gotten pretty simple in
    most Ada compilers. So what would a plug-in buy?

    There's no use to an Ada dynamic library -- if it's only for your
    organization's use, static linking is way better.

    You compare static vs. import library. The case I am talking about is
    static vs. late dynamic loading, i.e. dlopen/dlsym stuff. And, yes, we do
    dlsym on entries with Ada calling conventions. No C stuff.

    That sort of stuff is just plain evil. :-)

    Yes! (:-))

    I don't see any way that such loading could work with Ada semantics; there
    is an assumption that all of your ancestors exist before you can do
    anything. The elaboration checks were intended to check that.

    We have core libraries which are import libraries for the plug-in. When a
    plug-in is loaded, the core libraries are elaborated unless already loaded;
    the plug-in library itself is not elaborated, because automatic elaboration
    would deadlock under Windows. Then a dedicated entry point is called in the
    plug-in library. The first thing it does is a call to the plug-in
    elaboration code. GNAT generates an <library-name>init entry for that. After
    this the plug-in registers itself, providing a tagged object whose primitive
    operations are basically the library's true interface.

    I know it sounds horrific, but it works pretty well.
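    The load sequence described above can be sketched in C. To keep the sketch
    self-contained and runnable, the symbol lookup is simulated with a static
    table standing in for a loaded library; real code would use dlopen/dlsym,
    and all names here (pluginit, plugin_entry, "indent-ada") are hypothetical:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for the moving parts. */
static int core_elaborated = 0;          /* core import libraries           */
static int plugin_elaborated = 0;        /* set by the <library-name>init   */
static const char *registered_plugin = NULL;

/* What GNAT would generate as <library-name>init (name hypothetical). */
static void pluginit(void) { plugin_elaborated = 1; }

/* The plug-in's dedicated entry point: run elaboration code first, then
 * register an object whose operations form the library's real interface. */
static void plugin_entry(void) {
    pluginit();                          /* step 1: elaborate the plug-in   */
    registered_plugin = "indent-ada";    /* step 2: register itself         */
}

/* Simulated dlsym: look the entry up in a table instead of a real .so. */
typedef void (*entry_fn)(void);
static entry_fn lookup(const char *sym) {
    if (strcmp(sym, "plugin_entry") == 0) return plugin_entry;
    return NULL;
}

/* The host's load procedure, mirroring the described order of operations. */
int load_plugin(const char *entry_name) {
    if (!core_elaborated) core_elaborated = 1;  /* elaborate core libs once */
    /* The library itself is NOT auto-elaborated here (that is the Windows
     * deadlock mentioned); its entry point is called explicitly instead.   */
    entry_fn entry = lookup(entry_name);
    if (entry == NULL) return -1;
    entry();
    return 0;
}
```

    The essential ordering is the same as in the post: core elaboration, then
    explicit plug-in elaboration, then registration; only the lookup mechanism
    is faked.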

    It does sound horrific, and it doesn't seem to buy much.

    ??? The dispatching tables are defined statically by the compiler, and
    never change. What I'd do for dynamically loaded libraries is use a wrapper
    that indirectly calls the dynamically loaded libraries' subprograms. So
    loading the library (actually, declaring the extension) simply has to set up
    an array of pointers to the dynamically loaded subprograms. (You can't call
    them statically because you don't know where they'll be.) The dispatch
    tables never change.
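    The wrapper scheme sketched above can be illustrated in C: the dispatch
    table itself is frozen at compile time, and its slots are thin wrappers
    that call through an array of pointers filled in at load time. Names are
    hypothetical, and the "dynamically loaded" subprograms are ordinary static
    functions here:

```c
#include <assert.h>
#include <stddef.h>

typedef int (*op_fn)(int);

/* Array of pointers to the dynamically loaded subprograms; populated when
 * the library is loaded. */
static op_fn loaded_ops[2] = { NULL, NULL };

/* Wrappers: these addresses ARE known statically, so the dispatch table
 * never changes; only the indirection target does. */
static int wrap_op0(int x) { return loaded_ops[0](x); }
static int wrap_op1(int x) { return loaded_ops[1](x); }

/* The static dispatch table, as the compiler would lay it out. */
static op_fn const dispatch_table[2] = { wrap_op0, wrap_op1 };

/* Pretend these live in a dynamically loaded library. */
static int dyn_double(int x) { return 2 * x; }
static int dyn_negate(int x) { return -x; }

/* "Loading the library" just fills in the pointer array. */
void load_library(void) {
    loaded_ops[0] = dyn_double;
    loaded_ops[1] = dyn_negate;
}
```

    Calling `dispatch_table[0](21)` after `load_library()` reaches
    `dyn_double` through the wrapper, without the table itself ever changing.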

    And how do you dispatch? Consider the case:

    The core library:

       package A is
          type T is tagged ...;
          procedure Foo (X : in out T);

          procedure Trill_Me (X : in out T'Class);
       end A;

       package body A is
          procedure Trill_Me (X : in out T'Class) is
          begin
             X.Foo; -- Dispatches to Foo overridden in a loadable library
          end Trill_Me;
       end A;

    Inside the loadable library:

       type S is new T with ...;
       overriding procedure Foo (X : in out S);
       ...
       X : S;
       ...
       Trill_Me (X);

    Do you keep a pointer to the dispatching table inside the object, like C++ does? Because I had a more general model in mind, where dispatching tables were attached to the primitive operations rather than objects.

    A tag is a property of a type in Ada, and it includes the dispatch table.
    You could have a model where the dispatch table didn't live in the object
    (but that's not Ada, you have to be able to recover the original tag of the object), but that wouldn't change anything about the structure of the
    tables. I don't see any model that makes sense associated with the
    operations. The whole point of a tagged type is that it is a set of
    operations called in a consistent way, breaking that up makes no sense.
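    The model described here, where the tag is a property of the type, is
    stored in the object, and leads to the dispatch table, can be sketched in
    C with hypothetical names:

```c
#include <assert.h>
#include <string.h>

/* Each object carries its tag; the tag leads to the dispatch table, so a
 * T'Class operation can both dispatch and recover the original tag. */
typedef struct obj obj;
typedef struct {
    const char *name;                /* the tag's external name            */
    void (*foo)(obj *);              /* slot for primitive operation Foo   */
} tag_info;                          /* a tag = pointer to this record     */

struct obj {
    const tag_info *tag;             /* stored in the object, as in Ada    */
    int foo_calls;
};

/* Foo for the parent type T and for the derived type S (the overriding). */
static void t_foo(obj *x) { x->foo_calls += 1; }
static void s_foo(obj *x) { x->foo_calls += 100; }

static const tag_info t_tag = { "T", t_foo };
static const tag_info s_tag = { "S", s_foo };

/* Trill_Me (X : in out T'Class): dispatches through the stored tag, so a
 * core-library routine reaches an overriding it has never seen. */
void trill_me(obj *x) { x->tag->foo(x); }
```

    An object built with `s_tag` dispatches to `s_foo` from `trill_me`, and
    its original tag (`"S"`) remains recoverable from the object itself.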

    ....
    The problem comes about when you have things whose lifetime is limited and
    need to have a static link or display to access them. Managing that is a
    nightmare, no matter how you try to do it.

    The lifetime of library objects in a dynamically loaded library is limited
    by loading/unloading of the library.

    They're still treated as library-level,

    Right, and this is the problem, because semantically anything inside a dynamically loaded library is not just nested; worse, it is more like new/Unchecked_Deallocation, but with things like types etc.

    You can't unload a library until all of the things that depend upon it have been unloaded, so from the perspective of a compiler, it acts like library-level. The whole mess about loading/unloading is on the user (which
    is what I meant about "unsafe" yesterday), and if you get it wrong, your program is erroneous and can do any manner of things. It's no more worth it
    for a compiler to worry about bad unloading than it is to worry about
    dangling pointers. At best, those sorts of things have to be handled dynamically (and the compiler doesn't care much about dynamic behavior).

    Randy.

  • From Dmitry A. Kazakov@21:1/5 to Randy Brukardt on Sat Oct 16 11:00:05 2021
    On 2021-10-16 00:44, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:skbdb5$u50$1@gioia.aioe.org...
    On 2021-10-15 02:36, Randy Brukardt wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message

    In the case you describe, you'd have a binding that abstracts the
    two underlying libraries, and you'd unit test that. Assuming it
    passes with both implementations, it shouldn't matter which is used
    in a particular program.

    You need to test switching between implementations. In the case of
    static linking that part is not even code and thus non-testable. You
    must test each concrete assembly of components each time, and each
    assembly will have alternating code. Why do you think people keep on
    asking for an Ada preprocessor in c.l.a?

    Yes, but it is not economically viable anymore. Nobody would pay
    for that.

    Has software gotten so
    bad that no one cares about that?

    Yes, and worse.

    I certainly would never do that to
    our customers (and I hate support anyway, I want to reduce it as
    much as possible).

    Compilers are not wares anymore. We live in a post-market era with
    neo-feudal economic relationships.

    There's no plug-ins in a statically linked system. What would be the
    point? I'm assuming that we're talking about Ada interfaced
    libraries here (C is a different kettle of fish), so you're talking
    about switching implementations of a single Ada spec. We've done that
    going back to the beginning of Ada time; it's managed by decent build
    tools and has gotten pretty simple in most Ada compilers. So what
    would a plug-in buy?

    Yes, that is about build tools to develop and maintain, because
    dependency handling and adjusting the code that invokes optional
    components is specific to the problem domain. Such stuff does exist,
    e.g. this is how VxWorks or Yocto Linux images are configured. No way
    regular software could go this way.

    I don't see any way that such loading could work with Ada
    semantics; there is an assumption that all of your ancestors
    exist before you can do anything. The elaboration checks were
    intended to check that.

    We have core libraries which are import libraries for the
    plug-in. When a plug-in is loaded, the core libraries are
    elaborated unless already loaded; the plug-in library itself is not
    elaborated, because automatic elaboration would deadlock under
    Windows. Then a dedicated entry point is called in the plug-in
    library. The first thing it does is a call to the plug-in
    elaboration code. GNAT generates an <library-name>init entry for
    that. After this the plug-in registers itself, providing a tagged
    object whose primitive operations are basically the library's true
    interface.

    I know it sounds horrific, but it works pretty well.

    It does sound horrific, and it doesn't seem to buy much.

    We started with a monolithic solution and were forced to redesign it
    when it became unmaintainable. Apart from the fact that it bluntly
    refused to fit in 256K RAM of a target platform ...

    A tag is a property of a type in Ada, and it includes the dispatch
    table.

    No, it is just one possible implementation. It has the disadvantage that
    you must search a global map tag->vptr and keep the whole table for each
    tagged type. I suppose one could exchange the map tag->vptr for ptr->tag.
    Then dispatching will be cheaper and X'Tag more expensive. In any case you
    must adjust either map when elaborating the library.
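    One reading of the map-based variant sketched above, in C: the object
    carries no tag at all; instead a global object-to-tag map, adjusted when a
    library is elaborated or unloaded, is consulted at dispatch, so the lookup
    cost moves into the map search. A toy linear map and hypothetical names
    stand in for the real machinery:

```c
#include <assert.h>
#include <stddef.h>

typedef struct { int data; } obj;               /* note: no tag inside     */
typedef struct { const char *name; int (*foo)(obj *); } vtable;

static int t_foo(obj *x) { return x->data; }
static int s_foo(obj *x) { return x->data * 10; }
static const vtable t_vt = { "T", t_foo };
static const vtable s_vt = { "S", s_foo };

/* The global ptr->tag map; entries are added at elaboration time and would
 * be removed when a library is unloaded. */
static struct { const obj *ptr; const vtable *vt; } map[8];
static int map_len = 0;

void register_obj(const obj *p, const vtable *vt) {
    map[map_len].ptr = p;
    map[map_len].vt = vt;
    map_len++;
}

/* X'Tag becomes a search rather than a load from the object. */
const vtable *tag_of(const obj *p) {
    for (int i = 0; i < map_len; i++)
        if (map[i].ptr == p) return map[i].vt;
    return NULL;
}

/* Dispatching pays for the map lookup instead of an in-object vptr. */
int dispatch_foo(obj *x) { return tag_of(x)->foo(x); }
```

    Whichever direction the map runs, one of dispatching or X'Tag pays for the
    search, which is the trade-off discussed above.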

    You can't unload a library until all of the things that depend upon
    it have been unloaded, so from the perspective of a compiler, it acts
    like library-level. The whole mess about loading/unloading is on the
    user (which is what I meant about "unsafe" yesterday), and if you get
    it wrong, your program is erroneous and can do any manner of things.
    It's no more worth it for a compiler to worry about bad unloading
    than it is to worry about dangling pointers.

    Still arguing for collections, huh? (:-))

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Simon Wright@21:1/5 to Dmitry A. Kazakov on Sat Oct 16 15:32:08 2021
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Why do you think people keep on asking for Ada preprocessor in c.l.a?

    Certainly not something I've noticed

  • From Dmitry A. Kazakov@21:1/5 to Simon Wright on Sat Oct 16 17:06:00 2021
    On 2021-10-16 16:32, Simon Wright wrote:
    "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

    Why do you think people keep on asking for Ada preprocessor in c.l.a?

    Certainly not something I've noticed

    Comes periodically. People falsely believe that conditional compilation
    could allow static linking for dynamically configured projects.

    --
    Regards,
    Dmitry A. Kazakov
    http://www.dmitry-kazakov.de

  • From Shark8@21:1/5 to Dmitry A. Kazakov on Mon Oct 18 07:23:00 2021
    On Saturday, October 16, 2021 at 9:06:03 AM UTC-6, Dmitry A. Kazakov wrote:
    On 2021-10-16 16:32, Simon Wright wrote:
    "Dmitry A. Kazakov" <mai...> writes:

    Why do you think people keep on asking for Ada preprocessor in c.l.a?

    Certainly not something I've noticed
    Comes periodically. People falsely believe that conditional compilation
    could allow static linking for dynamically configured projects.

    Having worked on C/C++ projects that used a bunch of preprocessor stuff, especially IFDEF/IFNDEF, I have to say that even if it *were* theoretically possible, the resultant mess would be so unmaintainable as to simply not be worth it.
