I was learning about making user-defined storage pools when
I came across an article that made me pause and wonder how
portable storage pools can actually be. In particular, I assumed
that the Size_In_Storage_Elements parameter in the Allocate
operation actually indicated the total number of storage elements
needed.
procedure Allocate
  (Pool                     : in out Root_Storage_Pool;
   Storage_Address          :    out Address;
   Size_In_Storage_Elements : in     Storage_Elements.Storage_Count;
   Alignment                : in     Storage_Elements.Storage_Count)
is abstract;
But after reading the following AdaCore article, my assumption is now
called into question:
https://blog.adacore.com/header-storage-pools
In particular, the blog there advocates separately accounting for
things like unconstrained array First/Last indices or the Prev/Next
pointers used for Controlled objects. Normally I would have assumed
that the Size_In_Storage_Elements parameter in Allocate would account
for that, but the blog clearly shows that it doesn't.
So that seems to mean that to make a storage pool, I have to make it
compiler-specific, or else risk someone creating a type like an
array and my allocation size and address values being off.
Is it intended that portable storage pools can't be written, or am
I missing some Ada functionality that helps me out here? I
scanned through the list of attributes, but none seem to give
any info about where the object's returned address is relative
to the top of the memory actually allocated for the object. I saw
the attribute Max_Size_In_Storage_Elements, but it doesn't seem
guaranteed to include things like the array indices, and it still
doesn't solve the issue of knowing where the returned address
needs to be relative to the top of allocated memory.
I can easily use a generic to ensure that the types I care about
are portably made by the pool, but I can't prevent someone from
using my pool to create other objects that I hadn't accounted for.
Unless there is a way to restrict a pool from allocating objects
of other types?
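For reference, this is roughly the kind of pool I was attempting: a minimal bump allocator over a fixed buffer. All the names here are my own (not from any library), and note that Allocate trusts Size_In_Storage_Elements completely, which is exactly the assumption in question:

```ada
--  Minimal sketch of a user-defined pool: a bump allocator over a
--  fixed buffer.  Deallocate is a no-op because a bump allocator
--  never reuses memory.
with System;                  use System;
with System.Storage_Pools;    use System.Storage_Pools;
with System.Storage_Elements; use System.Storage_Elements;

package Bump_Pools is

   type Bump_Pool (Size : Storage_Count) is
     new Root_Storage_Pool with record
      Buffer : Storage_Array (1 .. Size);
      Next   : Storage_Count := 1;
   end record;

   overriding procedure Allocate
     (Pool                     : in out Bump_Pool;
      Storage_Address          :    out Address;
      Size_In_Storage_Elements : in     Storage_Count;
      Alignment                : in     Storage_Count);

   overriding procedure Deallocate
     (Pool                     : in out Bump_Pool;
      Storage_Address          : in     Address;
      Size_In_Storage_Elements : in     Storage_Count;
      Alignment                : in     Storage_Count) is null;

   overriding function Storage_Size
     (Pool : Bump_Pool) return Storage_Count is (Pool.Size);

end Bump_Pools;

package body Bump_Pools is

   overriding procedure Allocate
     (Pool                     : in out Bump_Pool;
      Storage_Address          :    out Address;
      Size_In_Storage_Elements : in     Storage_Count;
      Alignment                : in     Storage_Count)
   is
   begin
      if Pool.Next > Pool.Size then
         raise Storage_Error;
      end if;
      declare
         --  Round up to the requested alignment (assumed > 0 here).
         Misalign : constant Storage_Offset :=
           Pool.Buffer (Pool.Next)'Address mod Alignment;
         Start    : constant Storage_Count :=
           Pool.Next +
             (if Misalign = 0 then 0 else Alignment - Misalign);
      begin
         if Start + Size_In_Storage_Elements - 1 > Pool.Size then
            raise Storage_Error;
         end if;
         Storage_Address := Pool.Buffer (Start)'Address;
         Pool.Next       := Start + Size_In_Storage_Elements;
      end;
   end Allocate;

end Bump_Pools;
```

Usage would be something like `Pool : Bump_Pools.Bump_Pool (4096);` with `for My_Access'Storage_Pool use Pool;` — and my portability worry is precisely whether Size_In_Storage_Elements here really covers everything the compiler needs for the allocated object.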
On 13/09/2021 at 02:53, Jere wrote:
<SNIPPED>
That blog shows a special use for Storage_Pools, where you allocate
/user/ data on top of the requested memory. When called by the compiler,
it is up to the compiler to compute how much memory is needed, and your
duty is to just allocate that.
Not sure what you are expecting. There is no requirement that objects are
allocated contiguously. Indeed, Janus/Ada will call Allocate as many times
as needed for each object; for instance, unconstrained arrays are in two
parts (descriptor and data area).
The only thing that you can assume in a portable library is that you get
called the same number of times, and with the same sizes/alignments, for
Allocate and Deallocate; there are no assumptions about size or alignment
that you can make.
If you want to build a pool around some specific allocated size, then if it needs to be portable, (A) you have to calculate the allocated size, and (B) you have to have a mechanism for what to do if some other size is requested. (Allocate a whole block for smaller sizes, fall back to built-in heap for
too large is what I usually do).
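A sketch of that (A)/(B) strategy might look like the following; Block_Pool, Free_List_Take and Heap_Fallback_Alloc are hypothetical names (not from Janus/Ada or any real library), standing in for a fixed-block free list and a heap fallback:

```ada
--  Hypothetical sketch of the "whole block for smaller sizes,
--  built-in heap for too large" strategy.  Block_Pool,
--  Free_List_Take and Heap_Fallback_Alloc are made-up names.
overriding procedure Allocate
  (Pool                     : in out Block_Pool;
   Storage_Address          :    out System.Address;
   Size_In_Storage_Elements : in     Storage_Count;
   Alignment                : in     Storage_Count) is
begin
   if Size_In_Storage_Elements <= Pool.Block_Size then
      --  (A) the request fits the size the pool was built around:
      --  hand out one whole pre-sized block
      Free_List_Take (Pool, Storage_Address);
   else
      --  (B) some other (larger) size was requested:
      --  fall back to the built-in heap
      Storage_Address := Heap_Fallback_Alloc (Size_In_Storage_Elements);
   end if;
end Allocate;
```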
"Jere" <> wrote in message
news:e3c5c553-4a7f-408a...@googlegroups.com...
I was learning about making user defined storage pools when
I came across an article that made me pause and wonder how
portable storage pools actually can be. In particular, I assumed
that the Size_In_Storage_Elements parameter in the Allocate
operation actually indicated the total number of storage elements
needed.
procedure Allocate(
Pool : in out Root_Storage_Pool;
Storage_Address : out Address;
Size_In_Storage_Elements : in Storage_Elements.Storage_Count;
Alignment : in Storage_Elements.Storage_Count) is abstract;
But after reading the following AdaCore article, my assumption is now called into question:
https://blog.adacore.com/header-storage-pools
In particular, the blog there advocates for separately counting for
things like unconstrained array First/Last indices or the Prev/Next pointers used for Controlled objects. Normally I would have assumed
that the Size_In_Storage_Elements parameter in Allocate would account
for that, but the blog clearly shows that it doesn't
So that seems to mean to make a storage pool, I have to make it
compiler specific or else risk someone creating a type like an
array and my allocation size and address values will be off.
Is it intended not to be able to do portable Storage Pools or am
I missing some Ada functionality that helps me out here. I
scanned through the list of attributes but none seem to give
any info about where the object's returned address is relative
to the top of the memory actually allocated for the object. I saw
the attribute Max_Size_In_Storage_Elements, but it doesn't seem
to guarantee to include things like the array indices and it still
doesn't solve the issue of knowing where the returned address
needs to be relative to the top of allocated memory.
I can easily use a generic to ensure that the types I care about
are portably made by the pool, but I can't prevent someone from
using my pool to create other objects that I hadn't accounted for.
Unless there is a way to restrict a pool from allocating objects
of other types?
Yes, but if you look at that blog, they are allocating space for the /user/
data and for the Next/Prev for controlled types and First/Last for
unconstrained arrays, in addition to the size specified by Allocate.
I agree, I feel it is up to the compiler to provide the correct size to
Allocate, but the blog would indicate that GNAT does not (or did not...
old blog... so who knows?). Does the RM require that an implementation
pass the full amount of memory needed to Allocate when new is called?
But you cannot assume that the object is allocated as one big chunk.
Of course, a proper solution would be fixing Ada by adding another
address attribute:
  X'Object_Address
returning the first address of the object as allocated.
On 14/09/2021 at 08:23, Dmitry A. Kazakov wrote:
Of course, a proper solution would be fixing Ada by adding another
address attribute:
   X'Object_Address
returning the first address of the object as allocated.
Bounds can be allocated at a different place. What would be
X'Object_Address in that case?
On Tuesday, September 14, 2021 at 2:48:16 AM UTC+2, Jere wrote:
Yes, but if you look at that blog, they are allocating space for the /user/ data
and for the Next/Prev for controlled types and First/Last for unconstrained arrays in addition to the size specified by allocate.
Yes, but if you look at that blog, they explain the default layout of fat
pointers, and the special value that needs to be set on access types for
the layout to change. If you use such a GNAT-ism, your storage pool will
also be bound to GNAT...
i.e.:
"GNAT typically uses a "fat pointer" for this purpose: the access itself is in fact
a record of two pointers, one of which points to the bounds, the other points to
the data. This representation is not appropriate in the case of the header storage pool, so we need to change the memory layout here."
and:
"we need to ensure that the bounds for unconstrained arrays are stored next to
the element, not in a separate memory block, to improve performance. This is done by setting the Size attribute on the type. When we set this size to that of
a standard pointer, GNAT automatically changes the layout,"
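Concretely, if I read the blog right, the GNAT-ism in question looks something like this (the Int_Array names are made up; the representation change is GNAT-specific, not portable Ada):

```ada
--  GNAT-specific sketch: force a "thin" access representation so
--  the array bounds are stored next to the data instead of in a
--  separate block referenced by a fat pointer.
type Int_Array is array (Positive range <>) of Integer;
type Int_Array_Access is access Int_Array;
for Int_Array_Access'Size use Standard'Address_Size;
--  With GNAT's default (fat) representation, Int_Array_Access'Size
--  would be twice Standard'Address_Size.
```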
On 2021-09-14 02:48, Jere wrote:
Yes, but if you look at that blog, they are allocating space for the /user/
data and for the Next/Prev for controlled types and First/Last for
unconstrained arrays in addition to the size specified by allocate.
I do not understand your concern. The blog discusses how to add service
data to the objects allocated in the pool.
I use such pools extensively in Simple Components. E.g. linked lists are
implemented this way. The list links are allocated in front of list
elements, which can be of any type, unconstrained arrays included.
On 14/09/2021 at 02:48, Jere wrote:
I agree, I feel it is up to the compiler to provide the correct size to
Allocate, but the blog would indicate that GNAT does not (or did not...
old blog... so who knows?). Does the RM require that an implementation
pass the full amount of memory needed to Allocate when new is called?
The RM says that an allocator allocates storage from the storage pool.
You could argue that it does not say "allocates all needed storage...",
but that would be a bit far-fetched.
Anyway, a blog is not the proper place to get information from for that
kind of issue. Look at the GNAT documentation.
I agree, but the blog made me reconsider how far-fetched it was. I'll
take a look at the GNAT docs (and of course that blog is old).
On Tuesday, September 14, 2021 at 2:23:15 AM UTC-4, Dmitry A. Kazakov wrote:
On 2021-09-14 02:48, Jere wrote:
Yes, but if you look at that blog, they are allocating space for the /user/
data and for the Next/Prev for controlled types and First/Last for
unconstrained arrays in addition to the size specified by allocate.
I do not understand your concern. The blog discusses how to add service
data to the objects allocated in the pool. I use such pools extensively
in Simple Components. E.g. linked lists are implemented this way. The
list links are allocated in front of list elements which can be of any
type, unconstrained arrays included.
I tried to better articulate my concern in my response to egilhh if you
want to take a quick look at that and see if it clarifies better.
The blog I saw was old, so it is completely possible it is no longer
true that GNAT does what the blog suggests. I'll take a look at your
storage pools and see how they handle things like this.
Thanks for the response. I'm sorry for all the questions. That's how
I learn, and I realize it isn't a popular way to learn in the
community, but I have always learned very differently than most.
<SNIPPED>
Well, I may well have missed the point somewhere, and maybe things
have changed since 2015, but as far as I can see, with FSF GCC 11.1.0,
the technique described in the blog is completely unnecessary.
To save having to recompile the runtime with debug symbols, I wrote a
tiny pool which delegates to GNAT's
System.Pool_Global.Global_Pool_Object (the default pool), e.g.
overriding procedure Allocate
  (Pool                     : in out My_Pool.Pool;
   Storage_Address          :    out Address;
   Size_In_Storage_Elements : in     Storage_Elements.Storage_Count;
   Alignment                : in     Storage_Elements.Storage_Count)
is
   pragma Unreferenced (Pool);
begin
   Global_Pool_Object.Allocate
     (Address      => Storage_Address,
      Storage_Size => Size_In_Storage_Elements,
      Alignment    => Alignment);
end Allocate;
and I find with
   Pool : My_Pool.Pool;
   type C is new Ada.Finalization.Controlled with null record;
   type Cs is array (Natural range <>) of C;
   type Csp is access Cs with Storage_Pool => Pool;
Mind, I don't quite see how to actually access the header info for a particular allocation ...
Mind, I don't quite see how to actually access the header info for a
particular allocation ...
By subtracting a fixed offset from Pointer.all'Address. The offset is
to be determined, because X'Address lies.
But my main point was, the blog which was Jere's original problem is in
fact (now?) wrong.
Jere <> writes:
Thanks for the response. I'm sorry for all the questions. That's how
I learn and I realize it isn't a popular way to learn in the
community, but I have always learned very differently than most.
Seems to me you ask interesting questions which generate enlightening responses!
Thanks! Though in this case, my question was ill-formed after I missed a detail.
In the case of the GNATCOLL headers pool, we need to allocate more
because the user wants to store extra data. For that data, we are left
on our own to find the number of bytes we need, which is part of the
computation we do: we of course need the number of bytes for the
header's object_size, but also perhaps some extra bytes that are not
returned by that object_size, in particular for controlled types and
arrays.
Note again that those additional bytes are for the header type, not for the type the user is allocating (for which, again, the compiler already passes the number of bytes it needs).
<SNIPPED>
Emmanuel
On Thursday, September 16, 2021 at 3:13:00 AM UTC-4, Emmanuel wrote:
In the case of the GNATCOLL headers pool, we need to allocate more
because the user wants to store extra data. For that data, we are
left on our own to find the number of bytes we need, which is part of
the computation we do: we of course need the number of bytes for the
header's object_size, but also perhaps some extra bytes that are not
returned by that object_size in particular for controlled types and
arrays.
Note again that those additional bytes are for the header type, not
for the type the user is allocating (for which, again, the compiler
already passes the number of bytes it needs).
Thanks for the response Emmanuel. That clears it up for me. I think
the confusion for me came from the terminology used then. In the
blog, that extra space for First/Last and Prev/Next was mentioned as
if it were for the element, which I mistook was the user's object
being allocated and not the header portion. I didn't catch that as
the generic formal's name, so that is my mistake.
I appreciate the clarity and apologize if I caused too much of a stir.
I was asking the question
because I didn't understand, so I hope you don't think too poorly of me for it, despite my mistake.
Nope, especially because the issue with X'Address being unusable for
memory pool developers is a long-standing painful problem that needs to
be resolved. That will never happen until a measurable group of people
start asking questions. So you are doubly welcome.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
Nope, especially because the issue with X'Address being unusable for
memory pool developers is a long standing painful problem that need to
be resolved. That will never happen until a measurable group of people
start asking questions. So you are doubly welcome.
There are two attributes that we should all have known about:
Descriptor_Size [1] (bits, introduced in 2011) and Finalization_Size [2]
(storage units, I think, introduced in 2017).
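For example (a sketch only; both attributes are GNAT-specific, not standard Ada, and the procedure name is mine):

```ada
--  GNAT-specific sketch.  Descriptor_Size is reported in bits;
--  Finalization_Size in storage elements, if I read the docs right.
with Ada.Text_IO; use Ada.Text_IO;

procedure Show_Hidden_Sizes is
   type Arr is array (Positive range <>) of Integer;
   A : Arr := (1, 2, 3);
begin
   --  Bits the compiler needs for the unconstrained array's bounds:
   Put_Line ("Descriptor bits :" & Integer'Image (Arr'Descriptor_Size));
   --  Storage elements of finalization overhead (likely 0 here,
   --  since A is not controlled):
   Put_Line ("Finalization SEs:" & Integer'Image (A'Finalization_Size));
end Show_Hidden_Sizes;
```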
On 2021-09-17 21:46, Simon Wright wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
Nope, especially because the issue with X'Address being unusable for
memory pool developers is a long standing painful problem that need to
be resolved. That will never happen until a measurable group of people
start asking questions. So you are doubly welcome.
There are two attributes that we should all have known about,
Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
(storage units, I think, introduced in 2017)
They are non-standard and have murky semantics I doubt anybody really
cares about.
What is needed is the address passed to Deallocate should the object be
freed = the address returned by Allocate. Is that too much to ask?
BTW, finalization lists (#2) should have been removed from the language
long ago.
On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
What is needed is the address passed to Deallocate should the object
be freed = the address returned by Allocate. Is that too much to ask?
That is already required by RM 13.11(21.7/3): "The value of the Storage_Address parameter for a call to Deallocate is the value returned
in the Storage_Address parameter of the corresponding successful call to Allocate."
BTW, finalization lists (#2) should have been removed from the
language long ago.
Huh? Where does the RM _require_ finalization lists?
I see them
mentioned here and there as a _possible_ implementation technique, and
an alternative "PC-map" technique is described in RM 7.6.1 (24.r .. 24.t).
On 2021-09-17 23:17, Niklas Holsti wrote:
That is already required by RM 13.11(21.7/3): "The value of the
Storage_Address parameter for a call to Deallocate is the value
returned in the Storage_Address parameter of the corresponding
successful call to Allocate."
You missed the discussion totally. It is about X'Address attribute.
BTW, finalization lists (#2) should have been removed from the
language long ago.
Huh? Where does the RM _require_ finalization lists?
7.6.1 (11.1/3)
I see them mentioned here and there as a _possible_ implementation
technique, and an alternative "PC-map" technique is described in RM
7.6.1 (24.r .. 24.t).
I don't care about techniques to implement meaningless stuff. It should
be out, at least there must be a representation aspect for turning this
mess off.
On 2021-09-18 10:49, Dmitry A. Kazakov wrote:
You missed the discussion totally. It is about X'Address attribute.
Sure, I understand that the address returned by Allocate, and passed to Deallocate, for an object X, is not always X'Address, and that you would
like some means to get the Allocate/Deallocate address from (an access
to) X. But what you stated as not "too much to ask" is specifically
required in the RM paragraph I quoted. Perhaps you meant to state
something else, about X'Address or some other attribute, but that was
not what you wrote.
Given that an object can be allocated in multiple independent pieces, it seems unlikely that what you want will be provided.
BTW, finalization lists (#2) should have been removed from the
language long ago.
Huh? Where does the RM _require_ finalization lists?
7.6.1 (11.1/3)
RM (2012) 7.6.1 (11.1/3) says only that objects must be finalized in
reverse order of their creation. There is no mention of "list".
Then your complaint seems to be about something specified for the order
of finalization, but you haven't said clearly what that something is.
On 2021-09-18 11:03, Niklas Holsti wrote:
You missed the discussion totally. It is about X'Address attribute.
Sure, I understand that the address returned by Allocate, and passed
to Deallocate, for an object X, is not always X'Address, and that you
would like some means to get the Allocate/Deallocate address from (an
access to) X. But what you stated as not "too much to ask" is
specifically required in the RM paragraph I quoted. Perhaps you meant
to state something else, about X'Address or some other attribute, but
that was not what you wrote.
I wrote about attributes, specifically GNAT-specific ones used in the
blog to calculate the correct address.
"Too much to ask" was about an
attribute that would return the object address directly.
Given that an object can be allocated in multiple independent pieces,
it seems unlikely that what you want will be provided.
Such implementations would automatically disqualify the compiler. Compiler-generated piecewise allocation is OK for the stack, not for
user storage pools.
BTW, finalization lists (#2) should have been removed from the
language long ago.
Huh? Where does the RM _require_ finalization lists?
7.6.1 (11.1/3)
RM (2012) 7.6.1 (11.1/3) says only that objects must be finalized in
reverse order of their creation.
There is no mention of "list".
It talks about "collection."
Then your complaint seems to be about something specified for the
order of finalization, but you haven't said clearly what that
something is.
No, it is about the overhead of maintaining "collections" associated
with an access type in order to call Finalize for all members of the
collection.
So you want a way to specify that for a given access type, although the accessed object type has a Finalize operation or needs finalization, the objects left over in the (at least conceptually) associated collection
should _not_ be finalized when the scope of the access type is left?
To me it seems a risky thing to do, subverting the normal semantics of
initialization and finalization.
On 2021-09-18 17:59, Niklas Holsti wrote:
So you want a way to specify that for a given access type, although
the accessed object type has a Finalize operation or needs
finalization, the objects left over in the (at least conceptually)
associated collection should _not_ be finalized when the scope of the
access type is left?
Exactly, especially because these objects are not deallocated, as you
say they are left over. If they wanted GC they should do that. If they
do not, then they should keep their hands off the objects maintained by
the programmer.
To me it seems a risky thing to do, subverting the normal semantics of
initialization and finalization.
Quite the opposite, it is the collection rule that subverts semantics
because objects are not freed, yet mangled.
On 2021-09-18 19:19, Dmitry A. Kazakov wrote:
Quite the opposite, it is the collection rule that subverts semantics
because objects are not freed, yet mangled.
Local variables declared in a subprogram are also not explicitly freed (deallocated), yet they are automatically finalized when the subprogram returns.
My understanding of Ada semantic principles is that any object that is initialized should also be finalized.
Has this feature of Ada caused you real problems in real applications,
or is it only a point of principle for you?
Not sure what you are expecting. There is no requirement that objects are
allocated contiguously. Indeed, Janus/Ada will call Allocate as many times
as needed for each object; for instance, unconstrained arrays are in two
parts (descriptor and data area).
<SNIPPED>
Randy.
Followup question because Randy's statement (quoted above) got me thinking:
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?
I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.
On 2021-09-19 12:36, Niklas Holsti wrote:
On 2021-09-18 19:19, Dmitry A. Kazakov wrote:
On 2021-09-18 17:59, Niklas Holsti wrote:
So you want a way to specify that for a given access type, although
the accessed object type has a Finalize operation or needs
finalization, the objects left over in the (at least conceptually)
associated collection should _not_ be finalized when the scope of
the access type is left?
Exactly, especially because these objects are not deallocated, as you
say they are left over. If they wanted GC they should do that. If
they do not, then they should keep their hands off the objects
maintained by the programmer.
To me it seems a risky thing to do, subverting the normal semantics
of initialization and finalization.
Quite the opposite, it is the collection rule that subverts semantics
because objects are not freed, yet mangled.
Local variables declared in a subprogram are also not explicitly freed
(deallocated), yet they are automatically finalized when the
subprogram returns.
Local objects are certainly freed. Explicit or not, aggregated or not,
is irrelevant.
My understanding of Ada semantic principles is that any object that is
initialized should also be finalized.
IFF deallocated.
An application that runs continuously will never deallocate, and HENCE
will never finalize, certain objects.
Has this feature of Ada caused you real problems in real applications,
or is it only a point of principle for you?
1. It is a massive overhead in both memory and performance terms with no purpose whatsoever. [...]
2. What is worse is that a collection is not bound to the pool. It is to an
access type, which may have a narrower scope. So the user could declare an
unfortunate access type, which would corrupt objects in the pool, and the
pool designer has no means to prevent that.
On 2021-09-20 09:05, Niklas Holsti wrote:
However, your semantic argument (as opposed to the overhead argument)
seems to be based on an assumption that the objects "left over" in a
local collection, and which thus are inaccessible, will still,
somehow, participate in the later execution of the program, which is
why you say that finalizing those objects would "corrupt" them.
It seems to me that such continued participation is possible only if
the objects contain tasks or are accessed through some kind of
unchecked programming. Do you agree?
No. You can have them accessible over other access types with wider scopes:
  Collection_Pointer := new X;
  Global_Pointer := Collection_Pointer.all'Unchecked_Access;
On 2021-09-19 14:41, Dmitry A. Kazakov wrote:
On 2021-09-19 12:36, Niklas Holsti wrote:
On 2021-09-18 19:19, Dmitry A. Kazakov wrote:
On 2021-09-18 17:59, Niklas Holsti wrote:
So you want a way to specify that for a given access type, although
the accessed object type has a Finalize operation or needs
finalization, the objects left over in the (at least conceptually)
associated collection should _not_ be finalized when the scope of
the access type is left?
Exactly, especially because these objects are not deallocated, as
you say they are left over. If they wanted GC they should do that.
If they do not, then they should keep their hands off the objects
maintained by the programmer.
To me it seems a risky thing to do, subverting the normal semantics
of initialization and finalization.
Quite the opposite, it is the collection rule that subverts
semantics because objects are not freed, yet mangled.
Local variables declared in a subprogram are also not explicitly
freed (deallocated), yet they are automatically finalized when the
subprogram returns.
Local objects are certainly freed. Explicit or not, aggregated or not,
is irrelevant.
Objects left over in a local collection may certainly be freed
automatically, if the implementation has created a local pool for them.
See ARM 13.11 (2.a): "Alternatively, [the implementation] might choose
to create a new pool at each accessibility level, which might mean that storage is reclaimed for an access type when leaving the appropriate
scope."
Has this feature of Ada caused you real problems in real
applications, or is it only a point of principle for you?
1. It is a massive overhead in both memory and performance terms with
no purpose whatsoever. [...]
Have you actually measured or observed that overhead in some application?
2. What is worse is that a collection is not bound to the pool. It is to
an access type, which may have a narrower scope. So the user could
declare an unfortunate access type, which would corrupt objects in the
pool, and the pool designer has no means to prevent that.
So there is a possibility of programmer mistake, leading to unintended finalization of those (now inaccessible) objects.
However, your semantic argument (as opposed to the overhead argument)
seems to be based on an assumption that the objects "left over" in a
local collection, and which thus are inaccessible, will still, somehow, participate in the later execution of the program, which is why you say
that finalizing those objects would "corrupt" them.
It seems to me that such continued participation is possible only if the objects contain tasks or are accessed through some kind of unchecked programming. Do you agree?
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?
I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.
I think the only solution would be for this compiler to have another attribute similar to 'Storage_Pool, but that would define the pool for the descriptor:
for X'Storage_Pool use Pool;
for X'Descriptor_Storage_Pool use Other_Pool;
That way the user can decide when to add (or not) extra headers.
On 2021-09-20 10:35, Dmitry A. Kazakov wrote:
No. You can have them accessible over other access types with wider
scopes:
   Collection_Pointer := new X;
   Global_Pointer := Collection_Pointer.all'Unchecked_Access;
So, unchecked programming, as I said.
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?
I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.
I think the only solution would be for this compiler to have another
attribute similar to 'Storage_Pool, but that would define the pool for the
descriptor:
for X'Storage_Pool use Pool;
for X'Descriptor_Storage_Pool use Other_Pool;
That way the user can decide when to add (or not) extra headers.
Hmmm, smells like a place to use generics and subpools; perhaps something
like:
On Wednesday, September 15, 2021 at 3:01:52 AM UTC-4, Simon Wright wrote:
Jere <> writes:
Thanks for the response. I'm sorry for all the questions. That's how
I learn, and I realize it isn't a popular way to learn in the
community, but I have always learned very differently than most.
Seems to me you ask interesting questions which generate enlightening
responses!
Thanks! Though in this case, my question was ill-formed after I missed a
detail in the blog, so the mistake is on me. I will say I hold back some
questions, as it is very intimidating to ask on C.L.A. I mean, the first
response led off with "Not sure what you are expecting", so it is hard to
know how to formulate a good question, as I always seem to get some harsh
responses (which I am sure is because I asked the question poorly). I'm
unfortunately a very visual person and words are not my forte, and I feel
like when I ask questions about the boundaries of the language I manage to
put folks on the defensive. I don't dislike Ada at all, it is my favorite
language, but I think it is hard to craft questions on some topics without
putting forth the impression that I don't like it, at least with my limited
ability to word-craft.
On 14/09/2021 at 08:23, Dmitry A. Kazakov wrote:
Of course, a proper solution would be fixing Ada by adding another
address attribute:
X'Object_Address
returning the first address of the object as allocated.
But you cannot assume that the object is allocated as one big chunk.
Bounds can be allocated at a different place. What would be
X'Object_Address in that case?
On 2021-09-14 02:48, Jere wrote:...
The problem with unconstrained arrays is not that the bounds are not allocated, they are, but the semantics of X'Address when applied to
arrays.
A'Address is the address of the first array element, not of the array
object. For a pool designer it constitutes a problem of getting the array object by address. This is what Emmanuel discusses in the blog.
[ The motivation behind Ada choice was probably to keep the semantics implementation-independent. ]
Consider for example a list of String elements. When Allocate is called
for a String, it returns the address of the whole String object. But that
is not the address you would get if you applied 'Address. You have to
add/subtract some offset in order to get one from the other.
In Simple Components this offset is determined at run-time for each
generic instance.
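That run-time determination could be sketched roughly as follows. This is not Simple Components code; the pool object My_Pool and its Last_Allocated component (assumed to be recorded by the pool's own Allocate) are invented names for illustration. The idea is simply: allocate one throwaway object, compare X'Address with the address Allocate returned, and remember the difference for the instance.

```ada
--  Illustrative sketch only: determine, once per generic instance, the
--  offset between the address returned by Allocate and what X'Address
--  reports for the allocated object.  My_Pool and My_Pool.Last_Allocated
--  are assumptions: the pool's Allocate is assumed to record the address
--  it returned.  With a compiler that splits one object across several
--  Allocate calls, this probe only calibrates one of the pieces.
with System;                  use System;
with System.Storage_Elements; use System.Storage_Elements;
with Ada.Unchecked_Deallocation;

generic
   type Element_Type is private;
package Offset_Calibration is
   Data_Offset : Storage_Offset;  --  X'Address minus allocated address
end Offset_Calibration;

package body Offset_Calibration is

   type Element_Access is access Element_Type;
   for Element_Access'Storage_Pool use My_Pool;  --  assumed pool object

   procedure Free is
      new Ada.Unchecked_Deallocation (Element_Type, Element_Access);

   Probe : Element_Access := new Element_Type;   --  one throwaway allocation

begin
   --  Last_Allocated was recorded inside the pool's Allocate.
   Data_Offset := Probe.all'Address - My_Pool.Last_Allocated;
   Free (Probe);
end Offset_Calibration;
```

After elaboration of an instance, the pool can apply Data_Offset to convert between X'Address values and the addresses it actually handed out, for objects of that one type.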
Of course, a proper solution would be fixing Ada by adding another address attribute:
X'Object_Address
returning the first address of the object as allocated.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
On Monday, September 13, 2021 at 1:29:39 AM UTC-4, Randy Brukardt wrote:
Not sure what you are expecting. There is no requirement that objects are
allocated contiguously. Indeed, Janus/Ada will call Allocate as many times
as needed for each object; for instance, unconstrained arrays are in two
parts (descriptor and data area).
No expectations. Just questions. I wasn't concerned with whether the
allocated memory was contiguous or not, but whether an implementation is
required to supply the correct size of memory needed to allocate an object,
or if it is allowed to pass a value to Size that is less than the amount of
memory actually needed. For example, the blog there indicates that the
maintainer of the custom storage pool needs to account for the First/Last
indexes of an unconstrained array separately, instead of assuming that
value is included as part of the Size parameter's value.
If the Size parameter isn't required to include space for First/Last for
unconstrained arrays, or Prev/Next for controlled objects (assuming that
is even the implementation picked, of course), then I'm not seeing a way
to write a custom storage pool that is portable, because you need to
account for each implementation's "hidden" values that are not
represented in the Size parameter.
For example, if Janus calculated Size to include both the size of the
array and the size of First and Last, but GNAT didn't, and my storage
pool assumed the Janus method, then if someone used my storage pool
with GNAT it could access memory from some other location, potentially
and erroneously.
The only thing that you can assume in a portable library is that you get
called the same number of times and sizes/alignment for Allocate and
Deallocate; there's no assumptions about size or alignment that you can
make.
So to be clear, you cannot assume that Size and Alignment are appropriate
for the actual object being allocated, correct? Size could actually be
less than the actual amount of memory needed, and the alignment may only
apply to part of the object being allocated, not the full object?
Is that correct? I'm asking because that is what the blog suggests with
the example it gave.
Are there any good tricks to handle this? For example, if I design a
storage pool around constructing a particular type of object, what is
normally done to discourage another programmer from using the pool with
an entirely different type? Maybe raise an exception if the size isn't
exact? I'm not sure what else, unless maybe there is an Aspect/Attribute
that can be set to ensure only a specific type of object can be constructed.
If you want to build a pool around some specific allocated size, then if it
needs to be portable, (A) you have to calculate the allocated size, and (B)
you have to have a mechanism for what to do if some other size is requested.
(Allocate a whole block for smaller sizes, fall back to the built-in heap
for sizes that are too large is what I usually do.)
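A sketch of the fallback strategy Randy describes might look like this. Fixed_Block_Pool, Take_From_Free_List and Fall_Back_To_Default_Heap are invented names for illustration, not from any real library; the point is only the shape of the size/alignment check.

```ada
--  Rough sketch: a pool built around one expected block size that
--  tolerates other requests (descriptors, oversized objects) instead of
--  failing.  All names here are illustrative assumptions.
overriding
procedure Allocate
  (Pool                     : in out Fixed_Block_Pool;
   Storage_Address          : out System.Address;
   Size_In_Storage_Elements : System.Storage_Elements.Storage_Count;
   Alignment                : System.Storage_Elements.Storage_Count)
is
   use type System.Storage_Elements.Storage_Count;
begin
   if Size_In_Storage_Elements <= Pool.Block_Size
     and then Alignment <= Pool.Block_Alignment
   then
      --  Expected case: hand out a whole fixed block, even for
      --  smaller requests, as suggested above.
      Take_From_Free_List (Pool, Storage_Address);
   else
      --  Unexpected case (e.g. a compiler-allocated descriptor):
      --  delegate to the built-in heap rather than raising.
      Storage_Address :=
        Fall_Back_To_Default_Heap (Size_In_Storage_Elements, Alignment);
   end if;
end Allocate;
```

The alternative Jere mentions, raising an exception (Storage_Error, say) in the else branch, is also possible, but it would reject compilers that legitimately split one allocation into several Allocate calls.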
On 2021-09-17 21:46, Simon Wright wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
Nope, especially because the issue with X'Address being unusable for
memory pool developers is a long-standing painful problem that needs to
be resolved. That will never happen until a measurable group of people
start asking questions. So you are doubly welcome.
There are two attributes that we should all have known about,
Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
(storage units, I think, introduced in 2017)
They are non-standard and have murky semantics I doubt anybody really
cares about.
What is needed is the address passed to Deallocate should the object be
freed = the address returned by Allocate. Is that too much to ask?
BTW, finalization lists (#2) should have been removed from the language
long ago. They have absolutely no use, except maybe for debugging, and introduce huge overhead. The semantics should have been either Unchecked_Deallocation or compiler allocated objects/components may call Finalize, nothing else.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
On 2021-09-17 23:17, Niklas Holsti wrote:
On 2021-09-17 23:39, Dmitry A. Kazakov wrote:
On 2021-09-17 21:46, Simon Wright wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
Nope, especially because the issue with X'Address being unusable for
memory pool developers is a long-standing painful problem that needs to
be resolved. That will never happen until a measurable group of people
start asking questions. So you are doubly welcome.
There are two attributes that we should all have known about,
Descriptor_Size[1] (bits, introduced in 2011) and Finalization_Size[2]
(storage units, I think, introduced in 2017)
They are non-standard and have murky semantics I doubt anybody really
cares about.
What is needed is the address passed to Deallocate should the object be
freed = the address returned by Allocate. Is that too much to ask?
That is already required by RM 13.11(21.7/3): "The value of the
Storage_Address parameter for a call to Deallocate is the value returned
in the Storage_Address parameter of the corresponding successful call to
Allocate."
You missed the discussion totally. It is about X'Address attribute.
The challenge: write a pool with a function returning an object's
allocation time, given its pool-specific access type.
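A sketch of what such a pool would have to do, assuming a timestamp header prepended to each allocation. Timed_Pool and Underlying_Allocate are invented names, and the header size calculation ignores alignment for brevity. The catch, which is the point of the challenge: the lookup only works if the caller can recover the address Allocate returned, which is exactly what X'Address does not portably give for unconstrained arrays.

```ada
--  Illustrative sketch of a "header" pool recording allocation time.
--  Timed_Pool and Underlying_Allocate are assumptions; with clauses for
--  System, System.Storage_Elements and Ada.Calendar are assumed in scope.
Header_Size : constant Storage_Count :=
  Ada.Calendar.Time'Max_Size_In_Storage_Elements;  --  alignment ignored

overriding
procedure Allocate
  (Pool                     : in out Timed_Pool;
   Storage_Address          : out System.Address;
   Size_In_Storage_Elements : Storage_Count;
   Alignment                : Storage_Count)
is
   Raw : System.Address;
begin
   --  Get room for the header plus the object from the real allocator.
   Underlying_Allocate
     (Raw, Size_In_Storage_Elements + Header_Size, Alignment);
   declare
      Stamp : Ada.Calendar.Time with Address => Raw, Import;
   begin
      Stamp := Ada.Calendar.Clock;               --  write the header
   end;
   Storage_Address := Raw + Header_Size;         --  object starts past it
end Allocate;

function Allocation_Time
  (Object_Address : System.Address) return Ada.Calendar.Time
is
   Stamp : Ada.Calendar.Time
     with Address => Object_Address - Header_Size, Import;
begin
   --  Valid only if Object_Address is the value Allocate returned;
   --  for an unconstrained array, X'Address is generally NOT that value.
   return Stamp;
end Allocation_Time;
```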
BTW, finalization lists (#2) should have been removed from the language
long ago.
Huh? Where does the RM _require_ finalization lists?
7.6.1 (11.1/3)
I see them mentioned here and there as a _possible_ implementation
technique, and an alternative "PC-map" technique is described in RM 7.6.1
(24.r .. 24.t).
I don't care about techniques to implement meaningless stuff. It should be out, at least there must be a representation aspect for turning this mess off.
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
On 2021-09-19 12:36, Niklas Holsti wrote:...
Local variables declared in a subprogram are also not explicitly freed
(deallocated), yet they are automatically finalized when the subprogram
returns.
Local objects are certainly freed. Explicit or not, aggregated or not, is irrelevant.
My understanding of Ada semantic principles is that any object that is
initialized should also be finalized.
IFF deallocated.
Given that an object can be allocated in multiple independent pieces, it
seems unlikely that what you want will be provided.
Such implementations would automatically disqualify the compiler. Compiler-generated piecewise allocation is OK for the stack, not for user storage pools.
No, it is about the overhead of maintaining "collections" associated with
an access type in order to call Finalization for all members of the collection.
1. It is a massive overhead in both memory and performance terms with no purpose whatsoever. I fail to see where that sort of thing might be even marginally useful.
2. What is worse is that a collection is not bound to the pool. It is to an
access type, which may have a narrower scope. So the user could declare an
unfortunate access type, which would corrupt objects in the pool, and the
pool designer has no means to prevent that.
On 2021-09-20 10:08, Niklas Holsti wrote:
On 2021-09-20 10:35, Dmitry A. Kazakov wrote:
No. You can have them accessible over other access types with wider
scopes:
Collection_Pointer := new X;
Global_Pointer := Collection_Pointer.all'Unchecked_Access;
So, unchecked programming, as I said.
Right, working with pools is all that thing. Maybe "new" should be named "unchecked_new" (:-))
Finalize and Initialize certainly should have been Unchecked_Finalize and
Unchecked_Initialize, as they are not enforced. You can override the
parent's Initialize and never call it. It is a plain primitive operation
anybody can call any time, any place. You can even call it before the
object is fully initialized!
So, why bother with objects the user manually allocates (and forgets to free)?
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?
I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.
I think the only solution would be for this compiler to have another attribute similar to 'Storage_Pool, but that would define the pool for the descriptor:
for X'Storage_Pool use Pool;
for X'Descriptor_Storage_Pool use Other_Pool;
That way the user can decide when to add (or not) extra headers.
On 2021-09-20 10:35, Dmitry A. Kazakov wrote:
On 2021-09-20 09:05, Niklas Holsti wrote:
[snipping context]
However, your semantic argument (as opposed to the overhead argument)
seems to be based on an assumption that the objects "left over" in a
local collection, and which thus are inaccessible, will still, somehow,
participate in the later execution of the program, which is why you say
that finalizing those objects would "corrupt" them.
It seems to me that such continued participation is possible only if the
objects contain tasks or are accessed through some kind of unchecked
programming. Do you agree?
No. You can have them accessible over other access types with wider
scopes:
Collection_Pointer := new X;
Global_Pointer := Collection_Pointer.all'Unchecked_Access;
So, unchecked programming, as I said.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si77kd$rka$1@gioia.aioe.org...
...
1. It is a massive overhead in both memory and performance terms with no
purpose whatsoever. I fail to see where that sort of thing might be even
marginally useful.
The classic example of Finalization is file management on a simple kernel
(I use CP/M as the example in my head). CP/M did not try to recover any
resources on program exit; it was the program's responsibility to recover
them all (or reboot after every run). If you had holes in finalization,
you would easily leak files, and since you could only open a limited
number of them at a time, you could easily make a system non-responsive.
2. What is worse is that a collection is not bound to the pool. It is to an
access type, which may have a narrower scope. So the user could declare an
unfortunate access type, which would corrupt objects in the pool, and the
pool designer has no means to prevent that.
Pools are extremely low-level things that cannot be safe in any sense of the word. A badly designed pool will corrupt everything. Using a pool with the "wrong" access type generally has to be programmed for (as I answered earlier, if I assume anything about allocations, I check for violations and do something else.) And a pool can be used with many access types; many useful ones are.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:si4ell$1b25$1@gioia.aioe.org...
...
Given that an object can be allocated in multiple independent pieces, it
seems unlikely that what you want will be provided.
Such implementations would automatically disqualify the compiler.
Compiler-generated piecewise allocation is OK for the stack, not for user
storage pools.
If someone wants to require contiguous allocation of objects, there should
be a representation attribute to specify it. And there should not be
nonsense restrictions on records with defaulted discriminants unless you
specify that you require contiguous allocation.
No, it is about the overhead of maintaining "collections" associated with
an access type in order to call Finalization for all members of the
collection.
How else would you ensure that Finalize is always called on an allocated object?
A better solution would be to know the size of those bounds objects and
treat them differently (I've done that). And the next allocation is going
to be the data, so I don't do anything special for them. It probably would
be nice to have an attribute for that, but no one has ever asked for any
such thing, so I haven't defined anything.
Such pools are highly implementation-specific, so I haven't worried about
this much.
Randy.
"Emmanuel Briot" <> wrote in message news:44be7c73-f69e-45da...@googlegroups.com...
If a compiler is allowed to break up an allocation into multiple
calls to Allocate (and of course Deallocate), how does one go about
enforcing that the user's header is only created once?
I think one cannot enforce that, because the calls to Allocate do not
indicate (with parameters) which set of calls concern the same object
allocation.
I think the only solution would be for this compiler to have another attribute similar to 'Storage_Pool, but that would define the pool for the descriptor:
for X'Storage_Pool use Pool;
for X'Descriptor_Storage_Pool use Other_Pool;
That way the user can decide when to add (or not) extra headers.
Sorry about that, I didn't understand what you were asking. And I get
defensive about people who think that a pool should get some specific Size
(and only that size), so I leapt to a conclusion and answered accordingly.
The compiler requests all of the memory IT needs, but if the pool needs
some additional memory for its own purposes (pretty common), it will need
to add that space itself. It's hard to imagine how it could be otherwise;
I guess I would have thought that goes without saying. (And that rather
proves that there is nothing that goes without saying.)
Randy.
"Jere" <> wrote in message
news:96e7199f-c354-402f...@googlegroups.com...
On Wednesday, September 15, 2021 at 3:01:52 AM UTC-4, Simon Wright wrote:
Jere <> writes:
Thanks for the response. I'm sorry for all the questions. That's how
I learn, and I realize it isn't a popular way to learn in the
community, but I have always learned very differently than most.
Seems to me you ask interesting questions which generate enlightening
responses!
Thanks! Though in this case, my question was ill-formed after I missed a
detail in the blog, so the mistake is on me. I will say I hold back some
questions, as it is very intimidating to ask on C.L.A. I mean, the first
response led off with "Not sure what you are expecting", so it is hard to
know how to formulate a good question, as I always seem to get some harsh
responses (which I am sure is because I asked the question poorly). I'm
unfortunately a very visual person and words are not my forte, and I feel
like when I ask questions about the boundaries of the language I manage to
put folks on the defensive. I don't dislike Ada at all, it is my favorite
language, but I think it is hard to craft questions on some topics without
putting forth the impression that I don't like it, at least with my limited
ability to word-craft.
On 2021-09-21 02:26, Randy Brukardt wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:si4ell$1b25$1@gioia.aioe.org...
...
Given that an object can be allocated in multiple independent pieces, it
seems unlikely that what you want will be provided.
Such implementations would automatically disqualify the compiler.
Compiler-generated piecewise allocation is OK for the stack, not for user
storage pools.
If someone wants to require contiguous allocation of objects, there should
be a representation attribute to specify it.
It would be difficult, because the types are declared prior to pools. That
is, when the object layout does change. If the layout does not change,
then you need no attribute.
You can always run a mock allocation to compute overall size and offsets
to the pieces and then do one true allocation. And with stream attributes
you need to implement introspection anyway. So this might have been an
issue for Ada 83, but now one can simply require contiguous allocation in pools.
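One possible reading of that two-pass "mock allocation" idea, as a hedged sketch only: Pool_Mode, Two_Pass_Pool and Real_Allocate are invented names, and the Deallocate and Storage_Size overridings a concrete pool would also need are omitted.

```ada
--  Sketch: run the compiler's piecewise Allocate calls once in a
--  measuring mode to learn the total size needed, then switch modes and
--  serve one real, contiguous allocation.  Illustrative only.
type Pool_Mode is (Measuring, Real);

type Two_Pass_Pool is new Root_Storage_Pool with record
   Mode    : Pool_Mode := Measuring;
   Total   : Storage_Count := 0;
   Scratch : Storage_Array (1 .. 4_096);  --  throwaway target while measuring
end record;

overriding
procedure Allocate
  (Pool                     : in out Two_Pass_Pool;
   Storage_Address          : out System.Address;
   Size_In_Storage_Elements : Storage_Count;
   Alignment                : Storage_Count) is
begin
   case Pool.Mode is
      when Measuring =>
         --  Round the running total up to this piece's alignment,
         --  then reserve its size.  The piece's offset is known here.
         Pool.Total :=
           ((Pool.Total + Alignment - 1) / Alignment) * Alignment
           + Size_In_Storage_Elements;
         Storage_Address := Pool.Scratch'Address;  --  data is discarded
      when Real =>
         Real_Allocate (Pool, Storage_Address,     --  assumed helper
                        Size_In_Storage_Elements, Alignment);
   end case;
end Allocate;
```

Whether the mock pass is legal for a given compiler depends on the object being allocatable twice without side effects, so this is at best a per-implementation trick, not a portable one.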
And there should not be nonsense restrictions on records with defaulted
discriminants unless you specify that you require contiguous allocation.
You can keep the object layout. It is only a question of "trolling" the
pool, not of how objects are represented there.
No, it is about the overhead of maintaining "collections" associated with
an access type in order to call Finalization for all members of the
collection.
How else would you ensure that Finalize is always called on an allocated
object?
I would not, because it is plain wrong. Finalize must be called for each *deallocated* object.
On 2021-09-21 02:37, Randy Brukardt wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:si77kd$rka$1@gioia.aioe.org...
...
1. It is a massive overhead in both memory and performance terms with no
purpose whatsoever. I fail to see where that sort of thing might be even
marginally useful.
The classic example of Finalization is file management on a simple kernel
(I use CP/M as the example in my head). CP/M did not try to recover any
resources on program exit; it was the program's responsibility to recover
them all (or reboot after every run). If you had holes in finalization,
you would easily leak files, and since you could only open a limited
number of them at a time, you could easily make a system non-responsive.
This is why system resources are handled by the OS rather than by the application. But I do not see how this justifies "collections."
2. What is worse is that a collection is not bound to the pool. It is to an
access type, which may have a narrower scope. So the user could declare an
unfortunate access type, which would corrupt objects in the pool, and the
pool designer has no means to prevent that.
Pools are extremely low-level things that cannot be safe in any sense of
the word. A badly designed pool will corrupt everything. Using a pool with
the "wrong" access type generally has to be programmed for (as I answered
earlier, if I assume anything about allocations, I check for violations
and do something else). And a pool can be used with many access types;
many useful ones are.
This is also true, but again unrelated to the point that tying
finalization *without* deallocation to a pointer type is just wrong, semantically on any abstraction level.
We can't change the Allocate specification since it is what it is, but is there
any consideration to adding functionality to the root storage pool type,
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sibvcr$1ico$1@gioia.aioe.org...
On 2021-09-21 02:26, Randy Brukardt wrote:
How else would you ensure that Finalize is always called on an allocated
object?
I would not, because it is plain wrong. Finalize must be called for each
*deallocated* object.
Deallocation is irrelevant. Finalization is called when objects are about to be destroyed, by any method.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sibu1t$12ds$1@gioia.aioe.org...
This is also true, but again unrelated to the point that tying
finalization *without* deallocation to a pointer type is just wrong,
semantically on any abstraction level.
If you didn't finalize everything, then a system like Claw would not work, since there would be objects that would have gotten destroyed (when the access type goes out of scope) and would still be on the various active object chains. (The whole reason that these things are controlled is so that they can be added to and removed from object chains as needed.)
On 2021-09-28 06:31, Randy Brukardt wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:sibvcr$1ico$1@gioia.aioe.org...
On 2021-09-21 02:26, Randy Brukardt wrote:
How else would you ensure that Finalize is always called on an allocated
object?
I would not, because it is plain wrong. Finalize must be called for each
*deallocated* object.
Deallocation is irrelevant. Finalization is called when objects are about
to be destroyed, by any method.
And no object may be destroyed unless deallocated.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
And no object may be destroyed unless deallocated.
Well, if it's important that an allocated object not be destroyed, don't allocate it from a storage pool that can go out of scope!
On 2021-09-28 09:52, Simon Wright wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
And no object may be destroyed unless deallocated.
Well, if it's important that an allocated object not be destroyed, don't
allocate it from a storage pool that can go out of scope!
That was never the case.
The case is that an object allocated in a pool gets finalized because the access type (not the pool!) used to allocate the object goes out of the scope.
This makes no sense whatsoever.
Again, finalization must be tied with [logical] deallocation. Just like initialization is with allocation.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:siuigp$bqs$1@gioia.aioe.org...
On 2021-09-28 09:52, Simon Wright wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
And no object may be destroyed unless deallocated.
Well, if it's important that an allocated object not be destroyed, don't
allocate it from a storage pool that can go out of scope!
That was never the case.
The case is that an object allocated in a pool gets finalized because the
access type (not the pool!) used to allocate the object goes out of the
scope.
This makes no sense whatsoever.
Again, finalization must be tied with [logical] deallocation. Just like
initialization is with allocation.
But it is. All of the objects allocated from an access type are logically deallocated when the access type goes out of scope (and the memory can be recovered).
Remember that Ada was designed so that one never needs to use Unchecked_Deallocation.
I could see an unsafe language (like C) doing the sort of thing you suggest, but not Ada.
Every object in Ada has a specific declaration point,
initialization point, finalization point, and destruction point. There are
no exceptions.
Come on. There never existed Ada compiler with GC. And nobody could even
implement GC with the meaningless semantics of "collections" in the way,
killing objects at random. Either with GC or without it, there must be
no such thing as "collections."
Untrue; GNAT for JVM, and GNAT for DOTNET.
How does this follow? Finalization *isn't* random, it happens at
well-defined places.
I could see an unsafe language (like C) doing the sort of thing you
suggest, but not Ada.
How is randomly finalizing user-allocated and user-freed objects safe?
And I suggest doing exactly nothing, as opposed to the *unsafe*, costly
and meaningless behavior mandated by the standard now.
Because these are the places that finalization (and
deallocation/destruction) are defined to happen.
Every object in Ada has a specific declaration point,
initialization point, finalization point, and destruction point. There are
no exceptions.
Yes, and how is that related to the issue?
On Wednesday, September 29, 2021 at 1:57:35 AM UTC-6, Dmitry A. Kazakov wrote:
Come on. There never existed Ada compiler with GC. And nobody could even
implement GC with the meaningless semantics of "collections" in the way,
killing objects at random. Either with GC or without it, there must be
no such thing as "collections."
Untrue; GNAT for JVM, and GNAT for DOTNET.
How does this follow?
Finalization *isn't* random, it happens at well-defined places.
(And, IIRC, is idempotent; meaning that multiple calls have the same
effect as a singular call.)
And I suggest doing exactly nothing as opposed to *unsafe*, costly and
meaningless behavior mandated by the standard now.
Every object in Ada has a specific declaration point,
initialization point, finalization point, and destruction point. There are
no exceptions.
Yes, and how is that related to the issue?
Because these are the places that finalization (and
deallocation/destruction) are defined to happen.
Random = unrelated to the object's life time.
>> And I suggest doing exactly nothing as opposed to *unsafe*, costly
>> and meaningless behavior mandated by the standard now.
> Because these are the places that finalization (and
> deallocation/destruction) are defined to happen. Every object in Ada
> has a specific declaration point, initialization point, finalization
> point, and destruction point. There are no exceptions.
So? How exactly does any of this imply that the place of finalization
can be other than the place of deallocation?
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj2008$1cmo$1@gioia.aioe.org...
...
> Random = unrelated to the object's life time.
All objects have to disappear before their type disappears, so the
object *cannot* live longer than the access type from which it is
allocated. It's probably a lousy idea to share pool objects (as opposed
to pool types) amongst access types. If you do have a longer-lived pool
and a shorter-lived access type, you will end up with a bunch of zombie
objects in the pool that cannot be used in any way (as any access is
erroneous). All that can happen is a memory leak. Don't do that.
> So? How exactly does any of this imply that the place of finalization
> can be other than the place of deallocation?
Deallocation is at most a convenience in Ada; it isn't even required to
do anything. OTOH, object destruction happens before the type goes
away, and finalization happens before that. That is the point here.
On 2021-09-30 02:16, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
> news:sj2008$1cmo$1@gioia.aioe.org...
> ...
>> Random = unrelated to the object's life time.
> All objects have to disappear before their type disappears, so the
> object *cannot* live longer than the access type from which it is
> allocated.
The type of the access type /= the type of the object. Only access
objects must disappear, and they do.
> It's probably a lousy idea to share pool objects (as opposed to pool
> types) amongst access types.
You need these for access discriminants.
> If you do have a longer-lived pool and a shorter-lived access type,
> you will end up with a bunch of zombie objects in the pool that
> cannot be used in any way (as any access is erroneous). All that can
> happen is a memory leak. Don't do that.
Nope, this is exactly how it works with most specialized pools, like
arenas, stacks, reference-counting pools etc.
> OTOH, object destruction happens before the type goes away, and
> finalization happens before that. That is the point here.
See above, these are different objects of different types. The actual
object type is alive and well (unless killed by some collection).
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj3r92$pla$3@gioia.aioe.org...
> On 2021-09-30 02:16, Randy Brukardt wrote:
>> All objects have to disappear before their type disappears, so the
>> object *cannot* live longer than the access type from which it is
>> allocated.
> The type of the access type /= the type of the object. Only access
> objects must disappear and they do.
?? There is nothing in a pool except unorganized memory. "Objects" only
exist outside of the pool for some access type. There has to be some
organizing type, else you would never know where/when things are
finalized.
>> It's probably a lousy idea to share pool objects (as opposed to pool
>> types) amongst access types.
> You need these for access discriminants.
Those (coextensions) are one of Ada's worst ideas; they have tremendous
overhead without any value. Almost everything has to take them into
account. Yuck. Access discriminants of existing objects are OK but
really don't add anything over a component of an access type.
>> If you do have a longer-lived pool and a shorter-lived access type,
>> you will end up with a bunch of zombie objects in the pool that
>> cannot be used in any way (as any access is erroneous). All that can
>> happen is a memory leak. Don't do that.
> Nope, this is exactly how it works with most specialized pools, like
> arenas, stacks, reference-counting pools etc.
These things don't work as pools in Ada. You need to use the subpool
mechanism to make them safe, because otherwise the objects go away
before the type (given these sorts of mechanisms generally have some
sort of block deallocation). Allocated objects can only be deallocated
from the same type as they were allocated.
Indeed, I now believe that any nested access type is evil and mainly is
useful to cause nasty cases for compilers. I'd ban them in an Ada-like
language (that would also simplify accessibility greatly).
On 2021-10-01 02:04, Randy Brukardt wrote:...
>> Nope, this is exactly how it works with most specialized pools, like
>> arenas, stacks, reference-counting pools etc.
> These things don't work as pools in Ada.
Yes, they normally have Deallocate as a void operation or raise an
exception.
> You need to use the subpool mechanism to make them safe,
I do not see how that could change anything without destroying the
whole purpose of such pools, namely nearly zero-cost allocation and
deallocation.
> because otherwise the objects go away before the type (given these
> sorts of mechanisms generally have some sort of block deallocation).
If controlled types need to be used, which rarely happens, bookkeeping
is added to finalize them. Instead of the IMO useless subpools, one
could add some allocation bookkeeping support etc.
>> Indeed, I now believe that any nested access type is evil and mainly
>> is useful to cause nasty cases for compilers. I'd ban them in an
>> Ada-like language (that would also simplify accessibility greatly).
See where collections have led you! (:-))
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj6gmg$1n1n$1@gioia.aioe.org...
> On 2021-10-01 02:04, Randy Brukardt wrote:...
>>> Nope, this is exactly how it works with most specialized pools,
>>> like arenas, stacks, reference-counting pools etc.
>> These things don't work as pools in Ada.
> Yes, they normally have Deallocate as a void operation or raise an
> exception.
No, they don't, because they don't work with controlled types, tasks,
etc.
>> You need to use the subpool mechanism to make them safe,
> I do not see how that could change anything without destroying the
> whole purpose of such pools, namely nearly zero-cost allocation and
> deallocation.
It ties any finalization to the subpool, so all of the contained
objects get finalized when the subpool is freed.
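For reference, the subpool mechanism under discussion is the Ada 2012
System.Storage_Pools.Subpools interface (RM 13.11.4). A sketch of what
an arena-style pool's spec looks like; Arena_Pool and its null-record
private part are placeholders (a real pool would keep bookkeeping
there), while the three overridden profiles are the standard ones:

```ada
with System.Storage_Elements;
with System.Storage_Pools.Subpools;

package Arena_Pools is
   use System.Storage_Pools.Subpools;

   type Arena_Pool is new Root_Storage_Pool_With_Subpools with private;

   --  Hands out a handle naming a new arena within the pool.
   overriding function Create_Subpool
     (Pool : in out Arena_Pool) return not null Subpool_Handle;

   --  Called for "new (SP) T'(...)"; bumps the arena pointer.
   overriding procedure Allocate_From_Subpool
     (Pool                     : in out Arena_Pool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : System.Storage_Elements.Storage_Count;
      Alignment                : System.Storage_Elements.Storage_Count;
      Subpool                  : not null Subpool_Handle);

   --  Reclaims the whole arena at once.
   overriding procedure Deallocate_Subpool
     (Pool    : in out Arena_Pool;
      Subpool : in out Subpool_Handle);

private
   type Arena_Pool is new Root_Storage_Pool_With_Subpools with
      null record;  -- real arena bookkeeping would go here
end Arena_Pools;
```

The safety Randy refers to comes from the allocator syntax
`Ptr := new (SP) T'(...)` together with
Ada.Unchecked_Deallocate_Subpool: freeing the subpool finalizes every
object allocated in it before the memory is reclaimed, so block
deallocation and finalization cannot get out of step.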
...
>> because otherwise the objects go away before the type (given these
>> sorts of mechanisms generally have some sort of block deallocation).
> If controlled types need to be used, which rarely happens,
> bookkeeping is added to finalize them. Instead of the IMO useless
> subpools, one could add some allocation bookkeeping support etc.
That's again not safe in any sense. You shouldn't need to worry about
whether some abstraction that you use uses finalization, especially as
you can't know if someone adds it later.
...
>> Indeed, I now believe that any nested access type is evil and mainly
>> is useful to cause nasty cases for compilers. I'd ban them in an
>> Ada-like language (that would also simplify accessibility greatly).
> See where collections have led you! (:-))
No, that's mostly because of accessibility. I'd be happy if one banned
doing any allocations with general access types (mixing global/stack
allocated objects and allocated objects is pure evil IMHO), but that
would be rather hard to enforce. Note that nested tagged types also
cause many implementation problems, adding a lot of unnecessary
overhead.
"Jere" <> wrote in message
news:6a073ced-4c3b-4e87...@googlegroups.com...
...
> We can't change the Allocate specification since it is what it is,
> but is there any consideration to adding functionality to the root
> storage pool type,
We tried that as a solution for the user-defined dereference problem,
and it ended up going nowhere. Your problem is different, but the
issues of changing the Storage_Pool spec remain. Not sure it could be
made to work (one does not want to force everyone to change their
existing storage pools).
Randy.
On 2021-10-02 11:06, Randy Brukardt wrote:...
> That's again not safe in any sense. You shouldn't need to worry about
> whether some abstraction that you use uses finalization, especially
> as you can't know if someone adds it later.
Why is compiler-assisted bookkeeping safe for subpools, but unsafe as a
stand-alone mechanism?
Sure, but again, there is a paramount use case that requires dynamic
elaboration of tagged types, i.e. relocatable libraries. You cannot ban
them; you cannot forbid tagged extensions declared in a relocatable
library. So getting rid of nested tagged types will ease nothing.
I was thinking more along the lines of adding a classwide operation on
the root storage pool type.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sj9blb$1srp$1@gioia.aioe.org...
> On 2021-10-02 11:06, Randy Brukardt wrote:...
>> That's again not safe in any sense. You shouldn't need to worry
>> about whether some abstraction that you use uses finalization,
>> especially as you can't know if someone adds it later.
> Why is compiler-assisted bookkeeping safe for subpools, but unsafe as
> a stand-alone mechanism?
There is no such stand-alone mechanism, and there cannot be one.
> Sure, but again, there is a paramount use case that requires dynamic
> elaboration of tagged types, i.e. relocatable libraries. You cannot
> ban them
I suppose, but you certainly don't have to use them. That sort of thing
is nonsense that simply makes programs more fragile than they have to
be. I just had a problem with Debian where some older programs compiled
with GNAT refused to run because an update had invalidated some
library. Had to dig out the source code and recompile.
...
> you cannot forbid tagged extensions declared in a relocatable
> library.
Of course you can. The only thing you need to be compatible with is a C
interface, which is the only thing you need to interface to existing
libraries that you can't avoid.
> So getting rid of nested tagged types will ease nothing.
The problem is tagged types not declared at the library level.
Relocatable libraries are still library level (they have their own
global address space).
On 2021-10-03 01:19, Jere wrote:...
That is 100% backward compatible.
On 2021-10-03 06:33, Randy Brukardt wrote:...
Yes, but static monolithic linking is even more fragile. Typically a
customer orders software off the shelf. It means that he says: I need,
e.g., HTTP client, ModBus master, CANopen etc. It is simply impossible
to re-link everything for each customer and run integration tests.
So the software is organized as a set of plug-in relocatable libraries,
each of them maintained, versioned and tested separately. You cannot
turn the clock 20 years back.
> Of course you can.
How? Ada does not determine the way you link an executable. If I put a
package in a library, it is there. If the package derives from a tagged
type
> The only thing you need to be compatible with is a C interface, which
> is the only thing you need to interface to existing libraries that
> you can't avoid.
That would kill most Ada libraries.
> So getting rid of nested tagged types will ease nothing.
> The problem is tagged types not declared at the library level.
> Relocatable libraries are still library level (they have their own
> global address space).
When the library is loaded dynamically, there is no way for the
executable to know the tag of the extension or to prepare an extension
of the dispatching table. I think it is far worse than a nested
declaration, where you have some information.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sjbq96$3cl$1@gioia.aioe.org...
> On 2021-10-03 06:33, Randy Brukardt wrote:...
> Yes, but static monolithic linking is even more fragile. Typically a
> customer orders software off the shelf. It means that he says: I
> need, e.g., HTTP client, ModBus master, CANopen etc. It is simply
> impossible to re-link everything for each customer and run
> integration tests.
??? When you are dynamically loading stuff, you simply are assuming
everything is OK. (Which is usually nonsense, but for the sake of
argument, assume that it is OK to do.) When you statically link, you
surely can make the same assumption.
Of course, when you statically link Ada code, you can include that code
in your static analysis (including, of course, the various guarantees
that the Ada language and compiler bring). When you dynamically load,
you can have none of that.
> So the software is organized as a set of plug-in relocatable
> libraries, each of them maintained, versioned and tested separately.
> You cannot turn the clock 20 years back.
And you still have to do integration testing when using them together
-- or you could have done the same at the Ada source code level (that
is, version, maintain, and test separately) and still have the
advantages of Ada checking.
...
>> Of course you can.
> How? Ada does not determine the way you link an executable. If I put
> a package in a library, it is there. If the package derives from a
> tagged type
This I don't understand at all. A dynamically loaded library
necessarily has a C interface (if it is generally useful; if not, it
might as well be maintained as Ada source code -- there's no advantage
to dynamic linking in that case and lots of disadvantages), and that
can't export a tagged type. In any case, a tagged type extension is a
compile-time thing -- the compiler has to know all of the details of
the type.
>> The only thing you need to be compatible with is a C interface,
>> which is the only thing you need to interface to existing libraries
>> that you can't avoid.
> That would kill most Ada libraries.
There's no use to an Ada dynamic library -- if it's only for your
organization's use, static linking is way better. And if it is for
everyone's use, it has to have a C interface, thus no tagged types.
>> So getting rid of nested tagged types will ease nothing.
>> The problem is tagged types not declared at the library level.
>> Relocatable libraries are still library level (they have their own
>> global address space).
> When the library is loaded dynamically, there is no way for the
> executable to know the tag of the extension or to prepare an
> extension of the dispatching table. I think it is far worse than a
> nested declaration, where you have some information.
Ignoring the fact that this is a useless construct, it is not at all
hard to do, because you have to know that the tag and subprograms are
declared in the dynamically loaded thing. Thus, one has to use a
wrapper to call them indirectly, but that's easy to do when everything
is library level. It's essentially the same as shared generics, which
Janus/Ada has been doing for decades -- including tagged type
derivation.
The problem comes about when you have things whose lifetime is limited
and need a static link or display to access them. Managing that is a
nightmare, no matter how you try to do it.
On 2021-10-14 03:21, Randy Brukardt wrote:
> ??? When you are dynamically loading stuff, you simply are assuming
> everything is OK.
A relocatable DLL is tested with a test application.
> (Which is usually nonsense, but for the sake of argument, assume that
> it is OK to do.) When you statically link, you surely can make the
> same assumption.
A static library is tested the same way, true, but integration of a
static library is different, and testing that is not possible without
developing some massive tool-chain, like Linux distributions had in
early days.
> Of course, when you statically link Ada code, you can include that
> code in your static analysis (including, of course, the various
> guarantees that the Ada language and compiler bring). When you
> dynamically load, you can have none of that.
Yes, but maintainability trumps everything.
> And you still have to do integration testing when using them together
> -- or you could have done the same at the Ada source code level (that
> is, version, maintain, and test separately) and still have the
> advantages of Ada checking.
Theoretically yes; in practice it is a combinatorial explosion. Dynamic
libraries flatten that. Yes, this requires normalization of plug-in
interfaces etc.
> There's no use to an Ada dynamic library -- if it's only for your
> organization's use, static linking is way better.
You compare static vs. import library. The case I am talking about is
static vs. late dynamic loading, i.e. dlopen/dlsym stuff. And, yes, we
do dlsym on entries with Ada calling conventions. No C stuff.
> Ignoring the fact that this is a useless construct, it is not at all
> hard to do, because you have to know that the tag and subprograms are
> declared in the dynamically loaded thing.
I do not see how this helps with, say, Ada.Tags.Expanded_Name getting
a tag from the library as an argument.
> Thus, one has to use a wrapper to call them indirectly, but that's
> easy to do when everything is library level. It's essentially the
> same as shared generics, which Janus/Ada has been doing for decades
> -- including tagged type derivation.
That is OK, but you still have to expand dispatching tables upon
loading the library and shrink them upon unloading (though the latter
is not supported, I guess).
> The problem comes about when you have things whose lifetime is
> limited and need a static link or display to access them. Managing
> that is a nightmare, no matter how you try to do it.
The lifetime of library objects in a dynamically loaded library is
limited by loading/unloading of the library.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:sk8mbv$15ca$1@gioia.aioe.org...
> On 2021-10-14 03:21, Randy Brukardt wrote:
>> ??? When you are dynamically loading stuff, you simply are assuming
>> everything is OK.
> A relocatable DLL is tested with a test application.
Testing cannot ensure that a contract hasn't been violated, especially
the implicit ones that get created by the runtime behavior of a
library. At best, you can test a few percent of the ways a library can
be used (and people are good at finding unanticipated ways to use a
library).
> A static library is tested the same way, true, but integration of a
> static library is different, and testing that is not possible without
> developing some massive tool-chain, like Linux distributions had in
> early days.
??? Your "test application" is just a way of running unit tests against
a library. You can surely do exactly the same testing with a
statically-linked library; it's hard to imagine how the way a library
is packaged would make any difference.
> Yes, but maintainability trumps everything.
I agree with the sentiment, but the only way to get any sort of
maintainability is with strong contracts and lots of static analysis.
Otherwise, subtle changes in a library will break the users, and there
will be no way to find where the dependency is. Nothing I've ever
worked on has ever been close to maintainable, because there is so much
that Ada cannot describe (even though Ada itself is certainly a help in
this area). You just have to re-test to make sure that no major
problems have been introduced (there's a reason that compiler writers
rerun a huge test suite every day).
> Theoretically yes; in practice it is a combinatorial explosion.
Only if you don't use unit tests. But then how you can test a dynamic
library escapes me. (I've never used unit tests with Janus/Ada because
it is too hard to set up the initial conditions for a meaningful test.
The easiest way to do that is to compile something, but of course you
no longer can do unit tests, as you have the entire rest of the system
dragged along.)
> Dynamic libraries flatten that. Yes, this requires normalization of
> plug-in interfaces etc.
As noted above, I don't see how. If testing a dynamic library is
possible, surely running the same tests against a static library would
give the same results (and assurances).
> You compare static vs. import library. The case I am talking about is
> static vs. late dynamic loading, i.e. dlopen/dlsym stuff. And, yes,
> we do dlsym on entries with Ada calling conventions. No C stuff.
That sort of stuff is just plain evil. :-)
I don't see any way that such loading could work with Ada semantics;
there is an assumption that all of your ancestors exist before you can
do anything. The elaboration checks were intended to check that.
> That is OK, but you still have to expand dispatching tables upon
> loading the library and shrink them upon unloading (though the latter
> is not supported, I guess).
??? The dispatching tables are defined statically by the compiler, and
never change. What I'd do for dynamically loaded libraries is use a
wrapper that indirectly calls the dynamically loaded library's
subprograms. So loading the library (actually, declaring the extension)
simply has to set up an array of pointers to the dynamically loaded
subprograms. (You can't call them statically because you don't know
where they'll be.) The dispatch tables never change.
> The lifetime of library objects in a dynamically loaded library is
> limited by loading/unloading of the library.
They're still treated as library-level,
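The wrapper idea Randy describes can be sketched roughly as follows.
All names here are hypothetical and this is not Janus/Ada's actual
scheme: the statically compiled wrapper extension has a fixed dispatch
table, and its overriding just calls through a pointer that the loader
fills in after dlsym, so the dispatch tables themselves never change.

```ada
with System;

package Plugin_Wrapper is

   --  The root type known to the static program.
   type T is tagged null record;
   procedure Op (X : in out T) is null;

   --  Pointer to the dynamically loaded implementation, filled in by
   --  the loader; the name and profile are invented.
   type Op_Ptr is access procedure (Data : System.Address);
   Loaded_Op : Op_Ptr := null;

   --  The wrapper extension is compiled statically, so its dispatch
   --  table is fixed; only the indirection target changes at load
   --  time.
   type Wrapper is new T with record
      Foreign : System.Address := System.Null_Address;
   end record;
   overriding procedure Op (X : in out Wrapper);

end Plugin_Wrapper;

package body Plugin_Wrapper is

   overriding procedure Op (X : in out Wrapper) is
   begin
      if Loaded_Op /= null then
         Loaded_Op (X.Foreign);  -- indirect call into the library
      end if;
   end Op;

end Plugin_Wrapper;
```

(In a real project the spec and body would of course be separate
compilation units.)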
"Randy Brukardt" <randy@rrsoftware.com> writes:
>> That is OK, but you still have to expand dispatching tables upon
>> loading the library and shrink them upon unloading (though the
>> latter is not supported, I guess).
> ??? The dispatching tables are defined statically by the compiler,
> and never change.
It would be nice if different variants of a dynamically loaded library
could introduce different derived types; that would support a "plugin"
model nicely. For example, suppose an editor defines a library
interface for computing indent for various languages. Then one variant
could provide Ada, another Pascal, etc. Each could be a derived type. I
think you are saying this is simply not possible with Ada tagged types.
On 2021-10-15 02:36, Randy Brukardt wrote:
>> A static library is tested the same way, true, but integration of a
>> static library is different, and testing that is not possible
>> without developing some massive tool-chain, like Linux distributions
>> had in early days.
> ??? Your "test application" is just a way of running unit tests
> against a library. You can surely do exactly the same testing with a
> statically-linked library; it's hard to imagine how the way a library
> is packaged would make any difference.
It is linking static stuff alternately that needs to be tested. If you
use a subset of statically linked components, you need to change the
code that uses them correspondingly, unless you do an equivalent of
dynamically loaded components without any of the advantages.
As an example, consider switching between GNUTLS and OpenSSL for
encryption of, say, MQTT connections.
> Otherwise, subtle changes in a library will break the users, and
> there will be no way to find where the dependency is. Nothing I've
> ever worked on has ever been close to maintainable, because there is
> so much that Ada cannot describe (even though Ada itself is certainly
> a help in this area). You just have to re-test to make sure that no
> major problems have been introduced (there's a reason that compiler
> writers rerun a huge test suite every day).
Yes, but it is not economically viable anymore. Nobody would pay for
that.
> Only if you don't use unit tests. But then how you can test a dynamic
> library escapes me. (I've never used unit tests with Janus/Ada
> because it is too hard to set up the initial conditions for a
> meaningful test. The easiest way to do that is to compile something,
> but of course you no longer can do unit tests, as you have the entire
> rest of the system dragged along.)
We test Ada packages statically linked, and we have
semi-unit/semi-integration tests that load the library first. It is not
a big deal.
>> Dynamic libraries flatten that. Yes, this requires normalization of
>> plug-in interfaces etc.
> As noted above, I don't see how. If testing a dynamic library is
> possible, surely running the same tests against a static library
> would give the same results (and assurances).
Only if you create some equivalent of a "static" plug-in, with all the
disadvantages of a proper plug-in and none of the advantages.
>>> There's no use to an Ada dynamic library -- if it's only for your
>>> organization's use, static linking is way better.
>> You compare static vs. import library. The case I am talking about
>> is static vs. late dynamic loading, i.e. dlopen/dlsym stuff. And,
>> yes, we do dlsym on entries with Ada calling conventions. No C
>> stuff.
> That sort of stuff is just plain evil. :-)
Yes! (:-))
> I don't see any way that such loading could work with Ada semantics;
> there is an assumption that all of your ancestors exist before you
> can do anything. The elaboration checks were intended to check that.
We have core libraries which are import libraries for the plug-in. When
a plug-in is loaded, the core libraries are elaborated unless already
loaded; the plug-in library itself is not elaborated, because automatic
elaboration would deadlock under Windows. Then a dedicated entry point
is called in the plug-in library. The first thing it does is a call to
the plug-in elaboration code. GNAT generates a <library-name>init entry
for that. After this, the plug-in registers itself, providing a tagged
object whose primitive operations are basically the library's true
interface.
I know it sounds horrific, but it works pretty well.
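A rough sketch of the loading sequence described above, binding
directly to the POSIX dlopen/dlsym functions. The library name
"libplugin.so", the entry name "plugininit", and the RTLD_NOW value are
assumptions for illustration (the GNAT-generated elaboration entry is
named after the library, as noted above):

```ada
with Ada.Unchecked_Conversion;
with Interfaces.C;
with Interfaces.C.Strings;
with System;

procedure Load_Plugin is
   use Interfaces.C;
   use Interfaces.C.Strings;
   use type System.Address;

   RTLD_NOW : constant int := 2;  -- Linux value; platform-dependent

   function dlopen (File : chars_ptr; Mode : int) return System.Address
     with Import, Convention => C, External_Name => "dlopen";

   function dlsym (Handle : System.Address; Name : chars_ptr)
     return System.Address
     with Import, Convention => C, External_Name => "dlsym";

   type Init_Proc is access procedure with Convention => C;
   function To_Init is
     new Ada.Unchecked_Conversion (System.Address, Init_Proc);

   Handle, Init : System.Address;
begin
   Handle := dlopen (New_String ("libplugin.so"), RTLD_NOW);
   if Handle = System.Null_Address then
      raise Program_Error with "cannot load plug-in";
   end if;

   --  Call the compiler-generated elaboration entry before touching
   --  anything else in the library.
   Init := dlsym (Handle, New_String ("plugininit"));
   if Init /= System.Null_Address then
      To_Init (Init).all;
   end if;

   --  Next, the plug-in's registration entry point would be looked up
   --  and called the same way.
end Load_Plugin;
```

(A production version would free the New_String buffers and check
dlerror; this sketch omits that.)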
> ??? The dispatching tables are defined statically by the compiler,
> and never change. What I'd do for dynamically loaded libraries is use
> a wrapper that indirectly calls the dynamically loaded library's
> subprograms. So loading the library (actually, declaring the
> extension) simply has to set up an array of pointers to the
> dynamically loaded subprograms. (You can't call them statically
> because you don't know where they'll be.) The dispatch tables never
> change.
And how do you dispatch? Consider the case:

The core library:

   package A is
      type T is tagged ...;
      procedure Foo (X : in out T);
      procedure Trill_Me (X : in out T'Class);
   end A;

   package body A is
      procedure Trill_Me (X : in out T'Class) is
      begin
         X.Foo; -- Dispatches to Foo overridden in a loadable library
      end Trill_Me;
   end A;

Inside the loadable library:

   type S is new T with ...;
   overriding procedure Foo (X : in out S);
   ...
   X : S;
   ...
   Trill_Me (X);

Do you keep a pointer to the dispatching table inside the object, like
C++ does? Because I had a more general model in mind, where dispatching
tables were attached to the primitive operations rather than to
objects.
>> The problem comes about when you have things whose lifetime is
>> limited and need a static link or display to access them. Managing
>> that is a nightmare, no matter how you try to do it.
> The lifetime of library objects in a dynamically loaded library is
> limited by loading/unloading of the library.
> They're still treated as library-level,
Right, and this is the problem, because semantically anything inside a
dynamically loaded library is not just nested; worse, it is more like
new/Unchecked_Deallocation, but with things like types etc.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:skbdb5$u50$1@gioia.aioe.org...
On 2021-10-15 02:36, Randy Brukardt wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
In the case you describe, you'd have a binding that abstracts the
two underlying libraries, and you'd unit test that. Assuming it
passes with both implementations, it shouldn't matter which is used
in a particular program.
Yes, but it is not economically viable anymore. Nobody would pay
for that.
Has software gotten so
bad that no one cares about that?
I certainly would never do that to
our customers (and I hate support anyway, I want to reduce it as
much as possible).
There's no plug-ins in a statically linked system. What would be the
point? I'm assuming that we're talking about Ada interfaced
libraries here (C is a different kettle of fish), so you're talking
about switching implementations of a single Ada spec. We've done that
going back to the beginning of Ada time; it's managed by decent build
tools and has gotten pretty simple in most Ada compilers. So what
would a plug-in buy?
I don't see any way that such loading could work with Ada
semantics; there is an assumption that all of your ancestors
exist before you can do anything. The elaboration checks were
intended to check that.
We have core libraries which are import libraries for the plug-in. When a plug-in is loaded, the core libraries are elaborated unless already loaded; the plug-in library itself is not elaborated, because automatic elaboration would deadlock under Windows. Then a dedicated entry point is called in the plug-in library. The first thing it does is call the plug-in elaboration code. GNAT generates an <library-name>init entry for that. After this the plug-in registers itself, providing a tagged object whose primitive operations are basically the library's true interface.
I know it sounds horrific, but it works pretty well.
It does sound horrific, and it doesn't seem to buy much.
A tag is a property of a type in Ada, and it includes the dispatch
table.
You can't unload a library until all of the things that depend upon
it have been unloaded, so from the perspective of a compiler, it acts
like library-level. The whole mess about loading/unloading is on the
user (which is what I meant about "unsafe" yesterday), and if you get
it wrong, your program is erroneous and can do any manner of things.
It's no more worth it for a compiler to worry about bad unloading
than it is to worry about dangling pointers.
Why do you think people keep on asking for Ada preprocessor in c.l.a?
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:
Why do you think people keep on asking for Ada preprocessor in c.l.a?
Certainly not something I've noticed
On 2021-10-16 16:32, Simon Wright wrote:
"Dmitry A. Kazakov" <mai...> writes:
Why do you think people keep on asking for Ada preprocessor in c.l.a?
Certainly not something I've noticed
It comes up periodically. People falsely believe that conditional compilation could allow static linking for dynamically configured projects.