aarm_202x_ch04.adb:491:40: error: container aggregate must use [], not ()
Blady <p.p11@orange.fr> writes:
aarm_202x_ch04.adb:491:40: error: container aggregate must use [], not ()
If you look at ARM202x 4.3.5, you'll see that *container* aggregates
must use []. I'm sure there was a whole lot of argument about this in the
ARG!
On 19/06/2022 at 16:15, Simon Wright wrote:
Blady <p.p11@orange.fr> writes:
aarm_202x_ch04.adb:491:40: error: container aggregate must use [], not ()
If you look at ARM202x 4.3.5, you'll see that *container* aggregates
must use []. I'm sure there was a whole lot of argument about this in the
ARG!
Yes, I was aware of that, but I wanted to give Empty_Map its real value
in terms of the full type definition.
My understanding is:
you declare the aggregate aspect as:
   type Map_Type is private
     with Aggregate => (Empty     => Empty_Map,
                        Add_Named => Add_To_Map);
thus:
   MM : Map_Type;
   ...
   MM := [];                         -- the compiler uses Empty_Map
   MM := [1 => "toto", 4 => "titi"]; -- the compiler uses Add_To_Map
now if I declare:
   Empty_Map : constant Map_Type := [];
then it could be an infinite recursive call, couldn't it?
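For what it's worth, here is a minimal sketch (hypothetical package name, reusing the RM example's identifiers) that completes Map_Type with a record instead of an array, which sidesteps the ambiguity discussed below. An aggregate such as [1 => "toto", 4 => "titi"] is built by the compiler from the Empty value plus one Add_To_Map call per association, and a deferred Empty_Map completed in the private part with a record aggregate does not recurse:

   package Map_Sketch is
      type Map_Type is private
        with Aggregate => (Empty     => Empty_Map,
                           Add_Named => Add_To_Map);

      procedure Add_To_Map (M : in out Map_Type; Key : in Integer;
                            Value : in String);

      Empty_Map : constant Map_Type;

   private
      type Value_Array is array (1 .. 10) of String (1 .. 10);
      type Map_Type is record   -- a record, so () stays a record aggregate
         Values : Value_Array;
      end record;
      Empty_Map : constant Map_Type :=
        (Values => (others => (others => ' ')));
   end Map_Sketch;

   package body Map_Sketch is
      procedure Add_To_Map (M : in out Map_Type; Key : in Integer;
                            Value : in String) is
         Slot : String (1 .. 10) := (others => ' ');
         Len  : constant Natural := Natural'Min (Value'Length, Slot'Length);
      begin
         --  Pad or truncate Value into the fixed-length slot for Key;
         --  a Key outside 1 .. 10 raises Constraint_Error.
         Slot (1 .. Len) := Value (Value'First .. Value'First + Len - 1);
         M.Values (Key) := Slot;
      end Add_To_Map;
   end Map_Sketch;

With that, MM := [1 => "toto", 4 => "titi"]; compiles: the compiler builds the value from Empty_Map and two Add_To_Map calls, then assigns it to MM.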
Hello,
Following the example section of RM Ada 2022 § 4.3.5 Container Aggregate,
I want to try map aggregates:
453.  type Map_Type is private
454.    with Aggregate => (Empty     => Empty_Map,
455.                       Add_Named => Add_To_Map);
456.
457.  procedure Add_To_Map (M : in out Map_Type; Key : in Integer;
                            Value : in String);
458.
459.  Empty_Map : constant Map_Type;
...   -- End of example code
482.  private
...
488.  type Map_Type is array (1 .. 10) of String (1 .. 10);
The reason for this is obvious in your question: it is ambiguous if an aggregate is an array aggregate or a container aggregate wherever the full type is visible, and that is not worth making work (any choice would be a surprise in some contexts).
This code leads to an internal compiler error with gnatmake 12.1.0:
On 2022-06-20 23:47, Randy Brukardt wrote:
The reason for this is obvious in your question: it is ambiguous if an
aggregate is an array aggregate or a container aggregate wherever the full
type is visible, and that is not worth making work (any choice would be a
surprise in some contexts).
I don't agree that making the language regular is not worth the work.
The choice is obviously inconsistent with handling both existing cases:
1. Built-in operations -> hiding:
type T is private;
function "+" (Left, Right : T) return T; -- Perfectly legal
private
type T is range 1..100;
2. Primitive operations -> overriding.
What's good for the goose is good for the gander? Nope. Here is a totally
new way of handling an operation!
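An aside on case 1: since the explicit "+" overrides the predefined one everywhere the full view is visible (Randy makes the same point below), the body of "+" cannot simply write Left + Right without calling itself. A minimal sketch, assuming the declarations above live in a hypothetical package P, of the usual way to reach the hidden predefined operation:

   package body P is
      function "+" (Left, Right : T) return T is
         --  A locally declared twin of the full type still has its own
         --  predefined "+", which is not overridden here.
         type Inner is range 1 .. 100;
      begin
         return T (Inner (Left) + Inner (Right));
      end "+";
   end P;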
On Tuesday, 21 June 2022 at 00:10:52 UTC+2, Jesper Quorning wrote:
This code leads to an internal compiler error with gnatmake 12.1.0:
This is the bug report:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106037
The crash happens in a different place than the #106031 report by Blady.
I ran into a third ICE with erroneous memory access while playing around
with an access type.
/Jesper
1. Built-in operations -> hiding:
type T is private;
function "+" (Left, Right : T) return T; -- Perfectly legal
private
type T is range 1..100;
2. Primitive operations -> overriding.
This case is not worth the effort, IMHO. (Of course, it is in the language now, so we're stuck with it.) If I was running the circus, private types could only be completed with a record type.
The problem here is illustrated by the OP, who seemed to expect to get the container aggregate when the full view is visible. We looked at making the container aggregate invisible and allowing the array aggregate in the full view, but it would be something new (the contents of aggregates don't depend on visibility in Ada 2012), and it seems useless (see my answer to [1]).
That is, the OP's construct is primarily useful in small examples; it's not a real-world thing you would want to do. (There ALWAYS is some other data that you need to go along with an array: a length, a validity flag, etc.) So why make implementers spend a lot of effort implementing it??
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:t8qrmo$p79$1@gioia.aioe.org...
...
1. Built-in operations -> hiding:
type T is private;
function "+" (Left, Right : T) return T; -- Perfectly legal
private
type T is range 1..100;
2. Primitive operations -> overriding.
BTW, these two are really the same thing. In the first example, the "+" is overriding the predefined operation.
But note an important difference here from the aggregate case: in no case is an operation available for the private type that is *not* available for the full view. Since the syntax and semantics of container aggregates and array aggregates are subtly different (they are as close as we could make them, but that is not that close), there definitely are things that would be only possible when written for the private view. That would be new for Ada. So while a definition could be made, it would be confusing in some cases. And this case isn't useful enough to make that effort.
On 2022-06-22 01:28, Randy Brukardt wrote:...
Well, I can give a useful example straight away. A string has two array interfaces, the encoding and the character view. The former must be a built-in array.
On 2022-06-22 01:39, Randy Brukardt wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:t8qrmo$p79$1@gioia.aioe.org...
...
1. Built-in operations -> hiding:
type T is private;
function "+" (Left, Right : T) return T; -- Perfectly legal
private
type T is range 1..100;
2. Primitive operations -> overriding.
BTW, these two are really the same thing. In the first example, the "+" is
overriding the predefined operation.
Yes, they should be in a better world. A primitive operation is always reachable. The case above works differently from:
type T is private;
overriding
function "+" (Left, Right : T) return T is abstract;
But this is a deeper problem of having such operations primitive.
But note an important difference here from the aggregate case: in no case
is an operation available for the private type that is *not* available for
the full view. Since the syntax and semantics of container aggregates and
array aggregates are subtly different (they are as close as we could make
them, but that is not that close), there definitely are things that would
be only possible when written for the private view. That would be new for
Ada. So while a definition could be made, it would be confusing in some
cases. And this case isn't useful enough to make that effort.
Much could be resolved by attributing proper types to all assumed or real intermediate steps:
type T is private;
function "+" (Left, Right : T) return T;
private
type T_Parent is range 1..100;
type T is new T_Parent;
Don't think this changes anything (at least not in Ada as it stands), since that is essentially the meaning of "range 1 .. 100":
type Some_Int is range 1 .. 100;
means
type Some_Int is new <Some_Int_Chosen_by_the_Impl> range 1 .. 100;
In any case having an explicit array interface does not preclude built-in arrays.
On 2022-06-23 03:06, Randy Brukardt wrote:...
(How something gets implemented should not be part of a language
design, so long as the design does not prevent an efficient
implementation.)
I certainly would not treat them as special in any way, just a series of
function calls. (Possibly records could be treated that way as well,
although it is less clear that an efficient implementation is possible for
them.)
Syntactic sugar for subprogram calls is not enough because it does not allow generic programming. One should be able to write a program that deals with any instance of the interface, like a generic body working with any actual array, or a class-wide body, which unfortunately is impossible to have for arrays presently.
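The formal-array half of that already exists; a minimal sketch (hypothetical unit name) of one generic body that sums any actual one-dimensional signed-integer array:

   generic
      type Index is (<>);
      type Element is range <>;
      type Vector is array (Index range <>) of Element;
   function Generic_Sum (V : Vector) return Element'Base;

   function Generic_Sum (V : Vector) return Element'Base is
      Result : Element'Base := 0;
   begin
      for I in V'Range loop
         Result := Result + V (I);
      end loop;
      return Result;
   end Generic_Sum;

The class-wide half, one body working with any type that merely behaves like an array, is the part that has no counterpart for built-in arrays.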
On 23.06.22 11:32, Dmitry A. Kazakov wrote:
In any case having an explicit array interface does not preclude built-in
arrays.
Indeed, if there were no built-in arrays, how would one
- specify aspects like "contiguous memory",
- permit implementations to create efficient addressing for arrays' storage units,
- map to hardware?
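For concreteness, a small sketch of the kind of representation control those questions point at, expressed with aspects on a built-in array type (the device address is made up):

   with System.Storage_Elements;
   package Hardware_Sketch is
      type Byte is mod 2 ** 8;
      type Buffer is array (0 .. 63) of Byte
        with Component_Size => 8;   -- contiguous, one byte per component
      Regs : Buffer
        with Import, Volatile,
             Address => System.Storage_Elements.To_Address (16#4000_0000#);
      --  hypothetical memory-mapped device registers
   end Hardware_Sketch;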
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:t91buq$10im$1@gioia.aioe.org...
On 2022-06-23 03:06, Randy Brukardt wrote:...
(How something gets implemented should not be part of a language
design, so long as the design does not prevent an efficient
implementation.)
I certainly would not treat them as special in any way, just a series of
function calls. (Possibly records could be treated that way as well,
although it is less clear that an efficient implementation is possible for
them.)
Syntactic sugar for subprogram calls is not enough because it does not allow
generic programming. One should be able to write a program that deals with
any instance of the interface, like a generic body working with any actual
array, or a class-wide body, which unfortunately is impossible to have for
arrays presently.
You're thinking too small. Obviously, in a language without a syntactic
array construct, every data structure would be some sort of record. So
class-wide operations would be available for all of those -- and without all
of the complications of a separate formal array type. The idea is to have
one mechanism for pretty much everything, and let the compiler sort out the
results. Back when we created Janus/Ada, that wasn't really practical
because of memory and CPU speed constraints, but none of that holds true
anymore. Simplify the language, complicate the compiler!
On 2022-06-24 03:24, Randy Brukardt wrote:
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:t91buq$10im$1@gioia.aioe.org...
On 2022-06-23 03:06, Randy Brukardt wrote:...
(How something gets implemented should not be part of a language
design, so long as the design does not prevent an efficient
implementation.)
I certainly would not treat them as special in any way, just a series of
function calls. (Possibly records could be treated that way as well,
although it is less clear that an efficient implementation is possible for
them.)
Syntactic sugar for subprogram calls is not enough because it does not allow
generic programming. One should be able to write a program that deals with
any instance of the interface, like a generic body working with any actual
array, or a class-wide body, which unfortunately is impossible to have for
arrays presently.
You're thinking too small. Obviously, in a language without a syntactic
array construct, every data structure would be some sort of record.
They are fundamentally different. A record interface is a static mapping:
   identifier -> value
A 1D array interface is a dynamic mapping:
   ordered value -> value
It not only has run-time semantics (indexing); it also has an ordering of
the index, which implies enumeration, ranges, and slices.
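A rough sketch of that second mapping as an Ada interface (hypothetical names), i.e. the indexing operations plus the index ordering that a record interface does not have; a class-wide body could then be written once against Sequence'Class:

   generic
      type Index is (<>);   -- ordered: 'Succ, 'Pred, 'First, 'Last come for free
      type Element is private;
   package Sequence_Interface is
      type Sequence is interface;
      function  Get   (S : Sequence; I : Index) return Element is abstract;
      procedure Set   (S : in out Sequence; I : Index; E : Element) is abstract;
      function  First (S : Sequence) return Index is abstract;
      function  Last  (S : Sequence) return Index is abstract;
   end Sequence_Interface;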
So class-wide operations would be available for all of those -- and without
all of the complications of a separate formal array type. The idea is to
have one mechanism for pretty much everything, and let the compiler sort out
the results. Back when we created Janus/Ada, that wasn't really practical
because of memory and CPU speed constraints, but none of that holds true
anymore. Simplify the language, complicate the compiler!
I don't buy the idea of a run-time penalty for having abstract data types,
and I don't see why built-in arrays cannot coexist with user-defined ones
without turning the language into LISP.
Furthermore, the age of free CPU cycles has come to an end. Soon we will
have to return to sanity.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:t93mr6$pvm$1@gioia.aioe.org...
On 2022-06-24 03:24, Randy Brukardt wrote:
You're thinking too small. Obviously, in a language without a syntactic
array construct, every data structure would be some sort of record.
They are fundamentally different. A record interface is a static mapping:
   identifier -> value
A 1D array interface is a dynamic mapping:
   ordered value -> value
It not only has run-time semantics (indexing); it also has an ordering of
the index, which implies enumeration, ranges, and slices.
Dynamic means a function. And there is no reason to treat a few functions as special (again, at the user level).
I don't buy the idea of run-time penalty for having abstract data types
and I don't see why built-in arrays cannot coexist with user-defined ones
without turning the language into LISP.
They add a huge amount of complication for very little gain.
Furthermore, the age of free CPU cycles has come to an end. Soon we will
have to return to sanity.
I don't think programming abstractly and translating that into good code
will ever be a bad idea. After all, that is the idea behind Ada. If you
truly want to worry about the cost of compilation, then you have to program
in a very close-to-the-metal language, even lower level than C. And current
machines are way harder to generate code for than the Z-80 that we started
out on (and even then, we generated pretty bad code with the very tiny
compiler).
I'd rather plan for a future where the compiler tool set does a lot of correctness checking for one's programs;
On 2022-06-25 05:13, Randy Brukardt wrote:...
I don't think programming abstractly and translating that into good code
will ever be a bad idea. After all, that is the idea behind Ada. If you
truly want to worry about the cost of compilation, then you have to program
in a very close-to-the-metal language, even lower level than C. And current
machines are way harder to generate code for than the Z-80 that we started
out on (and even then, we generated pretty bad code with the very tiny
compiler).
No, I worry about the cost of execution. You want to simplify the compiler
at the expense of program complexity and the efficiency of its code.
I'd rather plan for a future where the compiler tool set does a lot of
correctness checking for one's programs;
Yes, and correctness checking requires proper and very refined abstractions, which you are ready to throw away. There is a contradiction here. In a language like Forth there is basically nothing to check.
Slices are not an abstraction. They are a way to describe a particular kind of processor operation in a vaguely abstract way.
The distributed overhead that they cause is immense (for instance, you
can't have a discontiguous array representation with slices, unless you are
willing to pay a substantial cost for *every* array parameter). They're the
anti-abstraction feature.
On 2022-06-28 0:37, Randy Brukardt wrote:
Slices are not an abstraction. They are a way to describe a particular kind
of processor operation in a vaguely abstract way.
The abstraction is the concept of a one-dimensional array (a vector or a
sequence).
Why would you want to have a non-contiguous representation for one-dimensional arrays? Perhaps to make "unbounded" (extensible) arrays?
On 2022-06-27 23:37, Randy Brukardt wrote:
The distributed overhead that they cause is immense (for instance, you
can't have a discontiguous array representation with slices, unless you are
willing to pay a substantial cost for *every* array parameter). They're the
anti-abstraction feature.
There is nothing wrong with having non-contiguous slices of non-contiguous arrays! Contiguity of index does not automatically imply contiguity of element allocation, unless specifically required by the [sub]type
constraint.
The 1D array abstraction is a mapping index -> element where:
1. The index has an order, with the operations 'Succ and 'Pred;
2. The mapping is convex: if there are elements for two indices, then
there are elements for all indices between them.
Contiguity is a representation constraint.
One should be able to have an array equivalent of an unbounded string, with
unbounded slices, both allocated non-contiguously. The slices could be
shrunk or expanded:
   Text (45..80) := ""; -- Cut a piece off
At the same time, one should have a contiguous subtype of the same unbounded
string for interfacing purposes, handled by copy-out/copy-in.
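For comparison, the closest thing today is Ada.Strings.Unbounded, where the cut has to be spelled as a Delete call rather than a slice assignment, and a contiguous view means copying out via To_String. A minimal sketch:

   with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
   procedure Cut_Sketch is
      Text : Unbounded_String := 100 * 'x';   -- any text at least 80 characters long
   begin
      Delete (Text, From => 45, Through => 80);  -- the effect of Text (45..80) := ""
      --  Length (Text) is now 64; the storage is managed for us, but there is
      --  no slice syntax, and interfacing needs a copy: To_String (Text).
   end Cut_Sketch;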
--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
What you say here is precisely how they should work. But you would make the
cost of supporting that necessary on every array object, because one could
not know, when an array is passed as a parameter, what sort of representation
it has. So one has to use dispatching helper operations to implement all
assignments. And that's way more expensive than a traditional array
implementation.
If you didn't have slices that could be passed as array parameters, then
that problem would not exist. And I don't think slices are sufficiently
worthwhile to force expensive, unoptimizable code.
* Maps are usually constrained. It does not make sense to concatenate,
sort, slice, or slide a map. The abstraction of a map includes
non-discrete key subtypes, so arrays used as maps are a special case.
A language that provided direct support for these abstractions should
not need to provide arrays.
But there isn't (or shouldn't be) anything special about a one-dimensional
array (presuming you intend to allow arrays with more dimensions). And the
"abstraction" you talk about is selecting a bunch of barely related elements
from a multi-dimensional array.
On 2022-06-29 06:01, Randy Brukardt wrote:
But there isn't (or shouldn't be) anything special about a one-dimensional
array (presuming you intend to allow arrays with more dimensions). And the
"abstraction" you talk about is selecting a bunch of barely related elements
from a multi-dimensional array.
Arrays are usually used to implement maps, (mathematical) matrices and
vectors, or sequences. Each usage tends to have unique features:
* Maps are usually constrained. It does not make sense to concatenate,
sort, slice, or slide a map.
* Matrices have component types that behave like numbers. The
mathematical definition of matrices includes integer indices with a
lower bound of 1. Vectors are usually considered to be matrices of one
column ("column vector") which can be transposed to obtain matrices of
one row ("row vector").
* Sequences are usually unconstrained. Typical discussions of sequences
outside of programming use integer values to indicate positions, using
terms such as the first thing in a sequence, the second thing, ..., so
indices should be of an integer type with a lower bound of 1. It
sometimes makes sense to concatenate, sort, slice, or slide sequences.
A language that provided direct support for these abstractions should
not need to provide arrays.
On 2022-06-29 11:30, Jeffrey R. Carter wrote:
* Maps are usually constrained. It does not make sense to concatenate,
sort, slice, or slide a map.
In mathematics, maps (functions) are often sliced, in other words restricted to
a subset of their full domain. They are also often concatenated, in the sense of
combining functions defined on separate domain sets into a combined function defined on the union of those separate domain sets. Those operations would be useful in programs too.
The essential aspect of maps and map operations is that there is no "sliding" that changes the relationship of domain values (keys) with range values.
That said, it very often makes sense to provide sorted access to the elements of
a map, sorted by some criterion, while maintaining the relationship of keys and
their mapped values. That might be seen as sorting the map into a sequence (as
described below).
Perhaps I could have been clearer. I mean that it doesn't make sense to concatenate, sort, slice, or slide an array used as a map. For example, if
we represent a map Character => Natural as
type Char_Count_Map is array (Character) of Natural;
Map : Char_Count_Map;
and want a map with the domain restricted to the ASCII letters, both
capital and small,
Map ('A' .. 'Z') & Map ('a' .. 'z')
doesn't do what we want.
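A hedged sketch (hypothetical procedure, reusing Jeff's Char_Count_Map) of what the restriction actually requires with an array-as-map: each count has to stay at its original key, so it is an element-wise copy rather than a concatenation:

   procedure Restrict_To_Letters is
      type Char_Count_Map is array (Character) of Natural;
      Full    : Char_Count_Map := (others => 0);
      Letters : Char_Count_Map := (others => 0);
   begin
      Full ('A') := 3;
      Full ('a') := 5;
      for C in Character range 'A' .. 'Z' loop
         Letters (C) := Full (C);
      end loop;
      for C in Character range 'a' .. 'z' loop
         Letters (C) := Full (C);
      end loop;
      --  Letters ('A') = 3 and Letters ('a') = 5: the counts are still keyed
      --  by the same characters.  The slice concatenation above would instead
      --  produce 52 values whose indices no longer correspond to the original
      --  characters, which is why it "doesn't do what we want".
   end Restrict_To_Letters;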
You are talking about the abstraction of a map in general, and the
operations you describe are different from the array operations I was
talking about.
Except if someone invents new abstractions and needs a "raw memory" type
to implement them.
A language that provided direct support for these abstractions should not
need to provide arrays.
Which is of course impossible considering the variety of maps (e.g. a graph
is a map, etc.), all of them problem-space specific.
On 2022-06-29 06:07, Randy Brukardt wrote:
What you say here is precisely how they should work. But you would make the
cost of supporting that necessary on every array object, because one could
not know, when an array is passed as a parameter, what sort of representation
it has. So one has to use dispatching helper operations to implement all
assignments. And that's way more expensive than a traditional array
implementation.
It is already there because Ada 83 arrays have definite and indefinite representations. Adding a user-defined representation on top is nothing.
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message news:t9h4i9$118a$1@gioia.aioe.org...
...
A language that provided direct support for these abstractions should not
need to provide arrays.
Which is of course impossible considering the variety of maps (e.g. a graph
is a map, etc.), all of them problem-space specific.
Nobody uses arrays to implement graphs anyway. That's something you do when
you are using a language like Fortran 66 that doesn't have any abstractions.
My design for a post-Ada language has a "Fixed_Vector" container for the purposes of interfacing; it supports setting component sizes so it should match any sort of interface. But most abstractions should be built on top of some sort of bounded or unbounded container. The implementation would spend much of its effort optimizing those basic containers rather than worrying about making arrays fast.
You said something about slices of matrices being a common operation. And I agree with that, and it is one that Ada cannot support. It would be better if slices were implemented as a form of function, so that they can be used when they make sense (and only then). No reason to build in such things. (My post-Ada language design includes variable-returning functions, so that sort of need can be accommodated.)
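Ada's own Ada.Strings.Unbounded already treats slices as subprograms rather than syntax, which gives a feel for the function-based approach (this is just the existing library, not Randy's design):

   with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
   procedure Slice_As_Function is
      S : Unbounded_String := To_Unbounded_String ("hello world");
      --  Reading a slice is a plain function call.
      Word : constant String := Slice (S, Low => 7, High => 11);   -- "world"
   begin
      --  Writing a slice is a procedure call instead of an assignment;
      --  presumably what a variable-returning function would let look
      --  like an assignment.
      Replace_Slice (S, Low => 1, High => 5, By => Word);
      --  S is now "world world".
   end Slice_As_Function;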
My thinking along these lines I call King and have described informally at
https://github.com/jrcarter/King. Taft's seems to be Parasail (and now
Paradiso). Guest has orenda at https://github.com/Lucretia/orenda. Do you
have any sort of description or specification of your "post-Ada language"?
I would be interested in seeing that or learning more about it.
--
Jeff Carter
Jeff, great document (and great name:-) My impression is that it is maybe too much Adaist... except that modules don't have state?! That sounds very limiting to me.
"... It would be better if slices were implemented as a form of function,
so that they can be used when they make sense... (My post-Ada language
design includes variable-returning functions, so that sort of need can be accomadated.)"
(Randy)
What is a "variable-returning function"?
Is there available material on this post-Ada language design of yours?