• Mathematics of arrays

    From Arjen Markus@21:1/5 to All on Thu Aug 12 01:38:37 2021
    Some weeks ago a group was created on Slack to discuss the possibilities of "mathematics of arrays" (MoA) within the context of Fortran. The idea of this methodology is to express algorithms in such a way that they are independent of the number of dimensions, which will make it easier for the compiler (both the MoA component and the regular compiler) to optimise the program. As the theory is fairly abstract, the initiator of this group, Lenore Mullin, is preparing an introduction to the theory and its applications.

    If you are interested, let us know or join the Slack channel, "MoA Global Team".

    To get a bit better acquainted with the idea, here are a few references: https://docs.google.com/presentation/u/0/d/1g4TQB57NbmueIcl4icnkqIlVU5xSnB6Fkr-9IW8MzTA/mobilepresent
    https://arxiv.org/abs/1904.02612 https://downloads.hindawi.com/journals/sp/2013/672424.pdf

    Regards,

    Arjen

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From gah4@21:1/5 to arjen.m...@gmail.com on Thu Aug 12 04:20:36 2021
    On Thursday, August 12, 2021 at 1:38:39 AM UTC-7, arjen.m...@gmail.com wrote:
    Some weeks ago a group was created on Slack that is discussing the possibilities of
    "mathematics of arrays" (MoA) within the context of Fortran.

    Pretty neat. Reminds me that, back in the Fortran 66 days, I had matrix routines that would accept diagonal, triangular, or square matrices and process them accordingly.

    Also, near the end of the 672424.pdf paper there is a disclaimer regarding Intel compilers
    and non-Intel processors, indicating that some optimizations are not done for non-Intel processors.

  • From Robin Vowels@21:1/5 to arjen.m...@gmail.com on Fri Aug 13 20:56:08 2021
    On Thursday, August 12, 2021 at 6:38:39 PM UTC+10, arjen.m...@gmail.com wrote:
    Some weeks ago a group was created on Slack to discuss the possibilities of "mathematics of arrays" (MoA) within the context of Fortran. The idea of this methodology is to express algorithms in such a way that they are independent of the number of dimensions, which will make it easier for the compiler (both the MoA component and the regular compiler) to optimise the program. As the theory is fairly abstract, the initiator of this group, Lenore Mullin, is preparing an introduction to the theory and its applications.

    If you are interested, let us know or join the Slack channel, "MoA Global Team".

    To get a bit better acquainted with the idea, here are a few references: https://docs.google.com/presentation/u/0/d/1g4TQB57NbmueIcl4icnkqIlVU5xSnB6Fkr-9IW8MzTA/mobilepresent
    https://arxiv.org/abs/1904.02612 https://downloads.hindawi.com/journals/sp/2013/672424.pdf
    .
    This was done in 1954 on the Pilot ACE, and from 1955 on DEUCE,
    where it became widely used.

  • From Robin Vowels@21:1/5 to arjen.m...@gmail.com on Sun Aug 15 02:04:00 2021
    On Thursday, August 12, 2021 at 6:38:39 PM UTC+10, arjen.m...@gmail.com wrote:
    Some weeks ago a group was created on Slack to discuss the possibilities of "mathematics of arrays" (MoA) within the context of Fortran. The idea of this methodology is to express algorithms in such a way that they are independent of the number of dimensions, which will make it easier for the compiler (both the MoA component and the regular compiler) to optimise the program. As the theory is fairly abstract, the initiator of this group, Lenore Mullin, is preparing an introduction to the theory and its applications.
    .
    Upthread I already mentioned that this idea is not new,
    having been in use on Pilot ACE (1954) and widely used
    on DEUCE (1955).
    .
    It was also used in PL/I-F (1966) on the IBM 360 where, for example,
    the whole-array operation
    A = B + C;
    with each operand declared as a matrix, produced the following code:
    000064 78 07 B 0F8 LE 0,VO..B(7)
    000068 7A 06 B 160 AE 0,VO..C(6)
    00006C 70 08 B 090 STE 0,VO..A(8)
    .
    to perform the floating-point addition
    in a single loop, the index registers being incremented
    each time around the loop. [Compiled with PL/I-F courtesy
    of Markus Loew.]

  • From gah4@21:1/5 to All on Sun Aug 15 02:44:59 2021
    There are three links listed, so maybe they don't all say the same thing.
    For one of them you had to request the paper, which I did.

    The third one relates to ways of storing data when using the FFT, which I don't think is related to what the ACE did in 1955.

    There is a not unusual case where you take some data, apply an FFT, operate
    on the resulting data, and apply the inverse FFT. It turns out that the FFT is more efficient if you arrange the resulting data in a different order: specifically, the order that you would get if you reversed the bits of the
    (0-origin) index. Since many operations can be done on data
    in that order, it is more efficient to leave it that way.
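    For illustration, the bit-reversed ordering described above can be sketched in a few lines of Python (the function name here is purely illustrative, not from any post or library):

```python
# Bit-reversal permutation for a length-8 (2**3) sequence: the order in
# which a radix-2 FFT naturally produces (or consumes) its data.
def bit_reverse(i, nbits):
    # Reverse the low `nbits` bits of the 0-origin index i.
    r = 0
    for _ in range(nbits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

nbits = 3
order = [bit_reverse(i, nbits) for i in range(2 ** nbits)]
print(order)  # [0, 4, 2, 6, 1, 5, 3, 7]
```

    Applying the permutation twice restores the natural order, which is why leaving intermediate results bit-reversed costs nothing.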

    I am then reminded of the way, I believe, Python and Octave store arrays.
    When you apply transpose to an array, instead of actually transposing the
    data, it just remembers, maybe with one bit, that it is transposed.

    Then in future operations, checking that bit, it indexes the array appropriately.

    But you can generalize this into many different array storage methods, along with remembering in which way they are allocated. One can efficiently store triangular matrices.

    Now, how many different storage arrangements do you allow for?
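    As one concrete arrangement, packed storage for a lower-triangular matrix can be sketched as follows (a standard row-major packing; the helper name is illustrative only):

```python
# Packed storage for an n-by-n lower-triangular matrix: only the
# n*(n+1)/2 nonzero entries are kept, row by row.
def tri_index(i, j):
    # Map 0-origin (i, j), j <= i, into the packed vector.
    assert j <= i
    return i * (i + 1) // 2 + j

n = 4
packed = [0.0] * (n * (n + 1) // 2)

# Fill L(i, j) = 10*i + j over the lower triangle.
for i in range(n):
    for j in range(i + 1):
        packed[tri_index(i, j)] = 10 * i + j

print(packed[tri_index(3, 1)])  # 31
```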

  • From Ron Shepard@21:1/5 to All on Sun Aug 15 13:10:43 2021
    On 8/15/21 4:44 AM, gah4 wrote:
    I am then reminded of the way, I believe, Python and Octave store arrays. When you apply transpose to an array, instead of actually transposing the data, it just remembers, maybe with one bit, that it is transposed.

    I did not know that Python and Octave did that. However, I have always
    thought that Fortran missed this opportunity with the introduction of
    array operations in f90. If the language had allowed programmer access
    to the array metadata, then operations such as transpose and complex conjugation could be done with O(1) effort rather than O(N) effort (for
    N being the number of elements). It would also have been possible to use
    those operations on the left-hand side of statements, the same way as
    occurs in standard math notation.

    A = Fast_Transpose( array expression )

    Fast_Transpose(A) = array expression

    This already occurs in the language in some cases, but it is not under
    control of the programmer, it is just a compiler optimization step. To
    give an example, consider the following statement

    call sub( Transpose(A), B )

    where the first argument is intent(in). If you check the memory locations of
    A(1,1) before the call and inside the subroutine, they are sometimes the
    same. That means the compiler has recognized the opportunity and,
    instead of generating a temporary array for the first argument (which
    requires O(N) effort to allocate and copy the contents), it just fiddles with the array metadata and uses the original data storage
    (which requires only O(1) effort).

    That is a good idea! I just wish I could do that same thing directly as
    a programmer.
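    For comparison, the wished-for "Fast_Transpose(A) = array expression" is expressible in NumPy today, because `a.T` is a writable view; a minimal sketch:

```python
import numpy as np

a = np.zeros((2, 3))
b = np.array([[1., 2.], [3., 4.], [5., 6.]])  # shape (3, 2)

# "Fast_Transpose(A) = array expression": assign through the
# transpose view instead of moving any data beforehand.
a.T[:, :] = b

print(a[0, 1])  # 3.0  (a is now the transpose of b)
```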

    $.02 -Ron Shepard

  • From Robin Vowels@21:1/5 to Ron Shepard on Sun Aug 15 19:24:18 2021
    On Monday, August 16, 2021 at 4:10:49 AM UTC+10, Ron Shepard wrote:
    On 8/15/21 4:44 AM, gah4 wrote:
    I am then reminded of the way, I believe, Python and Octave store arrays. When you apply transpose to an array, instead of actually transposing the data, it just remembers, maybe with one bit, that it is transposed.
    I did not know that Python and Octave did that. However, I have always thought that fortran missed this opportunity with the introduction of
    array operations in f90. If the language had allowed programmer access
    to the array metadata,
    .
    It does.
    Try LBOUND and UBOUND.
    .
    then operations such as transpose and complex
    conjugation could be done with O(1) effort rather than O(N) effort (for
    N being the number of elements). It would also have been possible to use those operations on the left-hand side of statements, the same way as
    occurs in standard math notation.

    A = Fast_Transpose( array expression )

    Fast_Transpose(A) = array expression

    This already occurs in the language in some cases, but it is not under control of the programmer, it is just a compiler optimization step. To
    give an example, consider the following statement

    call sub( Transpose(A), B )

    where the first argument is intent(in). If you check the memory locations of A(1,1) before the call and inside the subroutine, they are sometimes the same. That means the compiler has recognized the opportunity and,
    instead of generating a temporary array for the first argument (which requires O(N) effort to allocate and copy the contents), it just fiddles with the array metadata and uses the original data storage
    (which requires only O(1) effort).

    That is a good idea! I just wish I could do that same thing directly as
    a programmer.

  • From Jos Bergervoet@21:1/5 to Robin Vowels on Thu Aug 19 23:04:21 2021
    On 21/08/16 4:24 AM, Robin Vowels wrote:
    On Monday, August 16, 2021 at 4:10:49 AM UTC+10, Ron Shepard wrote:
    On 8/15/21 4:44 AM, gah4 wrote:
    .
    I am then reminded of the way, I believe, Python and Octave store arrays. When you apply transpose to an array, instead of actually transposing the data, it just remembers, maybe with one bit, that it is transposed.
    .
    I did not know that Python and Octave did that. However, I have always
    thought that fortran missed this opportunity with the introduction of
    array operations in f90. If the language had allowed programmer access
    to the array metadata,
    .
    It does.
    Try LBOUND and UBOUND.

    That's only part of the array descriptor metadata (the stride is still
    missing). And it gives the user only read access.

    --
    Jos

  • From gah4@21:1/5 to Jos Bergervoet on Thu Aug 19 16:54:40 2021
    On Thursday, August 19, 2021 at 2:08:04 PM UTC-7, Jos Bergervoet wrote:
    On 21/08/16 4:24 AM, Robin Vowels wrote:

    (snip)
    It does.
    Try LBOUND and UBOUND.

    That's only part of the array descriptor metadata (the stride is still missing). And it gives the user only read access.

    Yes, it would be interesting to have an extension, or alternative, to C_F_POINTER that would allow one to specify the strides separately,
    such that one could arrange for all the different ways of addressing
    array elements.

    Reminds me that the VAX defines an array descriptor, to be used for
    pass-by-descriptor subroutine calls. It includes both the virtual
    origin (needed for PL/I) and the actual origin (needed for Fortran).

    In PL/I, when you pass an array to a procedure, it keeps its lower
    bound, unlike Fortran. (Except when it does.) There is no provision
    for run-time generation of such, though.
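    NumPy, for what it is worth, exposes roughly this extension: `numpy.lib.stride_tricks.as_strided` builds a view on an existing buffer with shape and strides given separately (and with no bounds checking, so it is as unsafe as a raw descriptor). A minimal sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(12, dtype=np.int64).reshape(3, 4)  # strides (32, 8)

# Hand-built "descriptor": swap the strides to view the same
# buffer transposed, without touching the data.
t = as_strided(a, shape=(4, 3), strides=(8, 32))

print(t[1, 2])  # 9  == a[2, 1]
```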

  • From Robin Vowels@21:1/5 to All on Thu Aug 19 21:18:14 2021
    On Friday, August 20, 2021 at 9:54:41 AM UTC+10, gah4 wrote:
    On Thursday, August 19, 2021 at 2:08:04 PM UTC-7, Jos Bergervoet wrote:
    On 21/08/16 4:24 AM, Robin Vowels wrote:
    (snip)
    It does.
    Try LBOUND and UBOUND.

    That's only part of the array descriptor metadata (the stride is still missing). And it gives the user only read access.
    Yes, it would be interesting to have an extension, or alternative, to C_F_POINTER that would allow one to specify the strides separately,
    such that one could arrange for all the different ways of addressing
    array elements.

    Reminds me that the VAX defines an array descriptor, to be used for
    pass-by-descriptor subroutine calls. It includes both the virtual
    origin (needed for PL/I) and the actual origin (needed for Fortran).

    In PL/I, when you pass an array to a procedure, it keeps its lower
    bound, unlike Fortran. (Except when it does.) There is no provision
    for run-time generation of such, though.
    .
    PL/I passes both its lower bound and its upper bound
    (along with the array address, of course).

  • From Arjen Markus@21:1/5 to All on Fri Aug 20 00:34:18 2021
    On Friday, August 20, 2021 at 1:54:41 AM UTC+2, gah4 wrote:
    On Thursday, August 19, 2021 at 2:08:04 PM UTC-7, Jos Bergervoet wrote:
    On 21/08/16 4:24 AM, Robin Vowels wrote:
    (snip)
    It does.
    Try LBOUND and UBOUND.

    That's only part of the array descriptor metadata (the stride is still missing). And it gives the user only read access.
    Yes, it would be interesting to have an extension, or alternative, to C_F_POINTER that would allow one to specify the strides separately,
    such that one could arrange for all the different ways of addressing
    array elements.

    Reminds me that the VAX defines an array descriptor, to be used for
    pass-by-descriptor subroutine calls. It includes both the virtual
    origin (needed for PL/I) and the actual origin (needed for Fortran).

    In PL/I, when you pass an array to a procedure, it keeps its lower
    bound, unlike Fortran. (Except when it does.) There is no provision
    for run-time generation of such, though.

    I posted a small program that uses "pointer functions" to achieve access to an array in a "direct" and a "transposed" way, both read and write. Here it is:

    ! moa_access_func.f90 --
    !     Use a pointer function to transpose a matrix
    !
    module myfunc
        implicit none

        real, dimension(3,10), target :: array

        type moa
            logical :: transpose
            real, dimension(:,:), pointer :: array
        contains
            procedure :: init => init_moa
            procedure :: elem => elem_moa
        end type moa

    contains

        subroutine init_moa( me, array, transpose )
            class(moa), intent(inout) :: me
            real, dimension(:,:), target, intent(in) :: array
            logical, intent(in) :: transpose

            me%transpose = transpose
            me%array => array
        end subroutine init_moa

        function elem_moa( me, i, j ) result(elem)
            class(moa), intent(inout) :: me
            integer, intent(in) :: i, j
            real, pointer :: elem

            if ( .not. me%transpose ) then
                elem => me%array(i,j)
            else
                elem => me%array(j,i)
            endif
        end function elem_moa
    end module myfunc

    program test_myfunc
        use myfunc
        implicit none

        type(moa) :: x, xt

        integer :: i, j

        call x%init( array, .false. )
        call xt%init( array, .true. )

        array = 0.0

        do j = 1,10
            x%elem(1,j) = j
        enddo

        ! Now change an element "transpose-wise"
        xt%elem(1,3) = -1.0

        write(*,*) 'Original matrix:'
        write(*,*) 'Row, values'
        write(*,'(i5,a,10f6.2)') (i, ': ', (x%elem(i,j), j = 1,10), i = 1,3)

        write(*,*) 'Transposed matrix:'
        write(*,*) 'Row, values'
        write(*,'(i5,a,3f6.2)') (i, ': ', (xt%elem(i,j), j = 1,3), i = 1,10)
    end program test_myfunc

    I do not claim it to be efficient, but you do get two different "views" on the same array.
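    The same two-views idea can be sketched in a few lines of Python for comparison (a hypothetical View class, illustrative only, not any library's API):

```python
class View:
    # A "view" on a shared list-of-lists matrix; the transpose flag
    # is metadata, as in the Fortran example above.
    def __init__(self, data, transpose=False):
        self.data = data
        self.transpose = transpose

    def get(self, i, j):
        return self.data[j][i] if self.transpose else self.data[i][j]

    def set(self, i, j, v):
        if self.transpose:
            self.data[j][i] = v
        else:
            self.data[i][j] = v

m = [[0.0] * 3 for _ in range(2)]
x, xt = View(m), View(m, transpose=True)

xt.set(2, 1, -1.0)   # write "transpose-wise"...
print(x.get(1, 2))   # -1.0: ...and it shows up in the direct view
```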

    Regards,

    Arjen

  • From gah4@21:1/5 to All on Fri Aug 20 00:17:25 2021
    On Thursday, August 19, 2021 at 9:18:15 PM UTC-7, Robin Vowels wrote:

    (snip, I wrote)
    In PL/I, when you pass an array to a procedure, it keeps its lower
    bound, unlike Fortran. (Except when it does.) There is no provision
    for run-time generation of such, though.

    PL/I passes both its lower bound and its upper bound
    (along with the array address, of course).

    Yes. Exactly what Fortran passes isn't so obvious, but in many cases the called routine
    gets an array with lower bound 1 for each dimension. It might be that it passes the lower
    bounds, and then they are remapped on the other side.

    Also, Fortran array expressions always have a lower bound of 1
    on each dimension, even when the arrays in the expression have other
    lower bounds.
