• Odd glMultiDrawArrays behaviour...

  • From Nobody@21:1/5 to All on Sat Jul 9 06:10:21 2016
    On Fri, 08 Jul 2016 12:31:37 +0100, John Irwin wrote:

    So:

    a) glGetError() isn't reporting any errors, and
    b) if you replace the glMultiDrawArrays() call with a call to the function:

        void myMultiDrawArrays(GLenum mode, const GLint* first,
                               const GLsizei* count, GLsizei drawcount)
        {
            GLsizei i;
            for (i = 0; i < drawcount; i++)
                glDrawArrays(mode, first[i], count[i]);
        }

    then everything works, just inefficiently?

    If that's the case, I can't see how this can be anything other than a
    driver bug.

    glMultiDrawArrays() isn't affected by any GL state beyond that which
    also affects glDrawArrays().

  • From John Irwin@21:1/5 to All on Fri Jul 8 12:31:37 2016
    Does anyone know why glMultiDrawArrays is causing me problems when I use
    it to render line-strips from a vertex buffer containing more than 65536
    vertices? There is no crash, but the results are obviously wrong.

    The line-strips are stored sequentially in a single vertex buffer which,
    depending on user interaction, will usually contain more than this number
    of vertices, so it is possible for the "first" array to contain starting
    indices of 65536 or more. Could the indices be getting converted to
    16-bit values internally even though the input values are 32-bit? It
    looks as if only those line-strips referenced by indices of 65536 or more
    are rendered incorrectly; the strips before that point are fine.
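
    To make the layout concrete, here is a minimal sketch of the kind of
    arrays involved; the strip count and strip length are made-up values for
    illustration, not the actual data:

        /* Hypothetical setup: strips stored back to back in one vertex
           buffer, so later strips start at offsets well above 65535. */
        #include <GL/glew.h>           /* or any loader exposing GL 1.4+ */

        #define NUM_STRIPS      3000   /* assumed: thousands of strips   */
        #define VERTS_PER_STRIP 80     /* assumed: <100 vertices each    */

        static GLint   first[NUM_STRIPS];
        static GLsizei count[NUM_STRIPS];

        static void drawStrips(void)
        {
            GLsizei i;
            for (i = 0; i < NUM_STRIPS; i++) {
                first[i] = i * VERTS_PER_STRIP;   /* grows past 65535 */
                count[i] = VERTS_PER_STRIP;
            }
            /* vertex buffer and attribute pointers assumed already bound */
            glMultiDrawArrays(GL_LINE_STRIP, first, count, NUM_STRIPS);
        }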

    I suspect it's a problem with glMultiDrawArrays itself rather than the
    data I supply to it. I can render from the same buffer with multiple
    calls to glDrawArrays, one for each strip, and the results are correct.
    But because the strips are generally quite small (<100 vertices each) and
    there are thousands of them, this would mean calling glDrawArrays
    thousands of times in each rendering pass, which is far from efficient.

    I cannot find any reference to this problem either online or in the
    OpenGL documentation, so it may just be a limitation, if not a bug, in
    the OpenGL implementation used by my graphics card. But I'd be interested
    to hear from anyone who can confirm this behaviour or shed some light on
    what might be going on here. Thanks very much.

    John.

  • From John Irwin@21:1/5 to Nobody on Sun Jul 10 10:43:44 2016
    "Nobody" <nobody@nowhere.invalid> wrote in message news:pan.2016.07.09.05.10.20.65000@nowhere.invalid...
    > On Fri, 08 Jul 2016 12:31:37 +0100, John Irwin wrote:
    >
    > So:
    >
    > a) glGetError() isn't reporting any errors, and
    > b) if you replace the glMultiDrawArrays() call with a call to the function:
    >
    >     void myMultiDrawArrays(GLenum mode, const GLint* first,
    >                            const GLsizei* count, GLsizei drawcount)
    >     {
    >         GLsizei i;
    >         for (i = 0; i < drawcount; i++)
    >             glDrawArrays(mode, first[i], count[i]);
    >     }
    >
    > then everything works, just inefficiently?

    That's the gist of it...

    > If that's the case, I can't see how this can be anything other than a
    > driver bug.

    Thanks for your feedback. I suspect it's a driver bug too. Unfortunately,
    my graphics card is no longer supported, so there is no prospect of an
    updated driver.

    However, I've got glMultiDrawElements working with my vertex data, which
    is a relief, as that function requires the data type of the indices to be
    specified explicitly. The downside is that I need to work with a large
    index buffer containing the consecutive integers 0,1,2... which, in a
    sane world, would be considered redundant. But it seems I have little
    choice in the matter.
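
    In case it helps anyone hitting the same problem, here is a minimal
    sketch of that workaround; the strip count, strip length and names are
    illustrative assumptions, not the actual code:

        /* Illustrative workaround: build an "identity" index buffer
           0,1,2,... and draw the strips with glMultiDrawElements, which
           lets the 32-bit index type be stated explicitly. */
        #include <GL/glew.h>           /* or any loader exposing GL 1.5+ */
        #include <stdlib.h>

        #define NUM_STRIPS      3000   /* assumed values for illustration */
        #define VERTS_PER_STRIP 80
        #define TOTAL_VERTS     (NUM_STRIPS * VERTS_PER_STRIP)

        static GLsizei       counts[NUM_STRIPS];
        static const GLvoid* offsets[NUM_STRIPS]; /* byte offsets into the IBO */
        static GLuint        ibo;

        static void setupIndexBuffer(void)
        {
            GLuint* indices = malloc(TOTAL_VERTS * sizeof(GLuint));
            GLsizei i;

            for (i = 0; i < TOTAL_VERTS; i++)
                indices[i] = (GLuint)i;            /* consecutive 0,1,2,... */

            glGenBuffers(1, &ibo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, TOTAL_VERTS * sizeof(GLuint),
                         indices, GL_STATIC_DRAW);
            free(indices);

            for (i = 0; i < NUM_STRIPS; i++) {
                counts[i]  = VERTS_PER_STRIP;
                offsets[i] = (const GLvoid*)(size_t)(i * VERTS_PER_STRIP * sizeof(GLuint));
            }
        }

        static void drawStrips(void)
        {
            /* vertex buffer, attribute pointers and the IBO assumed bound */
            glMultiDrawElements(GL_LINE_STRIP, counts, GL_UNSIGNED_INT,
                                offsets, NUM_STRIPS);
        }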

    John.
