On Fri, 08 Jul 2016 12:31:37 +0100, John Irwin wrote:
So:
a) glGetError() isn't reporting any errors, and
b) if you replace the glMultiDrawArrays() call with a call to the
function:
void myMultiDrawArrays(GLenum mode, const GLint* first,
                       const GLsizei* count, GLsizei drawcount)
{
    GLsizei i;
    /* Issue one glDrawArrays call per strip -- equivalent, just slower. */
    for (i = 0; i < drawcount; i++)
        glDrawArrays(mode, first[i], count[i]);
}
then everything works, just inefficiently?
If that's the case, I can't see how this can be anything other than a
driver bug.
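One thing worth double-checking on point (a): glGetError() only returns one error flag per call, so a single check can miss queued errors. A minimal sketch of draining the whole queue right after the draw (assumes <stdio.h>; GL_LINE_STRIP and the first/count/drawcount variables are taken from your description):

GLenum err;
glMultiDrawArrays(GL_LINE_STRIP, first, count, drawcount);
/* glGetError() reports one flag at a time; loop until the queue is empty. */
while ((err = glGetError()) != GL_NO_ERROR)
    fprintf(stderr, "GL error after glMultiDrawArrays: 0x%04x\n", err);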
Does anyone know why glMultiDrawArrays is causing me problems when I use it to render line-strips from a vertex buffer containing more than 65536 vertices? There's no crash, but the results are obviously wrong.
The line-strips are stored sequentially in a single vertex buffer which will usually contain more than this number of vertices, depending on user interaction. So it's possible for the "first" array to contain vertex indices of 65536 or higher. Could the indices be getting truncated to 16-bit values even though the input values are 32-bit? It looks like only those line-strips referenced with starting indices of 65536 or higher are rendered incorrectly; the strips before that point are fine.
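For illustration, the setup is roughly like this (a simplified sketch, not my actual code; NUM_STRIPS and strip_length[] stand in for values that depend on user interaction):

GLint   first[NUM_STRIPS];    /* 32-bit start offsets into the vertex buffer */
GLsizei count[NUM_STRIPS];    /* vertices per strip, typically < 100         */
GLint   offset = 0;
int     s;
for (s = 0; s < NUM_STRIPS; s++) {
    first[s] = offset;            /* can exceed 65535 once the buffer grows  */
    count[s] = strip_length[s];
    offset  += strip_length[s];
}
glMultiDrawArrays(GL_LINE_STRIP, first, count, NUM_STRIPS);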
I suspect it's a problem with glMultiDrawArrays itself rather than the data
I supply to it. I can render from the same buffer with multiple calls to glDrawArrays, one for each strip, and the results are ok. But because the strips are in general quite small (<100 vertices each) and there are thousands of strips, this would mean calling glDrawArrays thousands of times in each rendering pass, which is far from efficient.
I cannot find any reference to this problem either online or in the OpenGL documentation, so it may just be a limitation, if not a bug, in the OpenGL implementation used by my graphics card. But I'd be interested to hear from anyone who can confirm this behaviour or shed some light on what might be going on here. Thanks very much.
John.